Mar 12 20:47:41.774628 master-0 systemd[1]: Starting Kubernetes Kubelet...
Mar 12 20:47:42.578099 master-0 kubenswrapper[4038]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 12 20:47:42.578099 master-0 kubenswrapper[4038]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Mar 12 20:47:42.578099 master-0 kubenswrapper[4038]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 12 20:47:42.578099 master-0 kubenswrapper[4038]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 12 20:47:42.579597 master-0 kubenswrapper[4038]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 12 20:47:42.579597 master-0 kubenswrapper[4038]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
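The deprecation warnings above all point at the config file passed via --config (here /etc/kubernetes/kubelet.conf). A minimal sketch of how these flags map onto KubeletConfiguration fields (v1beta1 field names; the values mirror the flag values logged later in this startup and are illustrative only — on OpenShift this file is rendered by the machine-config operator, not edited by hand):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces --container-runtime-endpoint
containerRuntimeEndpoint: /var/run/crio/crio.sock
# replaces --volume-plugin-dir
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
# replaces --register-with-taints
registerWithTaints:
- key: node-role.kubernetes.io/master
  effect: NoSchedule
# replaces --system-reserved
systemReserved:
  cpu: 500m
  ephemeral-storage: 1Gi
  memory: 1Gi
```

--minimum-container-ttl-duration has no config-file equivalent; as the warning says, eviction thresholds (evictionHard / evictionSoft) are the replacement.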
Mar 12 20:47:42.580870 master-0 kubenswrapper[4038]: I0312 20:47:42.580628 4038 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 12 20:47:42.589671 master-0 kubenswrapper[4038]: W0312 20:47:42.589595 4038 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 12 20:47:42.589671 master-0 kubenswrapper[4038]: W0312 20:47:42.589645 4038 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 12 20:47:42.589671 master-0 kubenswrapper[4038]: W0312 20:47:42.589657 4038 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 12 20:47:42.589671 master-0 kubenswrapper[4038]: W0312 20:47:42.589668 4038 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 12 20:47:42.589671 master-0 kubenswrapper[4038]: W0312 20:47:42.589680 4038 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 12 20:47:42.590274 master-0 kubenswrapper[4038]: W0312 20:47:42.589690 4038 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 12 20:47:42.590274 master-0 kubenswrapper[4038]: W0312 20:47:42.589700 4038 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 12 20:47:42.590274 master-0 kubenswrapper[4038]: W0312 20:47:42.589715 4038 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 12 20:47:42.590274 master-0 kubenswrapper[4038]: W0312 20:47:42.589725 4038 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 12 20:47:42.590274 master-0 kubenswrapper[4038]: W0312 20:47:42.589734 4038 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 12 20:47:42.590274 master-0 kubenswrapper[4038]: W0312 20:47:42.589745 4038 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 12 20:47:42.590274 master-0 kubenswrapper[4038]: W0312 20:47:42.589755 4038 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 12 20:47:42.590274 master-0 kubenswrapper[4038]: W0312 20:47:42.589765 4038 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 12 20:47:42.590274 master-0 kubenswrapper[4038]: W0312 20:47:42.589775 4038 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 12 20:47:42.590274 master-0 kubenswrapper[4038]: W0312 20:47:42.589784 4038 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 12 20:47:42.590274 master-0 kubenswrapper[4038]: W0312 20:47:42.589795 4038 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 12 20:47:42.590274 master-0 kubenswrapper[4038]: W0312 20:47:42.589844 4038 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 12 20:47:42.590274 master-0 kubenswrapper[4038]: W0312 20:47:42.589862 4038 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 12 20:47:42.590274 master-0 kubenswrapper[4038]: W0312 20:47:42.589875 4038 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 12 20:47:42.590274 master-0 kubenswrapper[4038]: W0312 20:47:42.589885 4038 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 12 20:47:42.590274 master-0 kubenswrapper[4038]: W0312 20:47:42.589897 4038 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 12 20:47:42.590274 master-0 kubenswrapper[4038]: W0312 20:47:42.589909 4038 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 12 20:47:42.590274 master-0 kubenswrapper[4038]: W0312 20:47:42.589919 4038 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 12 20:47:42.590274 master-0 kubenswrapper[4038]: W0312 20:47:42.589930 4038 feature_gate.go:330] unrecognized feature gate: Example
Mar 12 20:47:42.590274 master-0 kubenswrapper[4038]: W0312 20:47:42.589940 4038 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 12 20:47:42.591393 master-0 kubenswrapper[4038]: W0312 20:47:42.589952 4038 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 12 20:47:42.591393 master-0 kubenswrapper[4038]: W0312 20:47:42.589963 4038 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 12 20:47:42.591393 master-0 kubenswrapper[4038]: W0312 20:47:42.589973 4038 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 12 20:47:42.591393 master-0 kubenswrapper[4038]: W0312 20:47:42.589983 4038 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 12 20:47:42.591393 master-0 kubenswrapper[4038]: W0312 20:47:42.589993 4038 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 12 20:47:42.591393 master-0 kubenswrapper[4038]: W0312 20:47:42.590003 4038 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 12 20:47:42.591393 master-0 kubenswrapper[4038]: W0312 20:47:42.590013 4038 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 12 20:47:42.591393 master-0 kubenswrapper[4038]: W0312 20:47:42.590023 4038 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 12 20:47:42.591393 master-0 kubenswrapper[4038]: W0312 20:47:42.590042 4038 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 12 20:47:42.591393 master-0 kubenswrapper[4038]: W0312 20:47:42.590052 4038 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 12 20:47:42.591393 master-0 kubenswrapper[4038]: W0312 20:47:42.590061 4038 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 12 20:47:42.591393 master-0 kubenswrapper[4038]: W0312 20:47:42.590071 4038 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 12 20:47:42.591393 master-0 kubenswrapper[4038]: W0312 20:47:42.590084 4038 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 12 20:47:42.591393 master-0 kubenswrapper[4038]: W0312 20:47:42.590093 4038 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 12 20:47:42.591393 master-0 kubenswrapper[4038]: W0312 20:47:42.590104 4038 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 12 20:47:42.591393 master-0 kubenswrapper[4038]: W0312 20:47:42.590113 4038 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 12 20:47:42.591393 master-0 kubenswrapper[4038]: W0312 20:47:42.590123 4038 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 12 20:47:42.591393 master-0 kubenswrapper[4038]: W0312 20:47:42.590132 4038 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 12 20:47:42.591393 master-0 kubenswrapper[4038]: W0312 20:47:42.590142 4038 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 12 20:47:42.591393 master-0 kubenswrapper[4038]: W0312 20:47:42.590151 4038 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 12 20:47:42.592631 master-0 kubenswrapper[4038]: W0312 20:47:42.590163 4038 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 12 20:47:42.592631 master-0 kubenswrapper[4038]: W0312 20:47:42.590173 4038 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 12 20:47:42.592631 master-0 kubenswrapper[4038]: W0312 20:47:42.590183 4038 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 12 20:47:42.592631 master-0 kubenswrapper[4038]: W0312 20:47:42.590193 4038 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 12 20:47:42.592631 master-0 kubenswrapper[4038]: W0312 20:47:42.590203 4038 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 12 20:47:42.592631 master-0 kubenswrapper[4038]: W0312 20:47:42.590214 4038 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 12 20:47:42.592631 master-0 kubenswrapper[4038]: W0312 20:47:42.590225 4038 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 12 20:47:42.592631 master-0 kubenswrapper[4038]: W0312 20:47:42.590235 4038 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 12 20:47:42.592631 master-0 kubenswrapper[4038]: W0312 20:47:42.590249 4038 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 12 20:47:42.592631 master-0 kubenswrapper[4038]: W0312 20:47:42.590262 4038 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 12 20:47:42.592631 master-0 kubenswrapper[4038]: W0312 20:47:42.590273 4038 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 12 20:47:42.592631 master-0 kubenswrapper[4038]: W0312 20:47:42.590284 4038 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 12 20:47:42.592631 master-0 kubenswrapper[4038]: W0312 20:47:42.590295 4038 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 12 20:47:42.592631 master-0 kubenswrapper[4038]: W0312 20:47:42.590310 4038 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 12 20:47:42.592631 master-0 kubenswrapper[4038]: W0312 20:47:42.590321 4038 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 12 20:47:42.592631 master-0 kubenswrapper[4038]: W0312 20:47:42.590334 4038 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 12 20:47:42.592631 master-0 kubenswrapper[4038]: W0312 20:47:42.590348 4038 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 12 20:47:42.592631 master-0 kubenswrapper[4038]: W0312 20:47:42.590363 4038 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 12 20:47:42.592631 master-0 kubenswrapper[4038]: W0312 20:47:42.590375 4038 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 12 20:47:42.593996 master-0 kubenswrapper[4038]: W0312 20:47:42.590386 4038 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 12 20:47:42.593996 master-0 kubenswrapper[4038]: W0312 20:47:42.590396 4038 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 12 20:47:42.593996 master-0 kubenswrapper[4038]: W0312 20:47:42.590407 4038 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 12 20:47:42.593996 master-0 kubenswrapper[4038]: W0312 20:47:42.590418 4038 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 12 20:47:42.593996 master-0 kubenswrapper[4038]: W0312 20:47:42.590428 4038 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 12 20:47:42.593996 master-0 kubenswrapper[4038]: W0312 20:47:42.590441 4038 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 12 20:47:42.593996 master-0 kubenswrapper[4038]: W0312 20:47:42.590451 4038 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 12 20:47:42.593996 master-0 kubenswrapper[4038]: W0312 20:47:42.590461 4038 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 12 20:47:42.593996 master-0 kubenswrapper[4038]: I0312 20:47:42.591765 4038 flags.go:64] FLAG: --address="0.0.0.0"
Mar 12 20:47:42.593996 master-0 kubenswrapper[4038]: I0312 20:47:42.591800 4038 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 12 20:47:42.593996 master-0 kubenswrapper[4038]: I0312 20:47:42.591867 4038 flags.go:64] FLAG: --anonymous-auth="true"
Mar 12 20:47:42.593996 master-0 kubenswrapper[4038]: I0312 20:47:42.591883 4038 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 12 20:47:42.593996 master-0 kubenswrapper[4038]: I0312 20:47:42.591898 4038 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 12 20:47:42.593996 master-0 kubenswrapper[4038]: I0312 20:47:42.591911 4038 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 12 20:47:42.593996 master-0 kubenswrapper[4038]: I0312 20:47:42.591928 4038 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 12 20:47:42.593996 master-0 kubenswrapper[4038]: I0312 20:47:42.591942 4038 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 12 20:47:42.593996 master-0 kubenswrapper[4038]: I0312 20:47:42.591954 4038 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 12 20:47:42.593996 master-0 kubenswrapper[4038]: I0312 20:47:42.591966 4038 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 12 20:47:42.593996 master-0 kubenswrapper[4038]: I0312 20:47:42.591982 4038 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 12 20:47:42.593996 master-0 kubenswrapper[4038]: I0312 20:47:42.591996 4038 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 12 20:47:42.593996 master-0 kubenswrapper[4038]: I0312 20:47:42.592008 4038 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 12 20:47:42.593996 master-0 kubenswrapper[4038]: I0312 20:47:42.592020 4038 flags.go:64] FLAG: --cgroup-root=""
Mar 12 20:47:42.595446 master-0 kubenswrapper[4038]: I0312 20:47:42.592031 4038 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 12 20:47:42.595446 master-0 kubenswrapper[4038]: I0312 20:47:42.592043 4038 flags.go:64] FLAG: --client-ca-file=""
Mar 12 20:47:42.595446 master-0 kubenswrapper[4038]: I0312 20:47:42.592055 4038 flags.go:64] FLAG: --cloud-config=""
Mar 12 20:47:42.595446 master-0 kubenswrapper[4038]: I0312 20:47:42.592066 4038 flags.go:64] FLAG: --cloud-provider=""
Mar 12 20:47:42.595446 master-0 kubenswrapper[4038]: I0312 20:47:42.592078 4038 flags.go:64] FLAG: --cluster-dns="[]"
Mar 12 20:47:42.595446 master-0 kubenswrapper[4038]: I0312 20:47:42.592093 4038 flags.go:64] FLAG: --cluster-domain=""
Mar 12 20:47:42.595446 master-0 kubenswrapper[4038]: I0312 20:47:42.592104 4038 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 12 20:47:42.595446 master-0 kubenswrapper[4038]: I0312 20:47:42.592116 4038 flags.go:64] FLAG: --config-dir=""
Mar 12 20:47:42.595446 master-0 kubenswrapper[4038]: I0312 20:47:42.592128 4038 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 12 20:47:42.595446 master-0 kubenswrapper[4038]: I0312 20:47:42.592140 4038 flags.go:64] FLAG: --container-log-max-files="5"
Mar 12 20:47:42.595446 master-0 kubenswrapper[4038]: I0312 20:47:42.592156 4038 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 12 20:47:42.595446 master-0 kubenswrapper[4038]: I0312 20:47:42.592168 4038 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 12 20:47:42.595446 master-0 kubenswrapper[4038]: I0312 20:47:42.592180 4038 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 12 20:47:42.595446 master-0 kubenswrapper[4038]: I0312 20:47:42.592194 4038 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 12 20:47:42.595446 master-0 kubenswrapper[4038]: I0312 20:47:42.592205 4038 flags.go:64] FLAG: --contention-profiling="false"
Mar 12 20:47:42.595446 master-0 kubenswrapper[4038]: I0312 20:47:42.592218 4038 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 12 20:47:42.595446 master-0 kubenswrapper[4038]: I0312 20:47:42.592229 4038 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 12 20:47:42.595446 master-0 kubenswrapper[4038]: I0312 20:47:42.592241 4038 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 12 20:47:42.595446 master-0 kubenswrapper[4038]: I0312 20:47:42.592253 4038 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 12 20:47:42.595446 master-0 kubenswrapper[4038]: I0312 20:47:42.592268 4038 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 12 20:47:42.595446 master-0 kubenswrapper[4038]: I0312 20:47:42.592280 4038 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 12 20:47:42.595446 master-0 kubenswrapper[4038]: I0312 20:47:42.592292 4038 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 12 20:47:42.595446 master-0 kubenswrapper[4038]: I0312 20:47:42.592302 4038 flags.go:64] FLAG: --enable-load-reader="false"
Mar 12 20:47:42.595446 master-0 kubenswrapper[4038]: I0312 20:47:42.592315 4038 flags.go:64] FLAG: --enable-server="true"
Mar 12 20:47:42.595446 master-0 kubenswrapper[4038]: I0312 20:47:42.592326 4038 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 12 20:47:42.596968 master-0 kubenswrapper[4038]: I0312 20:47:42.592341 4038 flags.go:64] FLAG: --event-burst="100"
Mar 12 20:47:42.596968 master-0 kubenswrapper[4038]: I0312 20:47:42.592353 4038 flags.go:64] FLAG: --event-qps="50"
Mar 12 20:47:42.596968 master-0 kubenswrapper[4038]: I0312 20:47:42.592365 4038 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 12 20:47:42.596968 master-0 kubenswrapper[4038]: I0312 20:47:42.592377 4038 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 12 20:47:42.596968 master-0 kubenswrapper[4038]: I0312 20:47:42.592389 4038 flags.go:64] FLAG: --eviction-hard=""
Mar 12 20:47:42.596968 master-0 kubenswrapper[4038]: I0312 20:47:42.592403 4038 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 12 20:47:42.596968 master-0 kubenswrapper[4038]: I0312 20:47:42.592414 4038 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 12 20:47:42.596968 master-0 kubenswrapper[4038]: I0312 20:47:42.592429 4038 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 12 20:47:42.596968 master-0 kubenswrapper[4038]: I0312 20:47:42.592442 4038 flags.go:64] FLAG: --eviction-soft=""
Mar 12 20:47:42.596968 master-0 kubenswrapper[4038]: I0312 20:47:42.592454 4038 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 12 20:47:42.596968 master-0 kubenswrapper[4038]: I0312 20:47:42.592465 4038 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 12 20:47:42.596968 master-0 kubenswrapper[4038]: I0312 20:47:42.592476 4038 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 12 20:47:42.596968 master-0 kubenswrapper[4038]: I0312 20:47:42.592488 4038 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 12 20:47:42.596968 master-0 kubenswrapper[4038]: I0312 20:47:42.592499 4038 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 12 20:47:42.596968 master-0 kubenswrapper[4038]: I0312 20:47:42.592511 4038 flags.go:64] FLAG: --fail-swap-on="true"
Mar 12 20:47:42.596968 master-0 kubenswrapper[4038]: I0312 20:47:42.592522 4038 flags.go:64] FLAG: --feature-gates=""
Mar 12 20:47:42.596968 master-0 kubenswrapper[4038]: I0312 20:47:42.592546 4038 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 12 20:47:42.596968 master-0 kubenswrapper[4038]: I0312 20:47:42.592558 4038 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 12 20:47:42.596968 master-0 kubenswrapper[4038]: I0312 20:47:42.592570 4038 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 12 20:47:42.596968 master-0 kubenswrapper[4038]: I0312 20:47:42.592583 4038 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 12 20:47:42.596968 master-0 kubenswrapper[4038]: I0312 20:47:42.592595 4038 flags.go:64] FLAG: --healthz-port="10248"
Mar 12 20:47:42.596968 master-0 kubenswrapper[4038]: I0312 20:47:42.592607 4038 flags.go:64] FLAG: --help="false"
Mar 12 20:47:42.596968 master-0 kubenswrapper[4038]: I0312 20:47:42.592619 4038 flags.go:64] FLAG: --hostname-override=""
Mar 12 20:47:42.596968 master-0 kubenswrapper[4038]: I0312 20:47:42.592631 4038 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 12 20:47:42.596968 master-0 kubenswrapper[4038]: I0312 20:47:42.592643 4038 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 12 20:47:42.596968 master-0 kubenswrapper[4038]: I0312 20:47:42.592655 4038 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 12 20:47:42.598326 master-0 kubenswrapper[4038]: I0312 20:47:42.592666 4038 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 12 20:47:42.598326 master-0 kubenswrapper[4038]: I0312 20:47:42.592676 4038 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 12 20:47:42.598326 master-0 kubenswrapper[4038]: I0312 20:47:42.592688 4038 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 12 20:47:42.598326 master-0 kubenswrapper[4038]: I0312 20:47:42.592699 4038 flags.go:64] FLAG: --image-service-endpoint=""
Mar 12 20:47:42.598326 master-0 kubenswrapper[4038]: I0312 20:47:42.592710 4038 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 12 20:47:42.598326 master-0 kubenswrapper[4038]: I0312 20:47:42.592721 4038 flags.go:64] FLAG: --kube-api-burst="100"
Mar 12 20:47:42.598326 master-0 kubenswrapper[4038]: I0312 20:47:42.592733 4038 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 12 20:47:42.598326 master-0 kubenswrapper[4038]: I0312 20:47:42.592746 4038 flags.go:64] FLAG: --kube-api-qps="50"
Mar 12 20:47:42.598326 master-0 kubenswrapper[4038]: I0312 20:47:42.592758 4038 flags.go:64] FLAG: --kube-reserved=""
Mar 12 20:47:42.598326 master-0 kubenswrapper[4038]: I0312 20:47:42.592779 4038 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 12 20:47:42.598326 master-0 kubenswrapper[4038]: I0312 20:47:42.592790 4038 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 12 20:47:42.598326 master-0 kubenswrapper[4038]: I0312 20:47:42.592803 4038 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 12 20:47:42.598326 master-0 kubenswrapper[4038]: I0312 20:47:42.592857 4038 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 12 20:47:42.598326 master-0 kubenswrapper[4038]: I0312 20:47:42.592869 4038 flags.go:64] FLAG: --lock-file=""
Mar 12 20:47:42.598326 master-0 kubenswrapper[4038]: I0312 20:47:42.592881 4038 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 12 20:47:42.598326 master-0 kubenswrapper[4038]: I0312 20:47:42.592892 4038 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 12 20:47:42.598326 master-0 kubenswrapper[4038]: I0312 20:47:42.592904 4038 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 12 20:47:42.598326 master-0 kubenswrapper[4038]: I0312 20:47:42.592927 4038 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 12 20:47:42.598326 master-0 kubenswrapper[4038]: I0312 20:47:42.592938 4038 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 12 20:47:42.598326 master-0 kubenswrapper[4038]: I0312 20:47:42.592950 4038 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 12 20:47:42.598326 master-0 kubenswrapper[4038]: I0312 20:47:42.592962 4038 flags.go:64] FLAG: --logging-format="text"
Mar 12 20:47:42.598326 master-0 kubenswrapper[4038]: I0312 20:47:42.592973 4038 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 12 20:47:42.598326 master-0 kubenswrapper[4038]: I0312 20:47:42.592985 4038 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 12 20:47:42.598326 master-0 kubenswrapper[4038]: I0312 20:47:42.592999 4038 flags.go:64] FLAG: --manifest-url=""
Mar 12 20:47:42.598326 master-0 kubenswrapper[4038]: I0312 20:47:42.593010 4038 flags.go:64] FLAG: --manifest-url-header=""
Mar 12 20:47:42.599682 master-0 kubenswrapper[4038]: I0312 20:47:42.593025 4038 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 12 20:47:42.599682 master-0 kubenswrapper[4038]: I0312 20:47:42.593037 4038 flags.go:64] FLAG: --max-open-files="1000000"
Mar 12 20:47:42.599682 master-0 kubenswrapper[4038]: I0312 20:47:42.593051 4038 flags.go:64] FLAG: --max-pods="110"
Mar 12 20:47:42.599682 master-0 kubenswrapper[4038]: I0312 20:47:42.593069 4038 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 12 20:47:42.599682 master-0 kubenswrapper[4038]: I0312 20:47:42.593081 4038 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 12 20:47:42.599682 master-0 kubenswrapper[4038]: I0312 20:47:42.593092 4038 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 12 20:47:42.599682 master-0 kubenswrapper[4038]: I0312 20:47:42.593104 4038 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 12 20:47:42.599682 master-0 kubenswrapper[4038]: I0312 20:47:42.593116 4038 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 12 20:47:42.599682 master-0 kubenswrapper[4038]: I0312 20:47:42.593127 4038 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 12 20:47:42.599682 master-0 kubenswrapper[4038]: I0312 20:47:42.593140 4038 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 12 20:47:42.599682 master-0 kubenswrapper[4038]: I0312 20:47:42.593169 4038 flags.go:64] FLAG: --node-status-max-images="50"
Mar 12 20:47:42.599682 master-0 kubenswrapper[4038]: I0312 20:47:42.593181 4038 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 12 20:47:42.599682 master-0 kubenswrapper[4038]: I0312 20:47:42.593193 4038 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 12 20:47:42.599682 master-0 kubenswrapper[4038]: I0312 20:47:42.593205 4038 flags.go:64] FLAG: --pod-cidr=""
Mar 12 20:47:42.599682 master-0 kubenswrapper[4038]: I0312 20:47:42.593216 4038 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3"
Mar 12 20:47:42.599682 master-0 kubenswrapper[4038]: I0312 20:47:42.593234 4038 flags.go:64] FLAG: --pod-manifest-path=""
Mar 12 20:47:42.599682 master-0 kubenswrapper[4038]: I0312 20:47:42.593250 4038 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 12 20:47:42.599682 master-0 kubenswrapper[4038]: I0312 20:47:42.593262 4038 flags.go:64] FLAG: --pods-per-core="0"
Mar 12 20:47:42.599682 master-0 kubenswrapper[4038]: I0312 20:47:42.593273 4038 flags.go:64] FLAG: --port="10250"
Mar 12 20:47:42.599682 master-0 kubenswrapper[4038]: I0312 20:47:42.593285 4038 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 12 20:47:42.599682 master-0 kubenswrapper[4038]: I0312 20:47:42.593297 4038 flags.go:64] FLAG: --provider-id=""
Mar 12 20:47:42.599682 master-0 kubenswrapper[4038]: I0312 20:47:42.593309 4038 flags.go:64] FLAG: --qos-reserved=""
Mar 12 20:47:42.599682 master-0 kubenswrapper[4038]: I0312 20:47:42.593321 4038 flags.go:64] FLAG: --read-only-port="10255"
Mar 12 20:47:42.599682 master-0 kubenswrapper[4038]: I0312 20:47:42.593334 4038 flags.go:64] FLAG: --register-node="true"
Mar 12 20:47:42.600794 master-0 kubenswrapper[4038]: I0312 20:47:42.593346 4038 flags.go:64] FLAG: --register-schedulable="true"
Mar 12 20:47:42.600794 master-0 kubenswrapper[4038]: I0312 20:47:42.593360 4038 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 12 20:47:42.600794 master-0 kubenswrapper[4038]: I0312 20:47:42.593381 4038 flags.go:64] FLAG: --registry-burst="10"
Mar 12 20:47:42.600794 master-0 kubenswrapper[4038]: I0312 20:47:42.593393 4038 flags.go:64] FLAG: --registry-qps="5"
Mar 12 20:47:42.600794 master-0 kubenswrapper[4038]: I0312 20:47:42.593410 4038 flags.go:64] FLAG: --reserved-cpus=""
Mar 12 20:47:42.600794 master-0 kubenswrapper[4038]: I0312 20:47:42.593422 4038 flags.go:64] FLAG: --reserved-memory=""
Mar 12 20:47:42.600794 master-0 kubenswrapper[4038]: I0312 20:47:42.593438 4038 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 12 20:47:42.600794 master-0 kubenswrapper[4038]: I0312 20:47:42.593450 4038 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 12 20:47:42.600794 master-0 kubenswrapper[4038]: I0312 20:47:42.593462 4038 flags.go:64] FLAG: --rotate-certificates="false"
Mar 12 20:47:42.600794 master-0 kubenswrapper[4038]: I0312 20:47:42.593473 4038 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 12 20:47:42.600794 master-0 kubenswrapper[4038]: I0312 20:47:42.593485 4038 flags.go:64] FLAG: --runonce="false"
Mar 12 20:47:42.600794 master-0 kubenswrapper[4038]: I0312 20:47:42.593497 4038 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 12 20:47:42.600794 master-0 kubenswrapper[4038]: I0312 20:47:42.593514 4038 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 12 20:47:42.600794 master-0 kubenswrapper[4038]: I0312 20:47:42.593526 4038 flags.go:64] FLAG: --seccomp-default="false"
Mar 12 20:47:42.600794 master-0 kubenswrapper[4038]: I0312 20:47:42.593538 4038 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 12 20:47:42.600794 master-0 kubenswrapper[4038]: I0312 20:47:42.593549 4038 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 12 20:47:42.600794 master-0 kubenswrapper[4038]: I0312 20:47:42.593562 4038 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 12 20:47:42.600794 master-0 kubenswrapper[4038]: I0312 20:47:42.593575 4038 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 12 20:47:42.600794 master-0 kubenswrapper[4038]: I0312 20:47:42.593587 4038 flags.go:64] FLAG: --storage-driver-password="root"
Mar 12 20:47:42.600794 master-0 kubenswrapper[4038]: I0312 20:47:42.593599 4038 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 12 20:47:42.600794 master-0 kubenswrapper[4038]: I0312 20:47:42.593611 4038 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 12 20:47:42.600794 master-0 kubenswrapper[4038]: I0312 20:47:42.593623 4038 flags.go:64] FLAG: --storage-driver-user="root"
Mar 12 20:47:42.600794 master-0 kubenswrapper[4038]: I0312 20:47:42.593634 4038 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 12 20:47:42.600794 master-0 kubenswrapper[4038]: I0312 20:47:42.593647 4038 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 12 20:47:42.600794 master-0 kubenswrapper[4038]: I0312 20:47:42.593663 4038 flags.go:64] FLAG: --system-cgroups=""
Mar 12 20:47:42.602057 master-0 kubenswrapper[4038]: I0312 20:47:42.593675 4038 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 12 20:47:42.602057 master-0 kubenswrapper[4038]: I0312 20:47:42.593694 4038 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 12 20:47:42.602057 master-0 kubenswrapper[4038]: I0312 20:47:42.593705 4038 flags.go:64] FLAG: --tls-cert-file=""
Mar 12 20:47:42.602057 master-0 kubenswrapper[4038]: I0312 20:47:42.593717 4038 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 12 20:47:42.602057 master-0 kubenswrapper[4038]: I0312 20:47:42.593736 4038 flags.go:64] FLAG: --tls-min-version=""
Mar 12 20:47:42.602057 master-0 kubenswrapper[4038]: I0312 20:47:42.593747 4038 flags.go:64] FLAG: --tls-private-key-file=""
Mar 12 20:47:42.602057 master-0 kubenswrapper[4038]: I0312 20:47:42.593758 4038 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 12 20:47:42.602057 master-0 kubenswrapper[4038]: I0312 20:47:42.593770 4038 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 12 20:47:42.602057 master-0 kubenswrapper[4038]: I0312 20:47:42.593782 4038 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 12 20:47:42.602057 master-0 kubenswrapper[4038]: I0312 20:47:42.593794 4038 flags.go:64] FLAG: --v="2"
Mar 12 20:47:42.602057 master-0 kubenswrapper[4038]: I0312 20:47:42.593863 4038 flags.go:64] FLAG: --version="false"
Mar 12 20:47:42.602057 master-0 kubenswrapper[4038]: I0312 20:47:42.593880 4038 flags.go:64] FLAG: --vmodule=""
Mar 12 20:47:42.602057 master-0 kubenswrapper[4038]: I0312 20:47:42.593895 4038 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 12 20:47:42.602057 master-0 kubenswrapper[4038]: I0312 20:47:42.593908 4038 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 12 20:47:42.602057 master-0 kubenswrapper[4038]: W0312 20:47:42.594192 4038 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 12 20:47:42.602057 master-0 kubenswrapper[4038]: W0312 20:47:42.594212 4038 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 12 20:47:42.602057 master-0 kubenswrapper[4038]: W0312 20:47:42.594224 4038 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 12 20:47:42.602057 master-0 kubenswrapper[4038]: W0312 20:47:42.594236 4038 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 12 20:47:42.602057 master-0 kubenswrapper[4038]: W0312 20:47:42.594245 4038 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 12 20:47:42.602057 master-0 kubenswrapper[4038]: W0312 20:47:42.594260 4038 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 12 20:47:42.602057 master-0 kubenswrapper[4038]: W0312 20:47:42.594271 4038 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 12 20:47:42.602057 master-0 kubenswrapper[4038]: W0312 20:47:42.594281 4038 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 12 20:47:42.602057 master-0 kubenswrapper[4038]: W0312 20:47:42.594292 4038 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 12 20:47:42.603149 master-0 kubenswrapper[4038]: W0312 20:47:42.594302 4038 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 12 20:47:42.603149 master-0 kubenswrapper[4038]: W0312 20:47:42.594312 4038 feature_gate.go:330] unrecognized feature gate: Example
Mar 12 20:47:42.603149 master-0 kubenswrapper[4038]: W0312 20:47:42.594322 4038 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 12 20:47:42.603149 master-0 kubenswrapper[4038]: W0312 20:47:42.594332 4038 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 12 20:47:42.603149 master-0 kubenswrapper[4038]: W0312 20:47:42.594342 4038 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 12 20:47:42.603149 master-0 kubenswrapper[4038]: W0312 20:47:42.594356 4038 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 12 20:47:42.603149 master-0 kubenswrapper[4038]: W0312 20:47:42.594369 4038 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 12 20:47:42.603149 master-0 kubenswrapper[4038]: W0312 20:47:42.594382 4038 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 12 20:47:42.603149 master-0 kubenswrapper[4038]: W0312 20:47:42.594397 4038 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 12 20:47:42.603149 master-0 kubenswrapper[4038]: W0312 20:47:42.594408 4038 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 12 20:47:42.603149 master-0 kubenswrapper[4038]: W0312 20:47:42.594420 4038 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 12 20:47:42.603149 master-0 kubenswrapper[4038]: W0312 20:47:42.594431 4038 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 12 20:47:42.603149 master-0 kubenswrapper[4038]: W0312 20:47:42.594443 4038 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 12 20:47:42.603149 master-0 kubenswrapper[4038]: W0312 20:47:42.594457 4038 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 12 20:47:42.603149 master-0 kubenswrapper[4038]: W0312 20:47:42.594469 4038 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 12 20:47:42.603149 master-0 kubenswrapper[4038]: W0312 20:47:42.594481 4038 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 12 20:47:42.603149 master-0 kubenswrapper[4038]: W0312 20:47:42.594491 4038 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 12 20:47:42.603149 master-0 kubenswrapper[4038]: W0312 20:47:42.594501 4038 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 12 20:47:42.603149 master-0 kubenswrapper[4038]: W0312 20:47:42.594511 4038 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 12 20:47:42.604090 master-0 kubenswrapper[4038]: W0312 20:47:42.594521 4038 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 12 20:47:42.604090 master-0 kubenswrapper[4038]: W0312 20:47:42.594530 4038 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 12 20:47:42.604090 master-0 kubenswrapper[4038]: W0312 20:47:42.594541 4038 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 12 20:47:42.604090 master-0 kubenswrapper[4038]: W0312 20:47:42.594551 4038 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 12 20:47:42.604090 master-0 kubenswrapper[4038]: W0312 20:47:42.594560 4038 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 12 20:47:42.604090 master-0 kubenswrapper[4038]: W0312 20:47:42.594570 4038 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 12 20:47:42.604090 master-0 kubenswrapper[4038]: W0312 20:47:42.594580 4038 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 12 20:47:42.604090 master-0 kubenswrapper[4038]: W0312 20:47:42.594591 4038 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 12 20:47:42.604090 
master-0 kubenswrapper[4038]: W0312 20:47:42.594601 4038 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 12 20:47:42.604090 master-0 kubenswrapper[4038]: W0312 20:47:42.594617 4038 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 12 20:47:42.604090 master-0 kubenswrapper[4038]: W0312 20:47:42.594626 4038 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 12 20:47:42.604090 master-0 kubenswrapper[4038]: W0312 20:47:42.594640 4038 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 12 20:47:42.604090 master-0 kubenswrapper[4038]: W0312 20:47:42.594654 4038 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 12 20:47:42.604090 master-0 kubenswrapper[4038]: W0312 20:47:42.594665 4038 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 12 20:47:42.604090 master-0 kubenswrapper[4038]: W0312 20:47:42.594676 4038 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 12 20:47:42.604090 master-0 kubenswrapper[4038]: W0312 20:47:42.594686 4038 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 12 20:47:42.604090 master-0 kubenswrapper[4038]: W0312 20:47:42.594696 4038 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 12 20:47:42.604090 master-0 kubenswrapper[4038]: W0312 20:47:42.594707 4038 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 12 20:47:42.604090 master-0 kubenswrapper[4038]: W0312 20:47:42.594716 4038 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 12 20:47:42.605154 master-0 kubenswrapper[4038]: W0312 20:47:42.594726 4038 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 12 20:47:42.605154 master-0 kubenswrapper[4038]: W0312 20:47:42.594736 4038 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 12 20:47:42.605154 master-0 
kubenswrapper[4038]: W0312 20:47:42.594750 4038 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 12 20:47:42.605154 master-0 kubenswrapper[4038]: W0312 20:47:42.594760 4038 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 12 20:47:42.605154 master-0 kubenswrapper[4038]: W0312 20:47:42.594770 4038 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 12 20:47:42.605154 master-0 kubenswrapper[4038]: W0312 20:47:42.594780 4038 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 12 20:47:42.605154 master-0 kubenswrapper[4038]: W0312 20:47:42.594790 4038 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 12 20:47:42.605154 master-0 kubenswrapper[4038]: W0312 20:47:42.594799 4038 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 12 20:47:42.605154 master-0 kubenswrapper[4038]: W0312 20:47:42.594849 4038 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 12 20:47:42.605154 master-0 kubenswrapper[4038]: W0312 20:47:42.594864 4038 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 12 20:47:42.605154 master-0 kubenswrapper[4038]: W0312 20:47:42.594876 4038 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 12 20:47:42.605154 master-0 kubenswrapper[4038]: W0312 20:47:42.594886 4038 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 12 20:47:42.605154 master-0 kubenswrapper[4038]: W0312 20:47:42.594897 4038 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 12 20:47:42.605154 master-0 kubenswrapper[4038]: W0312 20:47:42.594907 4038 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 12 20:47:42.605154 master-0 kubenswrapper[4038]: W0312 20:47:42.594917 4038 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 12 20:47:42.605154 master-0 kubenswrapper[4038]: W0312 20:47:42.594927 4038 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 12 20:47:42.605154 master-0 kubenswrapper[4038]: W0312 20:47:42.594937 4038 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 12 20:47:42.605154 master-0 kubenswrapper[4038]: W0312 20:47:42.594947 4038 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 12 20:47:42.605154 master-0 kubenswrapper[4038]: W0312 20:47:42.594956 4038 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 12 20:47:42.605154 master-0 kubenswrapper[4038]: W0312 20:47:42.594967 4038 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 12 20:47:42.606119 master-0 kubenswrapper[4038]: W0312 20:47:42.594978 4038 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 12 20:47:42.606119 master-0 kubenswrapper[4038]: W0312 20:47:42.594988 4038 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 12 20:47:42.606119 master-0 kubenswrapper[4038]: W0312 20:47:42.595007 4038 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 12 20:47:42.606119 master-0 kubenswrapper[4038]: 
W0312 20:47:42.595018 4038 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 12 20:47:42.606119 master-0 kubenswrapper[4038]: W0312 20:47:42.595028 4038 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 12 20:47:42.606119 master-0 kubenswrapper[4038]: I0312 20:47:42.595061 4038 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 12 20:47:42.608115 master-0 kubenswrapper[4038]: I0312 20:47:42.608060 4038 server.go:491] "Kubelet version" kubeletVersion="v1.31.14" Mar 12 20:47:42.608115 master-0 kubenswrapper[4038]: I0312 20:47:42.608104 4038 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 12 20:47:42.608266 master-0 kubenswrapper[4038]: W0312 20:47:42.608239 4038 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 12 20:47:42.608266 master-0 kubenswrapper[4038]: W0312 20:47:42.608258 4038 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 12 20:47:42.608266 master-0 kubenswrapper[4038]: W0312 20:47:42.608268 4038 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 12 20:47:42.608428 master-0 kubenswrapper[4038]: W0312 20:47:42.608277 4038 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 12 20:47:42.608428 master-0 kubenswrapper[4038]: W0312 20:47:42.608286 4038 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 12 20:47:42.608428 master-0 kubenswrapper[4038]: W0312 20:47:42.608294 4038 
feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 12 20:47:42.608428 master-0 kubenswrapper[4038]: W0312 20:47:42.608304 4038 feature_gate.go:330] unrecognized feature gate: Example Mar 12 20:47:42.608428 master-0 kubenswrapper[4038]: W0312 20:47:42.608313 4038 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 12 20:47:42.608428 master-0 kubenswrapper[4038]: W0312 20:47:42.608322 4038 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 12 20:47:42.608428 master-0 kubenswrapper[4038]: W0312 20:47:42.608331 4038 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 12 20:47:42.608428 master-0 kubenswrapper[4038]: W0312 20:47:42.608339 4038 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 12 20:47:42.608428 master-0 kubenswrapper[4038]: W0312 20:47:42.608348 4038 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 12 20:47:42.608428 master-0 kubenswrapper[4038]: W0312 20:47:42.608356 4038 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 12 20:47:42.608428 master-0 kubenswrapper[4038]: W0312 20:47:42.608365 4038 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 12 20:47:42.608428 master-0 kubenswrapper[4038]: W0312 20:47:42.608374 4038 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 12 20:47:42.608428 master-0 kubenswrapper[4038]: W0312 20:47:42.608382 4038 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 12 20:47:42.608428 master-0 kubenswrapper[4038]: W0312 20:47:42.608389 4038 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 12 20:47:42.608428 master-0 kubenswrapper[4038]: W0312 20:47:42.608398 4038 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 12 20:47:42.608428 master-0 kubenswrapper[4038]: W0312 20:47:42.608405 4038 feature_gate.go:330] unrecognized feature gate: 
IngressControllerDynamicConfigurationManager Mar 12 20:47:42.608428 master-0 kubenswrapper[4038]: W0312 20:47:42.608413 4038 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 12 20:47:42.608428 master-0 kubenswrapper[4038]: W0312 20:47:42.608421 4038 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 12 20:47:42.608428 master-0 kubenswrapper[4038]: W0312 20:47:42.608429 4038 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 12 20:47:42.608428 master-0 kubenswrapper[4038]: W0312 20:47:42.608437 4038 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 12 20:47:42.609366 master-0 kubenswrapper[4038]: W0312 20:47:42.608445 4038 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 12 20:47:42.609366 master-0 kubenswrapper[4038]: W0312 20:47:42.608476 4038 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 12 20:47:42.609366 master-0 kubenswrapper[4038]: W0312 20:47:42.608484 4038 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 12 20:47:42.609366 master-0 kubenswrapper[4038]: W0312 20:47:42.608492 4038 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 12 20:47:42.609366 master-0 kubenswrapper[4038]: W0312 20:47:42.608500 4038 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 12 20:47:42.609366 master-0 kubenswrapper[4038]: W0312 20:47:42.608508 4038 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 12 20:47:42.609366 master-0 kubenswrapper[4038]: W0312 20:47:42.608516 4038 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 12 20:47:42.609366 master-0 kubenswrapper[4038]: W0312 20:47:42.608524 4038 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 12 20:47:42.609366 master-0 kubenswrapper[4038]: W0312 20:47:42.608532 4038 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 12 
20:47:42.609366 master-0 kubenswrapper[4038]: W0312 20:47:42.608540 4038 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 12 20:47:42.609366 master-0 kubenswrapper[4038]: W0312 20:47:42.608548 4038 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 12 20:47:42.609366 master-0 kubenswrapper[4038]: W0312 20:47:42.608569 4038 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 12 20:47:42.609366 master-0 kubenswrapper[4038]: W0312 20:47:42.608577 4038 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 12 20:47:42.609366 master-0 kubenswrapper[4038]: W0312 20:47:42.608585 4038 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 12 20:47:42.609366 master-0 kubenswrapper[4038]: W0312 20:47:42.608595 4038 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 12 20:47:42.609366 master-0 kubenswrapper[4038]: W0312 20:47:42.608604 4038 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 12 20:47:42.609366 master-0 kubenswrapper[4038]: W0312 20:47:42.608612 4038 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 12 20:47:42.609366 master-0 kubenswrapper[4038]: W0312 20:47:42.608620 4038 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 12 20:47:42.609366 master-0 kubenswrapper[4038]: W0312 20:47:42.608629 4038 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 12 20:47:42.609366 master-0 kubenswrapper[4038]: W0312 20:47:42.608637 4038 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 12 20:47:42.610573 master-0 kubenswrapper[4038]: W0312 20:47:42.608646 4038 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 12 20:47:42.610573 master-0 kubenswrapper[4038]: W0312 20:47:42.608654 4038 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 12 20:47:42.610573 master-0 
kubenswrapper[4038]: W0312 20:47:42.608662 4038 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 12 20:47:42.610573 master-0 kubenswrapper[4038]: W0312 20:47:42.608672 4038 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 12 20:47:42.610573 master-0 kubenswrapper[4038]: W0312 20:47:42.608684 4038 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 12 20:47:42.610573 master-0 kubenswrapper[4038]: W0312 20:47:42.608693 4038 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 12 20:47:42.610573 master-0 kubenswrapper[4038]: W0312 20:47:42.608702 4038 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 12 20:47:42.610573 master-0 kubenswrapper[4038]: W0312 20:47:42.608711 4038 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 12 20:47:42.610573 master-0 kubenswrapper[4038]: W0312 20:47:42.608720 4038 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 12 20:47:42.610573 master-0 kubenswrapper[4038]: W0312 20:47:42.608728 4038 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 12 20:47:42.610573 master-0 kubenswrapper[4038]: W0312 20:47:42.608738 4038 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 12 20:47:42.610573 master-0 kubenswrapper[4038]: W0312 20:47:42.608751 4038 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 12 20:47:42.610573 master-0 kubenswrapper[4038]: W0312 20:47:42.608763 4038 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 12 20:47:42.610573 master-0 kubenswrapper[4038]: W0312 20:47:42.608776 4038 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 12 20:47:42.610573 master-0 kubenswrapper[4038]: W0312 20:47:42.608787 4038 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 12 20:47:42.610573 master-0 kubenswrapper[4038]: W0312 20:47:42.608796 4038 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 12 20:47:42.610573 master-0 kubenswrapper[4038]: W0312 20:47:42.608812 4038 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 12 20:47:42.610573 master-0 kubenswrapper[4038]: W0312 20:47:42.608844 4038 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 12 20:47:42.610573 master-0 kubenswrapper[4038]: W0312 20:47:42.608852 4038 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 12 20:47:42.611741 master-0 kubenswrapper[4038]: W0312 20:47:42.608859 4038 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 12 20:47:42.611741 master-0 kubenswrapper[4038]: W0312 20:47:42.608869 4038 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 12 20:47:42.611741 master-0 kubenswrapper[4038]: W0312 20:47:42.608880 4038 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 12 20:47:42.611741 master-0 kubenswrapper[4038]: W0312 20:47:42.608888 4038 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 12 20:47:42.611741 master-0 kubenswrapper[4038]: W0312 20:47:42.608896 4038 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 12 20:47:42.611741 master-0 kubenswrapper[4038]: W0312 20:47:42.608905 4038 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 12 20:47:42.611741 master-0 kubenswrapper[4038]: W0312 20:47:42.608912 4038 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 12 20:47:42.611741 master-0 kubenswrapper[4038]: W0312 20:47:42.608921 4038 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 12 20:47:42.611741 master-0 kubenswrapper[4038]: W0312 20:47:42.608936 4038 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 12 20:47:42.611741 master-0 kubenswrapper[4038]: W0312 20:47:42.608947 4038 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 12 20:47:42.611741 master-0 kubenswrapper[4038]: I0312 20:47:42.608959 4038 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 12 20:47:42.611741 master-0 kubenswrapper[4038]: W0312 20:47:42.609187 4038 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 12 20:47:42.611741 master-0 kubenswrapper[4038]: W0312 20:47:42.609199 4038 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 12 20:47:42.611741 master-0 kubenswrapper[4038]: W0312 20:47:42.609207 4038 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 12 20:47:42.612898 master-0 kubenswrapper[4038]: W0312 20:47:42.609216 4038 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 12 20:47:42.612898 master-0 kubenswrapper[4038]: W0312 20:47:42.609224 4038 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 12 20:47:42.612898 master-0 kubenswrapper[4038]: W0312 20:47:42.609232 4038 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 12 20:47:42.612898 master-0 kubenswrapper[4038]: W0312 20:47:42.609240 4038 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 12 20:47:42.612898 master-0 kubenswrapper[4038]: W0312 20:47:42.609248 4038 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 12 20:47:42.612898 master-0 kubenswrapper[4038]: W0312 20:47:42.609256 4038 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 
12 20:47:42.612898 master-0 kubenswrapper[4038]: W0312 20:47:42.609264 4038 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 12 20:47:42.612898 master-0 kubenswrapper[4038]: W0312 20:47:42.609272 4038 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 12 20:47:42.612898 master-0 kubenswrapper[4038]: W0312 20:47:42.609280 4038 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 12 20:47:42.612898 master-0 kubenswrapper[4038]: W0312 20:47:42.609288 4038 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 12 20:47:42.612898 master-0 kubenswrapper[4038]: W0312 20:47:42.609295 4038 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 12 20:47:42.612898 master-0 kubenswrapper[4038]: W0312 20:47:42.609303 4038 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 12 20:47:42.612898 master-0 kubenswrapper[4038]: W0312 20:47:42.609311 4038 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 12 20:47:42.612898 master-0 kubenswrapper[4038]: W0312 20:47:42.609320 4038 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 12 20:47:42.612898 master-0 kubenswrapper[4038]: W0312 20:47:42.609329 4038 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 12 20:47:42.612898 master-0 kubenswrapper[4038]: W0312 20:47:42.609340 4038 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 12 20:47:42.612898 master-0 kubenswrapper[4038]: W0312 20:47:42.609352 4038 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 12 20:47:42.612898 master-0 kubenswrapper[4038]: W0312 20:47:42.609361 4038 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 12 20:47:42.612898 master-0 kubenswrapper[4038]: W0312 20:47:42.609371 4038 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 12 20:47:42.613785 master-0 kubenswrapper[4038]: W0312 20:47:42.609381 4038 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 12 20:47:42.613785 master-0 kubenswrapper[4038]: W0312 20:47:42.609391 4038 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 12 20:47:42.613785 master-0 kubenswrapper[4038]: W0312 20:47:42.609400 4038 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 12 20:47:42.613785 master-0 kubenswrapper[4038]: W0312 20:47:42.609407 4038 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 12 20:47:42.613785 master-0 kubenswrapper[4038]: W0312 20:47:42.609416 4038 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 12 20:47:42.613785 master-0 kubenswrapper[4038]: W0312 20:47:42.609424 4038 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 12 20:47:42.613785 master-0 kubenswrapper[4038]: W0312 20:47:42.609432 4038 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 12 20:47:42.613785 master-0 kubenswrapper[4038]: W0312 20:47:42.609440 4038 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 12 20:47:42.613785 master-0 kubenswrapper[4038]: W0312 20:47:42.609448 4038 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 12 20:47:42.613785 master-0 kubenswrapper[4038]: W0312 20:47:42.609455 4038 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 12 20:47:42.613785 master-0 
kubenswrapper[4038]: W0312 20:47:42.609463 4038 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 12 20:47:42.613785 master-0 kubenswrapper[4038]: W0312 20:47:42.609473 4038 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 12 20:47:42.613785 master-0 kubenswrapper[4038]: W0312 20:47:42.609481 4038 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 12 20:47:42.613785 master-0 kubenswrapper[4038]: W0312 20:47:42.609490 4038 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 12 20:47:42.613785 master-0 kubenswrapper[4038]: W0312 20:47:42.609498 4038 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 12 20:47:42.613785 master-0 kubenswrapper[4038]: W0312 20:47:42.609506 4038 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 12 20:47:42.613785 master-0 kubenswrapper[4038]: W0312 20:47:42.609514 4038 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 12 20:47:42.613785 master-0 kubenswrapper[4038]: W0312 20:47:42.609522 4038 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 12 20:47:42.613785 master-0 kubenswrapper[4038]: W0312 20:47:42.609529 4038 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 12 20:47:42.613785 master-0 kubenswrapper[4038]: W0312 20:47:42.609537 4038 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 12 20:47:42.614786 master-0 kubenswrapper[4038]: W0312 20:47:42.609545 4038 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 12 20:47:42.614786 master-0 kubenswrapper[4038]: W0312 20:47:42.609555 4038 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 12 20:47:42.614786 master-0 kubenswrapper[4038]: W0312 20:47:42.609564 4038 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 12 20:47:42.614786 master-0 kubenswrapper[4038]: W0312 20:47:42.609573 4038 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 12 20:47:42.614786 master-0 kubenswrapper[4038]: W0312 20:47:42.609582 4038 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 12 20:47:42.614786 master-0 kubenswrapper[4038]: W0312 20:47:42.609592 4038 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 12 20:47:42.614786 master-0 kubenswrapper[4038]: W0312 20:47:42.609602 4038 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 12 20:47:42.614786 master-0 kubenswrapper[4038]: W0312 20:47:42.609610 4038 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 12 20:47:42.614786 master-0 kubenswrapper[4038]: W0312 20:47:42.609620 4038 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 12 20:47:42.614786 master-0 kubenswrapper[4038]: W0312 20:47:42.609630 4038 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 12 20:47:42.614786 master-0 kubenswrapper[4038]: W0312 20:47:42.609639 4038 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 12 20:47:42.614786 master-0 kubenswrapper[4038]: W0312 20:47:42.609647 4038 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 12 20:47:42.614786 master-0 kubenswrapper[4038]: W0312 20:47:42.609655 4038 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 12 20:47:42.614786 master-0 kubenswrapper[4038]: W0312 20:47:42.609663 4038 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 12 20:47:42.614786 master-0 kubenswrapper[4038]: W0312 20:47:42.609670 4038 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 12 20:47:42.614786 master-0 kubenswrapper[4038]: W0312 20:47:42.609678 4038 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 12 20:47:42.614786 master-0 kubenswrapper[4038]: W0312 20:47:42.609686 4038 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 12 20:47:42.614786 master-0 kubenswrapper[4038]: W0312 20:47:42.609694 4038 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 12 20:47:42.614786 master-0 kubenswrapper[4038]: W0312 20:47:42.609702 4038 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 12 20:47:42.615746 master-0 kubenswrapper[4038]: W0312 20:47:42.609709 4038 feature_gate.go:330] unrecognized feature gate: Example
Mar 12 20:47:42.615746 master-0 kubenswrapper[4038]: W0312 20:47:42.609718 4038 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 12 20:47:42.615746 master-0 kubenswrapper[4038]: W0312 20:47:42.609726 4038 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 12 20:47:42.615746 master-0 kubenswrapper[4038]: W0312 20:47:42.609733 4038 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 12 20:47:42.615746 master-0 kubenswrapper[4038]: W0312 20:47:42.609741 4038 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 12 20:47:42.615746 master-0 kubenswrapper[4038]: W0312 20:47:42.609749 4038 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 12 20:47:42.615746 master-0 kubenswrapper[4038]: W0312 20:47:42.609756 4038 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 12 20:47:42.615746 master-0 kubenswrapper[4038]: W0312 20:47:42.609764 4038 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 12 20:47:42.615746 master-0 kubenswrapper[4038]: W0312 20:47:42.609773 4038 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 12 20:47:42.615746 master-0 kubenswrapper[4038]: W0312 20:47:42.609780 4038 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 12 20:47:42.615746 master-0 kubenswrapper[4038]: W0312 20:47:42.609788 4038 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 12 20:47:42.615746 master-0 kubenswrapper[4038]: I0312 20:47:42.609800 4038 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 12 20:47:42.615746 master-0 kubenswrapper[4038]: I0312 20:47:42.610129 4038 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 12 20:47:42.615746 master-0 kubenswrapper[4038]: I0312 20:47:42.614205 4038 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Mar 12 20:47:42.616456 master-0 kubenswrapper[4038]: I0312 20:47:42.615780 4038 server.go:997] "Starting client certificate rotation"
Mar 12 20:47:42.616456 master-0 kubenswrapper[4038]: I0312 20:47:42.615811 4038 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 12 20:47:42.616456 master-0 kubenswrapper[4038]: I0312 20:47:42.616040 4038 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 12 20:47:42.648549 master-0 kubenswrapper[4038]: I0312 20:47:42.648477 4038 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 12 20:47:42.653243 master-0 kubenswrapper[4038]: I0312 20:47:42.653130 4038 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 12 20:47:42.655470 master-0 kubenswrapper[4038]: E0312 20:47:42.655414 4038 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 12 20:47:42.677566 master-0 kubenswrapper[4038]: I0312 20:47:42.677477 4038 log.go:25] "Validated CRI v1 runtime API"
Mar 12 20:47:42.684250 master-0 kubenswrapper[4038]: I0312 20:47:42.684201 4038 log.go:25] "Validated CRI v1 image API"
Mar 12 20:47:42.686454 master-0 kubenswrapper[4038]: I0312 20:47:42.686421 4038 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 12 20:47:42.690189 master-0 kubenswrapper[4038]: I0312 20:47:42.690136 4038 fs.go:135] Filesystem UUIDs: map[6486df99-a83a-4de4-8a94-6816f327ffeb:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4]
Mar 12 20:47:42.690189 master-0 kubenswrapper[4038]: I0312 20:47:42.690168 4038 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}]
Mar 12 20:47:42.705160 master-0 kubenswrapper[4038]: I0312 20:47:42.704866 4038 manager.go:217] Machine: {Timestamp:2026-03-12 20:47:42.702567933 +0000 UTC m=+0.738249816 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ab6ae3a9e07f4bbcb7f4f9a490c6dc9c SystemUUID:ab6ae3a9-e07f-4bbc-b7f4-f9a490c6dc9c BootID:a78965b5-30ee-4294-b02c-530634422611 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:f6:7e:a8 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:36:1f:bb Speed:-1 Mtu:9000} {Name:ovs-system MacAddress:c6:09:84:5c:c2:5e Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Mar 12 20:47:42.705160 master-0 kubenswrapper[4038]: I0312 20:47:42.705118 4038 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Mar 12 20:47:42.705405 master-0 kubenswrapper[4038]: I0312 20:47:42.705288 4038 manager.go:233] Version: {KernelVersion:5.14.0-427.111.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602172219-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Mar 12 20:47:42.707108 master-0 kubenswrapper[4038]: I0312 20:47:42.707047 4038 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 12 20:47:42.707434 master-0 kubenswrapper[4038]: I0312 20:47:42.707371 4038 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 12 20:47:42.707765 master-0 kubenswrapper[4038]: I0312 20:47:42.707424 4038 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 12 20:47:42.707909 master-0 kubenswrapper[4038]: I0312 20:47:42.707779 4038 topology_manager.go:138] "Creating topology manager with none policy"
Mar 12 20:47:42.707909 master-0 kubenswrapper[4038]: I0312 20:47:42.707796 4038 container_manager_linux.go:303] "Creating device plugin manager"
Mar 12 20:47:42.708014 master-0 kubenswrapper[4038]: I0312 20:47:42.707930 4038 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 12 20:47:42.708014 master-0 kubenswrapper[4038]: I0312 20:47:42.707968 4038 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 12 20:47:42.708224 master-0 kubenswrapper[4038]: I0312 20:47:42.708165 4038 state_mem.go:36] "Initialized new in-memory state store"
Mar 12 20:47:42.708363 master-0 kubenswrapper[4038]: I0312 20:47:42.708326 4038 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Mar 12 20:47:42.712140 master-0 kubenswrapper[4038]: I0312 20:47:42.712097 4038 kubelet.go:418] "Attempting to sync node with API server"
Mar 12 20:47:42.712140 master-0 kubenswrapper[4038]: I0312 20:47:42.712132 4038 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 12 20:47:42.712279 master-0 kubenswrapper[4038]: I0312 20:47:42.712168 4038 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Mar 12 20:47:42.712279 master-0 kubenswrapper[4038]: I0312 20:47:42.712186 4038 kubelet.go:324] "Adding apiserver pod source"
Mar 12 20:47:42.712279 master-0 kubenswrapper[4038]: I0312 20:47:42.712200 4038 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 12 20:47:42.718290 master-0 kubenswrapper[4038]: I0312 20:47:42.718119 4038 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1"
Mar 12 20:47:42.719267 master-0 kubenswrapper[4038]: W0312 20:47:42.719133 4038 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 20:47:42.719267 master-0 kubenswrapper[4038]: E0312 20:47:42.719241 4038 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 12 20:47:42.719575 master-0 kubenswrapper[4038]: W0312 20:47:42.719240 4038 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 20:47:42.719575 master-0 kubenswrapper[4038]: E0312 20:47:42.719405 4038 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 12 20:47:42.722414 master-0 kubenswrapper[4038]: I0312 20:47:42.722351 4038 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 12 20:47:42.722907 master-0 kubenswrapper[4038]: I0312 20:47:42.722877 4038 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Mar 12 20:47:42.722907 master-0 kubenswrapper[4038]: I0312 20:47:42.722903 4038 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Mar 12 20:47:42.722907 master-0 kubenswrapper[4038]: I0312 20:47:42.722912 4038 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Mar 12 20:47:42.723068 master-0 kubenswrapper[4038]: I0312 20:47:42.722958 4038 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Mar 12 20:47:42.723068 master-0 kubenswrapper[4038]: I0312 20:47:42.722967 4038 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Mar 12 20:47:42.723068 master-0 kubenswrapper[4038]: I0312 20:47:42.722974 4038 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Mar 12 20:47:42.723068 master-0 kubenswrapper[4038]: I0312 20:47:42.722980 4038 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Mar 12 20:47:42.723068 master-0 kubenswrapper[4038]: I0312 20:47:42.722986 4038 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Mar 12 20:47:42.723068 master-0 kubenswrapper[4038]: I0312 20:47:42.722996 4038 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Mar 12 20:47:42.723068 master-0 kubenswrapper[4038]: I0312 20:47:42.723003 4038 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Mar 12 20:47:42.723068 master-0 kubenswrapper[4038]: I0312 20:47:42.723014 4038 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Mar 12 20:47:42.723068 master-0 kubenswrapper[4038]: I0312 20:47:42.723034 4038 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Mar 12 20:47:42.725301 master-0 kubenswrapper[4038]: I0312 20:47:42.725249 4038 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Mar 12 20:47:42.726624 master-0 kubenswrapper[4038]: I0312 20:47:42.726590 4038 server.go:1280] "Started kubelet"
Mar 12 20:47:42.727975 master-0 kubenswrapper[4038]: I0312 20:47:42.727784 4038 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 12 20:47:42.728315 master-0 kubenswrapper[4038]: I0312 20:47:42.728048 4038 server_v1.go:47] "podresources" method="list" useActivePods=true
Mar 12 20:47:42.728315 master-0 kubenswrapper[4038]: I0312 20:47:42.727894 4038 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 20:47:42.728315 master-0 kubenswrapper[4038]: I0312 20:47:42.728246 4038 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 12 20:47:42.728957 master-0 kubenswrapper[4038]: I0312 20:47:42.728909 4038 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 12 20:47:42.729256 master-0 systemd[1]: Started Kubernetes Kubelet.
Mar 12 20:47:42.736003 master-0 kubenswrapper[4038]: I0312 20:47:42.735395 4038 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Mar 12 20:47:42.736003 master-0 kubenswrapper[4038]: I0312 20:47:42.735447 4038 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 12 20:47:42.736003 master-0 kubenswrapper[4038]: I0312 20:47:42.735737 4038 volume_manager.go:287] "The desired_state_of_world populator starts"
Mar 12 20:47:42.736003 master-0 kubenswrapper[4038]: I0312 20:47:42.735766 4038 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 12 20:47:42.736003 master-0 kubenswrapper[4038]: I0312 20:47:42.735868 4038 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Mar 12 20:47:42.736475 master-0 kubenswrapper[4038]: E0312 20:47:42.736034 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:47:42.736475 master-0 kubenswrapper[4038]: I0312 20:47:42.736195 4038 reconstruct.go:97] "Volume reconstruction finished"
Mar 12 20:47:42.736475 master-0 kubenswrapper[4038]: I0312 20:47:42.736219 4038 reconciler.go:26] "Reconciler: start to sync state"
Mar 12 20:47:42.737154 master-0 kubenswrapper[4038]: I0312 20:47:42.737087 4038 server.go:449] "Adding debug handlers to kubelet server"
Mar 12 20:47:42.738012 master-0 kubenswrapper[4038]: W0312 20:47:42.737828 4038 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 20:47:42.738012 master-0 kubenswrapper[4038]: E0312 20:47:42.737889 4038 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms"
Mar 12 20:47:42.738012 master-0 kubenswrapper[4038]: E0312 20:47:42.737917 4038 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 12 20:47:42.739436 master-0 kubenswrapper[4038]: E0312 20:47:42.737888 4038 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189c3307ffccbc60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:42.726528096 +0000 UTC m=+0.762210029,LastTimestamp:2026-03-12 20:47:42.726528096 +0000 UTC m=+0.762210029,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 20:47:42.745389 master-0 kubenswrapper[4038]: I0312 20:47:42.745335 4038 factory.go:55] Registering systemd factory
Mar 12 20:47:42.745389 master-0 kubenswrapper[4038]: I0312 20:47:42.745386 4038 factory.go:221] Registration of the systemd container factory successfully
Mar 12 20:47:42.745988 master-0 kubenswrapper[4038]: I0312 20:47:42.745948 4038 factory.go:153] Registering CRI-O factory
Mar 12 20:47:42.745988 master-0 kubenswrapper[4038]: I0312 20:47:42.745982 4038 factory.go:221] Registration of the crio container factory successfully
Mar 12 20:47:42.746180 master-0 kubenswrapper[4038]: I0312 20:47:42.746105 4038 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Mar 12 20:47:42.746180 master-0 kubenswrapper[4038]: I0312 20:47:42.746163 4038 factory.go:103] Registering Raw factory
Mar 12 20:47:42.746361 master-0 kubenswrapper[4038]: I0312 20:47:42.746212 4038 manager.go:1196] Started watching for new ooms in manager
Mar 12 20:47:42.747692 master-0 kubenswrapper[4038]: I0312 20:47:42.747642 4038 manager.go:319] Starting recovery of all containers
Mar 12 20:47:42.749282 master-0 kubenswrapper[4038]: E0312 20:47:42.749215 4038 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Mar 12 20:47:42.781690 master-0 kubenswrapper[4038]: I0312 20:47:42.781277 4038 manager.go:324] Recovery completed
Mar 12 20:47:42.796086 master-0 kubenswrapper[4038]: I0312 20:47:42.796021 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 20:47:42.798479 master-0 kubenswrapper[4038]: I0312 20:47:42.798445 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 20:47:42.798559 master-0 kubenswrapper[4038]: I0312 20:47:42.798492 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 20:47:42.798559 master-0 kubenswrapper[4038]: I0312 20:47:42.798502 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 20:47:42.799518 master-0 kubenswrapper[4038]: I0312 20:47:42.799491 4038 cpu_manager.go:225] "Starting CPU manager" policy="none"
Mar 12 20:47:42.799518 master-0 kubenswrapper[4038]: I0312 20:47:42.799513 4038 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 12 20:47:42.799614 master-0 kubenswrapper[4038]: I0312 20:47:42.799537 4038 state_mem.go:36] "Initialized new in-memory state store"
Mar 12 20:47:42.832061 master-0 kubenswrapper[4038]: I0312 20:47:42.831936 4038 policy_none.go:49] "None policy: Start"
Mar 12 20:47:42.833887 master-0 kubenswrapper[4038]: I0312 20:47:42.833802 4038 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 12 20:47:42.833952 master-0 kubenswrapper[4038]: I0312 20:47:42.833904 4038 state_mem.go:35] "Initializing new in-memory state store"
Mar 12 20:47:42.836322 master-0 kubenswrapper[4038]: E0312 20:47:42.836258 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:47:42.883422 master-0 kubenswrapper[4038]: I0312 20:47:42.875919 4038 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 12 20:47:42.883422 master-0 kubenswrapper[4038]: I0312 20:47:42.878618 4038 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 12 20:47:42.883422 master-0 kubenswrapper[4038]: I0312 20:47:42.878735 4038 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 12 20:47:42.883422 master-0 kubenswrapper[4038]: I0312 20:47:42.878781 4038 kubelet.go:2335] "Starting kubelet main sync loop"
Mar 12 20:47:42.883422 master-0 kubenswrapper[4038]: E0312 20:47:42.878909 4038 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 12 20:47:42.883422 master-0 kubenswrapper[4038]: W0312 20:47:42.881741 4038 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 20:47:42.883422 master-0 kubenswrapper[4038]: E0312 20:47:42.881912 4038 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 12 20:47:42.922324 master-0 kubenswrapper[4038]: I0312 20:47:42.922248 4038 manager.go:334] "Starting Device Plugin manager"
Mar 12 20:47:42.922566 master-0 kubenswrapper[4038]: I0312 20:47:42.922371 4038 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 12 20:47:42.922566 master-0 kubenswrapper[4038]: I0312 20:47:42.922392 4038 server.go:79] "Starting device plugin registration server"
Mar 12 20:47:42.923043 master-0 kubenswrapper[4038]: I0312 20:47:42.922999 4038 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 12 20:47:42.923123 master-0 kubenswrapper[4038]: I0312 20:47:42.923033 4038 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 12 20:47:42.923903 master-0 kubenswrapper[4038]: I0312 20:47:42.923597 4038 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Mar 12 20:47:42.923903 master-0 kubenswrapper[4038]: I0312 20:47:42.923720 4038 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Mar 12 20:47:42.923903 master-0 kubenswrapper[4038]: I0312 20:47:42.923733 4038 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 12 20:47:42.925135 master-0 kubenswrapper[4038]: E0312 20:47:42.925085 4038 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 12 20:47:42.940065 master-0 kubenswrapper[4038]: E0312 20:47:42.940019 4038 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms"
Mar 12 20:47:42.979256 master-0 kubenswrapper[4038]: I0312 20:47:42.979140 4038 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0"]
Mar 12 20:47:42.979537 master-0 kubenswrapper[4038]: I0312 20:47:42.979332 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 20:47:42.980852 master-0 kubenswrapper[4038]: I0312 20:47:42.980798 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 20:47:42.980921 master-0 kubenswrapper[4038]: I0312 20:47:42.980874 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 20:47:42.980921 master-0 kubenswrapper[4038]: I0312 20:47:42.980891 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 20:47:42.981096 master-0 kubenswrapper[4038]: I0312 20:47:42.981068 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 20:47:42.981355 master-0 kubenswrapper[4038]: I0312 20:47:42.981259 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 20:47:42.981355 master-0 kubenswrapper[4038]: I0312 20:47:42.981296 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 20:47:42.982092 master-0 kubenswrapper[4038]: I0312 20:47:42.982049 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 20:47:42.982162 master-0 kubenswrapper[4038]: I0312 20:47:42.982117 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 20:47:42.982162 master-0 kubenswrapper[4038]: I0312 20:47:42.982135 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 20:47:42.982369 master-0 kubenswrapper[4038]: I0312 20:47:42.982326 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 20:47:42.982369 master-0 kubenswrapper[4038]: I0312 20:47:42.982334 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 20:47:42.982467 master-0 kubenswrapper[4038]: I0312 20:47:42.982435 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Mar 12 20:47:42.982467 master-0 kubenswrapper[4038]: I0312 20:47:42.982350 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 20:47:42.982542 master-0 kubenswrapper[4038]: I0312 20:47:42.982477 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 20:47:42.982542 master-0 kubenswrapper[4038]: I0312 20:47:42.982482 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 20:47:42.983340 master-0 kubenswrapper[4038]: I0312 20:47:42.983284 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 20:47:42.983426 master-0 kubenswrapper[4038]: I0312 20:47:42.983342 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 20:47:42.983426 master-0 kubenswrapper[4038]: I0312 20:47:42.983364 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 20:47:42.983548 master-0 kubenswrapper[4038]: I0312 20:47:42.983533 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 20:47:42.983743 master-0 kubenswrapper[4038]: I0312 20:47:42.983657 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 20:47:42.983743 master-0 kubenswrapper[4038]: I0312 20:47:42.983710 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 20:47:42.984082 master-0 kubenswrapper[4038]: I0312 20:47:42.983909 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 20:47:42.984082 master-0 kubenswrapper[4038]: I0312 20:47:42.983942 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 20:47:42.984082 master-0 kubenswrapper[4038]: I0312 20:47:42.983954 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 20:47:42.984538 master-0 kubenswrapper[4038]: I0312 20:47:42.984501 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 20:47:42.984538 master-0 kubenswrapper[4038]: I0312 20:47:42.984536 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 20:47:42.984646 master-0 kubenswrapper[4038]: I0312 20:47:42.984552 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 20:47:42.984646 master-0 kubenswrapper[4038]: I0312 20:47:42.984616 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 20:47:42.984701 master-0 kubenswrapper[4038]: I0312 20:47:42.984655 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 20:47:42.984701 master-0 kubenswrapper[4038]: I0312 20:47:42.984676 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 20:47:42.984942 master-0 kubenswrapper[4038]: I0312 20:47:42.984915 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 20:47:42.985134 master-0 kubenswrapper[4038]: I0312 20:47:42.985102 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 20:47:42.985179 master-0 kubenswrapper[4038]: I0312 20:47:42.985166 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 20:47:42.986229 master-0 kubenswrapper[4038]: I0312 20:47:42.986195 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 20:47:42.986362 master-0 kubenswrapper[4038]: I0312 20:47:42.986251 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 20:47:42.986362 master-0 kubenswrapper[4038]: I0312 20:47:42.986202 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 20:47:42.986362 master-0 kubenswrapper[4038]: I0312 20:47:42.986273 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 20:47:42.986362 master-0 kubenswrapper[4038]: I0312 20:47:42.986296 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 20:47:42.986362 master-0 kubenswrapper[4038]: I0312 20:47:42.986321 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 20:47:42.986517 master-0 kubenswrapper[4038]: I0312 20:47:42.986499 4038 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 12 20:47:42.986599 master-0 kubenswrapper[4038]: I0312 20:47:42.986544 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 20:47:42.987568 master-0 kubenswrapper[4038]: I0312 20:47:42.987489 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 20:47:42.987568 master-0 kubenswrapper[4038]: I0312 20:47:42.987510 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 20:47:42.987568 master-0 kubenswrapper[4038]: I0312 20:47:42.987525 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 20:47:43.023748 master-0 kubenswrapper[4038]: I0312 20:47:43.023676 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 20:47:43.025105 master-0 kubenswrapper[4038]: I0312 20:47:43.025071 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 20:47:43.025173 master-0 kubenswrapper[4038]: I0312 20:47:43.025139 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 20:47:43.025173 master-0 kubenswrapper[4038]: I0312 20:47:43.025158 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 20:47:43.025260 master-0 kubenswrapper[4038]: I0312 20:47:43.025228 4038 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 12 20:47:43.027289 master-0 kubenswrapper[4038]: E0312 20:47:43.027217 4038 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" 
node="master-0" Mar 12 20:47:43.037698 master-0 kubenswrapper[4038]: I0312 20:47:43.037635 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:47:43.037897 master-0 kubenswrapper[4038]: I0312 20:47:43.037705 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:47:43.037897 master-0 kubenswrapper[4038]: I0312 20:47:43.037750 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:47:43.037985 master-0 kubenswrapper[4038]: I0312 20:47:43.037882 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:47:43.037985 master-0 kubenswrapper[4038]: I0312 20:47:43.037949 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 12 20:47:43.038066 master-0 kubenswrapper[4038]: I0312 20:47:43.037986 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 12 20:47:43.038066 master-0 kubenswrapper[4038]: I0312 20:47:43.038019 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 12 20:47:43.038157 master-0 kubenswrapper[4038]: I0312 20:47:43.038093 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 20:47:43.038157 master-0 kubenswrapper[4038]: I0312 20:47:43.038141 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 12 20:47:43.038241 master-0 kubenswrapper[4038]: I0312 20:47:43.038178 4038 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 12 20:47:43.038284 master-0 kubenswrapper[4038]: I0312 20:47:43.038209 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 20:47:43.038393 master-0 kubenswrapper[4038]: I0312 20:47:43.038336 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 20:47:43.038447 master-0 kubenswrapper[4038]: I0312 20:47:43.038396 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 12 20:47:43.038447 master-0 kubenswrapper[4038]: I0312 20:47:43.038430 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" 
Mar 12 20:47:43.038526 master-0 kubenswrapper[4038]: I0312 20:47:43.038462 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 20:47:43.038570 master-0 kubenswrapper[4038]: I0312 20:47:43.038527 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 20:47:43.038617 master-0 kubenswrapper[4038]: I0312 20:47:43.038581 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:47:43.139451 master-0 kubenswrapper[4038]: I0312 20:47:43.139384 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:47:43.139599 master-0 kubenswrapper[4038]: I0312 20:47:43.139478 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod 
\"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 12 20:47:43.139651 master-0 kubenswrapper[4038]: I0312 20:47:43.139597 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 12 20:47:43.139742 master-0 kubenswrapper[4038]: I0312 20:47:43.139695 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:47:43.139942 master-0 kubenswrapper[4038]: I0312 20:47:43.139864 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 12 20:47:43.140007 master-0 kubenswrapper[4038]: I0312 20:47:43.139978 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 12 20:47:43.140048 master-0 kubenswrapper[4038]: I0312 20:47:43.140022 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: 
\"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 20:47:43.140086 master-0 kubenswrapper[4038]: I0312 20:47:43.139886 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 12 20:47:43.140086 master-0 kubenswrapper[4038]: I0312 20:47:43.140052 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:47:43.140159 master-0 kubenswrapper[4038]: I0312 20:47:43.140115 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:47:43.140159 master-0 kubenswrapper[4038]: I0312 20:47:43.140117 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:47:43.140159 master-0 kubenswrapper[4038]: I0312 20:47:43.140136 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: 
\"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 12 20:47:43.140269 master-0 kubenswrapper[4038]: I0312 20:47:43.140180 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 20:47:43.140269 master-0 kubenswrapper[4038]: I0312 20:47:43.140186 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:47:43.140269 master-0 kubenswrapper[4038]: I0312 20:47:43.140220 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:47:43.140269 master-0 kubenswrapper[4038]: I0312 20:47:43.140232 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 12 20:47:43.140269 master-0 kubenswrapper[4038]: I0312 20:47:43.140239 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: 
\"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:47:43.140269 master-0 kubenswrapper[4038]: I0312 20:47:43.140266 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 12 20:47:43.140464 master-0 kubenswrapper[4038]: I0312 20:47:43.140295 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 20:47:43.140464 master-0 kubenswrapper[4038]: I0312 20:47:43.140331 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 12 20:47:43.140464 master-0 kubenswrapper[4038]: I0312 20:47:43.140389 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 20:47:43.140464 master-0 kubenswrapper[4038]: I0312 20:47:43.140419 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: 
\"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 20:47:43.140569 master-0 kubenswrapper[4038]: I0312 20:47:43.140380 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 12 20:47:43.140569 master-0 kubenswrapper[4038]: I0312 20:47:43.140467 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 20:47:43.140569 master-0 kubenswrapper[4038]: I0312 20:47:43.140479 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 20:47:43.140569 master-0 kubenswrapper[4038]: I0312 20:47:43.140519 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 20:47:43.140569 master-0 kubenswrapper[4038]: I0312 20:47:43.140535 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 20:47:43.140724 master-0 kubenswrapper[4038]: I0312 20:47:43.140581 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 12 20:47:43.140724 master-0 kubenswrapper[4038]: I0312 20:47:43.140625 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 20:47:43.140781 master-0 kubenswrapper[4038]: I0312 20:47:43.140724 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 20:47:43.140781 master-0 kubenswrapper[4038]: I0312 20:47:43.140734 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 12 20:47:43.140868 master-0 kubenswrapper[4038]: I0312 20:47:43.140780 4038 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:47:43.140868 master-0 kubenswrapper[4038]: I0312 20:47:43.140801 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 20:47:43.140964 master-0 kubenswrapper[4038]: I0312 20:47:43.140877 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:47:43.228008 master-0 kubenswrapper[4038]: I0312 20:47:43.227909 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 20:47:43.229498 master-0 kubenswrapper[4038]: I0312 20:47:43.229462 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 20:47:43.229580 master-0 kubenswrapper[4038]: I0312 20:47:43.229528 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 20:47:43.229580 master-0 kubenswrapper[4038]: I0312 20:47:43.229553 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 20:47:43.229668 master-0 kubenswrapper[4038]: I0312 20:47:43.229633 4038 kubelet_node_status.go:76] "Attempting to 
register node" node="master-0" Mar 12 20:47:43.230936 master-0 kubenswrapper[4038]: E0312 20:47:43.230879 4038 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 12 20:47:43.320719 master-0 kubenswrapper[4038]: I0312 20:47:43.320624 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 12 20:47:43.342351 master-0 kubenswrapper[4038]: E0312 20:47:43.342250 4038 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Mar 12 20:47:43.361574 master-0 kubenswrapper[4038]: I0312 20:47:43.361392 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 12 20:47:43.388188 master-0 kubenswrapper[4038]: I0312 20:47:43.388086 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 20:47:43.404037 master-0 kubenswrapper[4038]: I0312 20:47:43.403948 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:47:43.413021 master-0 kubenswrapper[4038]: I0312 20:47:43.412959 4038 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 12 20:47:43.631891 master-0 kubenswrapper[4038]: I0312 20:47:43.631642 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 20:47:43.633382 master-0 kubenswrapper[4038]: I0312 20:47:43.633324 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 20:47:43.633382 master-0 kubenswrapper[4038]: I0312 20:47:43.633385 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 20:47:43.633516 master-0 kubenswrapper[4038]: I0312 20:47:43.633403 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 20:47:43.633516 master-0 kubenswrapper[4038]: I0312 20:47:43.633479 4038 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 12 20:47:43.634750 master-0 kubenswrapper[4038]: E0312 20:47:43.634688 4038 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 12 20:47:43.730223 master-0 kubenswrapper[4038]: I0312 20:47:43.730084 4038 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 20:47:44.000062 master-0 kubenswrapper[4038]: W0312 20:47:43.999774 4038 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 20:47:44.000062 master-0 kubenswrapper[4038]: E0312 20:47:43.999925 4038 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 12 20:47:44.037679 master-0 kubenswrapper[4038]: W0312 20:47:44.037574 4038 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf78c05e1499b533b83f091333d61f045.slice/crio-2a343ab165ef6275fd2082338584606fe4211638edf52ee8d11b7168b526ca52 WatchSource:0}: Error finding container 2a343ab165ef6275fd2082338584606fe4211638edf52ee8d11b7168b526ca52: Status 404 returned error can't find the container with id 2a343ab165ef6275fd2082338584606fe4211638edf52ee8d11b7168b526ca52
Mar 12 20:47:44.038315 master-0 kubenswrapper[4038]: W0312 20:47:44.038234 4038 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1a56802af72ce1aac6b5077f1695ac0.slice/crio-a980b97dcc609420950f26f74c5117d5a01a8f15aad34b4d8b39606d13541a42 WatchSource:0}: Error finding container a980b97dcc609420950f26f74c5117d5a01a8f15aad34b4d8b39606d13541a42: Status 404 returned error can't find the container with id a980b97dcc609420950f26f74c5117d5a01a8f15aad34b4d8b39606d13541a42
Mar 12 20:47:44.050906 master-0 kubenswrapper[4038]: I0312 20:47:44.050660 4038 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 12 20:47:44.065137 master-0 kubenswrapper[4038]: W0312 20:47:44.064994 4038 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod354f29997baa583b6238f7de9108ee10.slice/crio-bcb1938b5b091e5043b0e5f8777ba9dca967bde96ecf2d35469ff9b727211cb7 WatchSource:0}: Error finding container bcb1938b5b091e5043b0e5f8777ba9dca967bde96ecf2d35469ff9b727211cb7: Status 404 returned error can't find the container with id bcb1938b5b091e5043b0e5f8777ba9dca967bde96ecf2d35469ff9b727211cb7
Mar 12 20:47:44.088237 master-0 kubenswrapper[4038]: W0312 20:47:44.088120 4038 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9add8df47182fc2eaf8cd78016ebe72.slice/crio-565b353628a1ea63b479d26fa571cd76b79a30c51d66ca013ff8e18be2cee52e WatchSource:0}: Error finding container 565b353628a1ea63b479d26fa571cd76b79a30c51d66ca013ff8e18be2cee52e: Status 404 returned error can't find the container with id 565b353628a1ea63b479d26fa571cd76b79a30c51d66ca013ff8e18be2cee52e
Mar 12 20:47:44.128195 master-0 kubenswrapper[4038]: W0312 20:47:44.128123 4038 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f77c8e18b751d90bc0dfe2d4e304050.slice/crio-1ebefd5475e972825bea2703209db4a6c19fbc87674636be31770baa8cd7873b WatchSource:0}: Error finding container 1ebefd5475e972825bea2703209db4a6c19fbc87674636be31770baa8cd7873b: Status 404 returned error can't find the container with id 1ebefd5475e972825bea2703209db4a6c19fbc87674636be31770baa8cd7873b
Mar 12 20:47:44.144372 master-0 kubenswrapper[4038]: E0312 20:47:44.144296 4038 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s"
Mar 12 20:47:44.149925 master-0 kubenswrapper[4038]: W0312 20:47:44.149781 4038 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 20:47:44.150026 master-0 kubenswrapper[4038]: E0312 20:47:44.149947 4038 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 12 20:47:44.232597 master-0 kubenswrapper[4038]: W0312 20:47:44.232491 4038 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 20:47:44.232851 master-0 kubenswrapper[4038]: E0312 20:47:44.232603 4038 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 12 20:47:44.312277 master-0 kubenswrapper[4038]: W0312 20:47:44.312063 4038 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 20:47:44.312277 master-0 kubenswrapper[4038]: E0312 20:47:44.312202 4038 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 12 20:47:44.435953 master-0 kubenswrapper[4038]: I0312 20:47:44.435858 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 20:47:44.437918 master-0 kubenswrapper[4038]: I0312 20:47:44.437861 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 20:47:44.438042 master-0 kubenswrapper[4038]: I0312 20:47:44.437928 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 20:47:44.438042 master-0 kubenswrapper[4038]: I0312 20:47:44.437951 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 20:47:44.438042 master-0 kubenswrapper[4038]: I0312 20:47:44.438033 4038 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 12 20:47:44.439323 master-0 kubenswrapper[4038]: E0312 20:47:44.439257 4038 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 12 20:47:44.729725 master-0 kubenswrapper[4038]: I0312 20:47:44.729650 4038 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 20:47:44.739887 master-0 kubenswrapper[4038]: E0312 20:47:44.739723 4038 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189c3307ffccbc60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:42.726528096 +0000 UTC m=+0.762210029,LastTimestamp:2026-03-12 20:47:42.726528096 +0000 UTC m=+0.762210029,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 20:47:44.765033 master-0 kubenswrapper[4038]: I0312 20:47:44.764977 4038 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 12 20:47:44.766189 master-0 kubenswrapper[4038]: E0312 20:47:44.766161 4038 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 12 20:47:44.886902 master-0 kubenswrapper[4038]: I0312 20:47:44.886766 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"1ebefd5475e972825bea2703209db4a6c19fbc87674636be31770baa8cd7873b"}
Mar 12 20:47:44.888007 master-0 kubenswrapper[4038]: I0312 20:47:44.887982 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"565b353628a1ea63b479d26fa571cd76b79a30c51d66ca013ff8e18be2cee52e"}
Mar 12 20:47:44.889020 master-0 kubenswrapper[4038]: I0312 20:47:44.888998 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"bcb1938b5b091e5043b0e5f8777ba9dca967bde96ecf2d35469ff9b727211cb7"}
Mar 12 20:47:44.890032 master-0 kubenswrapper[4038]: I0312 20:47:44.890010 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"a980b97dcc609420950f26f74c5117d5a01a8f15aad34b4d8b39606d13541a42"}
Mar 12 20:47:44.891630 master-0 kubenswrapper[4038]: I0312 20:47:44.891608 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"2a343ab165ef6275fd2082338584606fe4211638edf52ee8d11b7168b526ca52"}
Mar 12 20:47:45.729854 master-0 kubenswrapper[4038]: I0312 20:47:45.729777 4038 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 20:47:45.745820 master-0 kubenswrapper[4038]: E0312 20:47:45.745762 4038 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s"
Mar 12 20:47:46.040237 master-0 kubenswrapper[4038]: I0312 20:47:46.039916 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 20:47:46.041479 master-0 kubenswrapper[4038]: I0312 20:47:46.041454 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 20:47:46.041575 master-0 kubenswrapper[4038]: I0312 20:47:46.041487 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 20:47:46.041575 master-0 kubenswrapper[4038]: I0312 20:47:46.041499 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 20:47:46.041575 master-0 kubenswrapper[4038]: I0312 20:47:46.041544 4038 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 12 20:47:46.070656 master-0 kubenswrapper[4038]: E0312 20:47:46.070598 4038 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 12 20:47:46.262190 master-0 kubenswrapper[4038]: W0312 20:47:46.261990 4038 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 20:47:46.262190 master-0 kubenswrapper[4038]: E0312 20:47:46.262060 4038 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 12 20:47:46.436096 master-0 kubenswrapper[4038]: W0312 20:47:46.436031 4038 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 20:47:46.436096 master-0 kubenswrapper[4038]: E0312 20:47:46.436095 4038 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 12 20:47:46.729666 master-0 kubenswrapper[4038]: I0312 20:47:46.729600 4038 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 20:47:46.826623 master-0 kubenswrapper[4038]: W0312 20:47:46.826436 4038 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 20:47:46.826623 master-0 kubenswrapper[4038]: E0312 20:47:46.826549 4038 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 12 20:47:46.860921 master-0 kubenswrapper[4038]: W0312 20:47:46.860864 4038 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 20:47:46.860973 master-0 kubenswrapper[4038]: E0312 20:47:46.860930 4038 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 12 20:47:46.898007 master-0 kubenswrapper[4038]: I0312 20:47:46.897960 4038 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="5aa72aa1d101c59af48adafd81202e715494ce655baaeb5ca917a23de1012db8" exitCode=0
Mar 12 20:47:46.898075 master-0 kubenswrapper[4038]: I0312 20:47:46.898027 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"5aa72aa1d101c59af48adafd81202e715494ce655baaeb5ca917a23de1012db8"}
Mar 12 20:47:46.898138 master-0 kubenswrapper[4038]: I0312 20:47:46.898108 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 20:47:46.899128 master-0 kubenswrapper[4038]: I0312 20:47:46.899101 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 20:47:46.899179 master-0 kubenswrapper[4038]: I0312 20:47:46.899136 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 20:47:46.899179 master-0 kubenswrapper[4038]: I0312 20:47:46.899146 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 20:47:47.730096 master-0 kubenswrapper[4038]: I0312 20:47:47.730023 4038 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 20:47:47.904703 master-0 kubenswrapper[4038]: I0312 20:47:47.904641 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"32b57ce4e66fc70ca937a57ebca0915b26069ef8bb25e1ae1b25bda655e0ef63"}
Mar 12 20:47:47.904703 master-0 kubenswrapper[4038]: I0312 20:47:47.904691 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"2345e4b4a496bb5d1af4b4d3dcfdac80e0d3cab03968a70bb1a28a27cbc4f272"}
Mar 12 20:47:47.905494 master-0 kubenswrapper[4038]: I0312 20:47:47.904762 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 20:47:47.905610 master-0 kubenswrapper[4038]: I0312 20:47:47.905584 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 20:47:47.905610 master-0 kubenswrapper[4038]: I0312 20:47:47.905610 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 20:47:47.905696 master-0 kubenswrapper[4038]: I0312 20:47:47.905619 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 20:47:47.906974 master-0 kubenswrapper[4038]: I0312 20:47:47.906951 4038 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/0.log"
Mar 12 20:47:47.907339 master-0 kubenswrapper[4038]: I0312 20:47:47.907313 4038 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="6f7da77829071ea7a257d7da10cc7073704ef90adead46467e600c825c29d03b" exitCode=1
Mar 12 20:47:47.907390 master-0 kubenswrapper[4038]: I0312 20:47:47.907339 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"6f7da77829071ea7a257d7da10cc7073704ef90adead46467e600c825c29d03b"}
Mar 12 20:47:47.907435 master-0 kubenswrapper[4038]: I0312 20:47:47.907391 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 20:47:47.907937 master-0 kubenswrapper[4038]: I0312 20:47:47.907913 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 20:47:47.907937 master-0 kubenswrapper[4038]: I0312 20:47:47.907934 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 20:47:47.908032 master-0 kubenswrapper[4038]: I0312 20:47:47.907942 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 20:47:47.908133 master-0 kubenswrapper[4038]: I0312 20:47:47.908113 4038 scope.go:117] "RemoveContainer" containerID="6f7da77829071ea7a257d7da10cc7073704ef90adead46467e600c825c29d03b"
Mar 12 20:47:48.729601 master-0 kubenswrapper[4038]: I0312 20:47:48.729425 4038 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 20:47:48.912939 master-0 kubenswrapper[4038]: I0312 20:47:48.912867 4038 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/1.log"
Mar 12 20:47:48.913692 master-0 kubenswrapper[4038]: I0312 20:47:48.913415 4038 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/0.log"
Mar 12 20:47:48.913777 master-0 kubenswrapper[4038]: I0312 20:47:48.913740 4038 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="b081787d6b97efdb0eb46c37f91de3fd3cfa1b4ba222c6e1d991e763352c76bb" exitCode=1
Mar 12 20:47:48.913937 master-0 kubenswrapper[4038]: I0312 20:47:48.913901 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 20:47:48.914587 master-0 kubenswrapper[4038]: I0312 20:47:48.914553 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 20:47:48.914985 master-0 kubenswrapper[4038]: I0312 20:47:48.914948 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"b081787d6b97efdb0eb46c37f91de3fd3cfa1b4ba222c6e1d991e763352c76bb"}
Mar 12 20:47:48.915065 master-0 kubenswrapper[4038]: I0312 20:47:48.915011 4038 scope.go:117] "RemoveContainer" containerID="6f7da77829071ea7a257d7da10cc7073704ef90adead46467e600c825c29d03b"
Mar 12 20:47:48.915862 master-0 kubenswrapper[4038]: I0312 20:47:48.915783 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 20:47:48.915862 master-0 kubenswrapper[4038]: I0312 20:47:48.915839 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 20:47:48.915862 master-0 kubenswrapper[4038]: I0312 20:47:48.915857 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 20:47:48.916468 master-0 kubenswrapper[4038]: I0312 20:47:48.916363 4038 scope.go:117] "RemoveContainer" containerID="b081787d6b97efdb0eb46c37f91de3fd3cfa1b4ba222c6e1d991e763352c76bb"
Mar 12 20:47:48.916752 master-0 kubenswrapper[4038]: E0312 20:47:48.916679 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72"
Mar 12 20:47:48.916969 master-0 kubenswrapper[4038]: I0312 20:47:48.916807 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 20:47:48.916969 master-0 kubenswrapper[4038]: I0312 20:47:48.916970 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 20:47:48.917112 master-0 kubenswrapper[4038]: I0312 20:47:48.916985 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 20:47:48.948207 master-0 kubenswrapper[4038]: E0312 20:47:48.948133 4038 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s"
Mar 12 20:47:49.024572 master-0 kubenswrapper[4038]: I0312 20:47:49.024434 4038 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 12 20:47:49.025941 master-0 kubenswrapper[4038]: E0312 20:47:49.025907 4038 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 12 20:47:49.271185 master-0 kubenswrapper[4038]: I0312 20:47:49.271081 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 20:47:49.272464 master-0 kubenswrapper[4038]: I0312 20:47:49.272417 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 20:47:49.272554 master-0 kubenswrapper[4038]: I0312 20:47:49.272478 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 20:47:49.272554 master-0 kubenswrapper[4038]: I0312 20:47:49.272496 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 20:47:49.272676 master-0 kubenswrapper[4038]: I0312 20:47:49.272566 4038 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 12 20:47:49.273740 master-0 kubenswrapper[4038]: E0312 20:47:49.273685 4038 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 12 20:47:49.730233 master-0 kubenswrapper[4038]: I0312 20:47:49.730126 4038 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 20:47:49.916615 master-0 kubenswrapper[4038]: I0312 20:47:49.916529 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 20:47:49.917981 master-0 kubenswrapper[4038]: I0312 20:47:49.917932 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 20:47:49.918166 master-0 kubenswrapper[4038]: I0312 20:47:49.918000 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 20:47:49.918166 master-0 kubenswrapper[4038]: I0312 20:47:49.918028 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 20:47:49.918639 master-0 kubenswrapper[4038]: I0312 20:47:49.918605 4038 scope.go:117] "RemoveContainer" containerID="b081787d6b97efdb0eb46c37f91de3fd3cfa1b4ba222c6e1d991e763352c76bb"
Mar 12 20:47:49.918961 master-0 kubenswrapper[4038]: E0312 20:47:49.918919 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72"
Mar 12 20:47:50.427276 master-0 kubenswrapper[4038]: W0312 20:47:50.427145 4038 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 20:47:50.427276 master-0 kubenswrapper[4038]: E0312 20:47:50.427274 4038 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 12 20:47:50.730295 master-0 kubenswrapper[4038]: I0312 20:47:50.730134 4038 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 20:47:50.907221 master-0 kubenswrapper[4038]: W0312 20:47:50.907110 4038 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 20:47:50.907221 master-0 kubenswrapper[4038]: E0312 20:47:50.907209 4038 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 12 20:47:51.248241 master-0 kubenswrapper[4038]: W0312 20:47:51.248142 4038 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 20:47:51.248241 master-0 kubenswrapper[4038]: E0312 20:47:51.248227 4038 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 12 20:47:51.729652 master-0 kubenswrapper[4038]: I0312 20:47:51.729585 4038 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 20:47:51.802731 master-0 kubenswrapper[4038]: W0312 20:47:51.802560 4038 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 12 20:47:51.802731 master-0 kubenswrapper[4038]: E0312 20:47:51.802649 4038 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 12 20:47:51.925298 master-0 kubenswrapper[4038]: I0312 20:47:51.924325 4038 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="30bcb0d2fdcb56e224f2a443567cf3f56d89a253adb3d5c2682e4fce2aac1458" exitCode=0
Mar 12 20:47:51.925298 master-0 kubenswrapper[4038]: I0312 20:47:51.924423 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerDied","Data":"30bcb0d2fdcb56e224f2a443567cf3f56d89a253adb3d5c2682e4fce2aac1458"}
Mar 12 20:47:51.925298 master-0 kubenswrapper[4038]: I0312 20:47:51.924562 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 20:47:51.926743 master-0 kubenswrapper[4038]: I0312 20:47:51.926021 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 20:47:51.926743 master-0 kubenswrapper[4038]: I0312 20:47:51.926064 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 20:47:51.926743 master-0 kubenswrapper[4038]: I0312 20:47:51.926082 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 20:47:51.928901 master-0 kubenswrapper[4038]: I0312 20:47:51.928679 4038 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/1.log"
Mar 12 20:47:51.931034 master-0 kubenswrapper[4038]: I0312 20:47:51.930450 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 20:47:51.931409 master-0 kubenswrapper[4038]: I0312 20:47:51.931385 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 20:47:51.931738 master-0 kubenswrapper[4038]: I0312 20:47:51.931719 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 20:47:51.932009 master-0 kubenswrapper[4038]: I0312 20:47:51.931908 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 20:47:51.933548 master-0 kubenswrapper[4038]: I0312 20:47:51.932773 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"dc7d8b29ebb567785e771d22b9996a6a97141570cdafc6702bfef40b35ac45e8"}
Mar 12 20:47:51.933548 master-0 kubenswrapper[4038]: I0312 20:47:51.932884 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 20:47:51.934531 master-0 kubenswrapper[4038]: I0312 20:47:51.933877 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 20:47:51.934531 master-0 kubenswrapper[4038]: I0312 20:47:51.933913 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 20:47:51.934531 master-0 kubenswrapper[4038]: I0312 20:47:51.933929 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 20:47:51.939162 master-0 kubenswrapper[4038]: I0312 20:47:51.939056 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"75f2edc443b69729f543241a91ed5a8e5413482100b656bdfab3d5233a2312c3"}
Mar 12 20:47:52.925413 master-0 kubenswrapper[4038]: E0312 20:47:52.925324 4038 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 12 20:47:52.944595 master-0 kubenswrapper[4038]: I0312 20:47:52.944521 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"0c4f41c6272feddd07ae16e6e9ba5929d190e5949f49ce16a888e464f3277bb3"}
Mar 12 20:47:52.951573 master-0 kubenswrapper[4038]: I0312 20:47:52.951517 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"75f2edc443b69729f543241a91ed5a8e5413482100b656bdfab3d5233a2312c3"}
Mar 12 20:47:52.951688 master-0 kubenswrapper[4038]: I0312 20:47:52.951525 4038 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="75f2edc443b69729f543241a91ed5a8e5413482100b656bdfab3d5233a2312c3" exitCode=1
Mar 12 20:47:52.951733 master-0 kubenswrapper[4038]: I0312 20:47:52.951685 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 20:47:52.952897 master-0 kubenswrapper[4038]: I0312 20:47:52.952451 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 20:47:52.952897 master-0 kubenswrapper[4038]: I0312 20:47:52.952476 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 20:47:52.952897 master-0 kubenswrapper[4038]: I0312 20:47:52.952489 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 20:47:53.850129 master-0 kubenswrapper[4038]: I0312 20:47:53.850073 4038 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 20:47:54.733881 master-0 kubenswrapper[4038]: I0312 20:47:54.733829 4038 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 20:47:54.745120 master-0 kubenswrapper[4038]: E0312 20:47:54.744996 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c3307ffccbc60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:42.726528096 +0000 UTC m=+0.762210029,LastTimestamp:2026-03-12 20:47:42.726528096 +0000 UTC m=+0.762210029,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 20:47:54.749439 master-0 kubenswrapper[4038]: E0312 20:47:54.749350 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c330804169764 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:42.798477156 +0000 UTC m=+0.834159019,LastTimestamp:2026-03-12 20:47:42.798477156 +0000 UTC m=+0.834159019,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:54.754710 master-0 kubenswrapper[4038]: E0312 20:47:54.754173 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c33080416ed6a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:42.798499178 +0000 UTC m=+0.834181041,LastTimestamp:2026-03-12 20:47:42.798499178 +0000 UTC m=+0.834181041,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:54.759132 master-0 kubenswrapper[4038]: E0312 20:47:54.758984 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{master-0.189c330804170fb6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:42.798507958 +0000 UTC m=+0.834189811,LastTimestamp:2026-03-12 20:47:42.798507958 +0000 UTC m=+0.834189811,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:54.764383 master-0 kubenswrapper[4038]: E0312 20:47:54.764256 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c33080be87ca7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:42.929673383 +0000 UTC m=+0.965355236,LastTimestamp:2026-03-12 20:47:42.929673383 +0000 UTC m=+0.965355236,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:54.769768 master-0 kubenswrapper[4038]: E0312 20:47:54.769707 4038 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c330804169764\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c330804169764 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:42.798477156 +0000 UTC m=+0.834159019,LastTimestamp:2026-03-12 20:47:42.980854183 +0000 UTC m=+1.016536086,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:54.774857 master-0 kubenswrapper[4038]: E0312 20:47:54.774683 4038 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c33080416ed6a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c33080416ed6a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:42.798499178 +0000 UTC m=+0.834181041,LastTimestamp:2026-03-12 20:47:42.980884721 +0000 UTC m=+1.016566624,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:54.779565 master-0 kubenswrapper[4038]: E0312 20:47:54.779416 4038 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c330804170fb6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c330804170fb6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:42.798507958 +0000 UTC m=+0.834189811,LastTimestamp:2026-03-12 20:47:42.98090029 +0000 UTC m=+1.016582183,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:54.784090 master-0 kubenswrapper[4038]: E0312 20:47:54.783926 4038 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c330804169764\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c330804169764 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:42.798477156 +0000 UTC m=+0.834159019,LastTimestamp:2026-03-12 20:47:42.982093188 +0000 UTC m=+1.017775091,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:54.788351 master-0 kubenswrapper[4038]: E0312 20:47:54.788264 4038 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c33080416ed6a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c33080416ed6a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:42.798499178 +0000 UTC m=+0.834181041,LastTimestamp:2026-03-12 20:47:42.982128636 +0000 UTC m=+1.017810539,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:54.791844 master-0 kubenswrapper[4038]: E0312 20:47:54.791742 4038 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c330804170fb6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c330804170fb6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:42.798507958 +0000 UTC m=+0.834189811,LastTimestamp:2026-03-12 20:47:42.982144605 +0000 UTC m=+1.017826508,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:54.795798 master-0 kubenswrapper[4038]: E0312 20:47:54.795689 4038 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c330804169764\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c330804169764 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:42.798477156 +0000 UTC m=+0.834159019,LastTimestamp:2026-03-12 20:47:42.982344726 +0000 UTC m=+1.018026589,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:54.799537 master-0 kubenswrapper[4038]: E0312 20:47:54.799429 4038 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c33080416ed6a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c33080416ed6a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:42.798499178 +0000 UTC m=+0.834181041,LastTimestamp:2026-03-12 20:47:42.982465809 +0000 UTC m=+1.018147702,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:54.803073 master-0 kubenswrapper[4038]: E0312 20:47:54.802990 4038 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c330804170fb6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c330804170fb6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:42.798507958 +0000 UTC m=+0.834189811,LastTimestamp:2026-03-12 20:47:42.982494908 +0000 UTC m=+1.018176811,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:54.806524 master-0 kubenswrapper[4038]: E0312 20:47:54.806381 4038 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c330804169764\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c330804169764 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:42.798477156 +0000 UTC m=+0.834159019,LastTimestamp:2026-03-12 20:47:42.983317945 +0000 UTC m=+1.018999848,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:54.810426 master-0 kubenswrapper[4038]: E0312 20:47:54.810300 4038 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c33080416ed6a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c33080416ed6a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:42.798499178 +0000 UTC m=+0.834181041,LastTimestamp:2026-03-12 20:47:42.983356463 +0000 UTC m=+1.019038366,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:54.814231 master-0 kubenswrapper[4038]: E0312 20:47:54.814054 4038 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c330804170fb6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c330804170fb6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:42.798507958 +0000 UTC m=+0.834189811,LastTimestamp:2026-03-12 20:47:42.983377432 +0000 UTC m=+1.019059335,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:54.818319 master-0 kubenswrapper[4038]: E0312 20:47:54.818248 4038 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c330804169764\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c330804169764 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:42.798477156 +0000 UTC m=+0.834159019,LastTimestamp:2026-03-12 20:47:42.983925394 +0000 UTC m=+1.019607257,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:54.821720 master-0 kubenswrapper[4038]: E0312 20:47:54.821638 4038 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c33080416ed6a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c33080416ed6a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:42.798499178 +0000 UTC m=+0.834181041,LastTimestamp:2026-03-12 20:47:42.983949923 +0000 UTC m=+1.019631796,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:54.826050 master-0 kubenswrapper[4038]: E0312 20:47:54.825903 4038 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c330804170fb6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c330804170fb6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:42.798507958 +0000 UTC m=+0.834189811,LastTimestamp:2026-03-12 20:47:42.983960302 +0000 UTC m=+1.019642165,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:54.829728 master-0 kubenswrapper[4038]: E0312 20:47:54.829627 4038 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c330804169764\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c330804169764 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:42.798477156 +0000 UTC m=+0.834159019,LastTimestamp:2026-03-12 20:47:42.984526483 +0000 UTC m=+1.020208376,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:54.851847 master-0 kubenswrapper[4038]: E0312 20:47:54.833922 4038 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c33080416ed6a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c33080416ed6a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:42.798499178 +0000 UTC m=+0.834181041,LastTimestamp:2026-03-12 20:47:42.984546381 +0000 UTC m=+1.020228274,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:54.895458 master-0 kubenswrapper[4038]: E0312 20:47:54.895229 4038 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c330804170fb6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c330804170fb6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:42.798507958 +0000 UTC m=+0.834189811,LastTimestamp:2026-03-12 20:47:42.984562131 +0000 UTC m=+1.020244024,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:54.913839 master-0 kubenswrapper[4038]: E0312 20:47:54.912855 4038 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c330804169764\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c330804169764 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:42.798477156 +0000 UTC m=+0.834159019,LastTimestamp:2026-03-12 20:47:42.984641297 +0000 UTC m=+1.020323200,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:54.926857 master-0 kubenswrapper[4038]: E0312 20:47:54.923265 4038 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c33080416ed6a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c33080416ed6a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:42.798499178 +0000 UTC m=+0.834181041,LastTimestamp:2026-03-12 20:47:42.984668095 +0000 UTC m=+1.020349998,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:54.934106 master-0 kubenswrapper[4038]: E0312 20:47:54.931091 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189c33084eb87907 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:44.050600199 +0000 UTC m=+2.086282102,LastTimestamp:2026-03-12 20:47:44.050600199 +0000 UTC m=+2.086282102,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:54.935947 master-0 kubenswrapper[4038]: E0312 20:47:54.935748 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c33084eb9be3f kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:44.050683455 +0000 UTC m=+2.086365358,LastTimestamp:2026-03-12 20:47:44.050683455 +0000 UTC m=+2.086365358,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:54.940351 master-0 kubenswrapper[4038]: E0312 20:47:54.940240 4038 event.go:359] "Server rejected event (will 
not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c33084fc5efa3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:44.068259747 +0000 UTC m=+2.103941640,LastTimestamp:2026-03-12 20:47:44.068259747 +0000 UTC m=+2.103941640,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:54.947915 master-0 kubenswrapper[4038]: E0312 20:47:54.947791 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c33085124d9df openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:44.091257311 +0000 UTC m=+2.126939204,LastTimestamp:2026-03-12 20:47:44.091257311 
+0000 UTC m=+2.126939204,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:54.954346 master-0 kubenswrapper[4038]: E0312 20:47:54.954208 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c33085383878d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:44.131016589 +0000 UTC m=+2.166698492,LastTimestamp:2026-03-12 20:47:44.131016589 +0000 UTC m=+2.166698492,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:54.969840 master-0 kubenswrapper[4038]: E0312 20:47:54.968145 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c3308b4a9a1c9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" in 1.669s (1.669s including waiting). Image size: 465086330 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:45.760903625 +0000 UTC m=+3.796585488,LastTimestamp:2026-03-12 20:47:45.760903625 +0000 UTC m=+3.796585488,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:54.983304 master-0 kubenswrapper[4038]: E0312 20:47:54.983129 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c3308c1e92222 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:45.983169058 +0000 UTC m=+4.018850931,LastTimestamp:2026-03-12 20:47:45.983169058 +0000 UTC m=+4.018850931,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:54.994658 master-0 kubenswrapper[4038]: E0312 20:47:54.993684 4038 event.go:359] "Server rejected event 
(will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c3308c2eb847c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:46.000102524 +0000 UTC m=+4.035784417,LastTimestamp:2026-03-12 20:47:46.000102524 +0000 UTC m=+4.035784417,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:55.000986 master-0 kubenswrapper[4038]: E0312 20:47:55.000749 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c3308f503420e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\" in 2.772s (2.772s including waiting). 
Image size: 529324693 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:46.840519182 +0000 UTC m=+4.876201045,LastTimestamp:2026-03-12 20:47:46.840519182 +0000 UTC m=+4.876201045,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:55.006455 master-0 kubenswrapper[4038]: E0312 20:47:55.006334 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c3308f8bc6fb9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:46.902986681 +0000 UTC m=+4.938668544,LastTimestamp:2026-03-12 20:47:46.902986681 +0000 UTC m=+4.938668544,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:55.010152 master-0 kubenswrapper[4038]: E0312 20:47:55.010061 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c330901b7bf21 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] 
[] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:47.053674273 +0000 UTC m=+5.089356136,LastTimestamp:2026-03-12 20:47:47.053674273 +0000 UTC m=+5.089356136,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:55.014535 master-0 kubenswrapper[4038]: E0312 20:47:55.014447 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c3309026f4deb openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:47.065703915 +0000 UTC m=+5.101385778,LastTimestamp:2026-03-12 20:47:47.065703915 +0000 UTC m=+5.101385778,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:55.019632 master-0 kubenswrapper[4038]: E0312 20:47:55.019478 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-master-0-master-0.189c3309028d0af4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:47.067652852 +0000 UTC m=+5.103334715,LastTimestamp:2026-03-12 20:47:47.067652852 +0000 UTC m=+5.103334715,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:55.025031 master-0 kubenswrapper[4038]: E0312 20:47:55.024833 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c330902d14b21 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:47.072125729 +0000 UTC m=+5.107807592,LastTimestamp:2026-03-12 20:47:47.072125729 +0000 UTC m=+5.107807592,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:55.029710 master-0 kubenswrapper[4038]: E0312 20:47:55.029592 4038 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c330903d287d0 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:47.088984016 +0000 UTC m=+5.124665879,LastTimestamp:2026-03-12 20:47:47.088984016 +0000 UTC m=+5.124665879,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:55.034005 master-0 kubenswrapper[4038]: E0312 20:47:55.033880 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c33090e5f47a6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:47.265980326 +0000 UTC m=+5.301662189,LastTimestamp:2026-03-12 20:47:47.265980326 +0000 UTC m=+5.301662189,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:55.039921 master-0 
kubenswrapper[4038]: E0312 20:47:55.039304 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c33090f3c2594 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:47.28045506 +0000 UTC m=+5.316136923,LastTimestamp:2026-03-12 20:47:47.28045506 +0000 UTC m=+5.316136923,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:55.044569 master-0 kubenswrapper[4038]: E0312 20:47:55.044430 4038 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c3308f8bc6fb9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c3308f8bc6fb9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:46.902986681 +0000 UTC 
m=+4.938668544,LastTimestamp:2026-03-12 20:47:47.913204302 +0000 UTC m=+5.948886165,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:55.056567 master-0 kubenswrapper[4038]: E0312 20:47:55.056398 4038 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c3309026f4deb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c3309026f4deb openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:47.065703915 +0000 UTC m=+5.101385778,LastTimestamp:2026-03-12 20:47:48.277081877 +0000 UTC m=+6.312763740,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:55.062529 master-0 kubenswrapper[4038]: E0312 20:47:55.061997 4038 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c330903d287d0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c330903d287d0 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:47.088984016 +0000 UTC m=+5.124665879,LastTimestamp:2026-03-12 20:47:48.418566569 +0000 UTC m=+6.454248432,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:55.068126 master-0 kubenswrapper[4038]: E0312 20:47:55.067890 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c330970c1f59e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:48.916614558 +0000 UTC m=+6.952296431,LastTimestamp:2026-03-12 20:47:48.916614558 +0000 UTC m=+6.952296431,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:55.075655 master-0 kubenswrapper[4038]: E0312 20:47:55.075495 4038 event.go:359] "Server rejected event (will not 
retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c330970c1f59e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c330970c1f59e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:48.916614558 +0000 UTC m=+6.952296431,LastTimestamp:2026-03-12 20:47:49.918873807 +0000 UTC m=+7.954555710,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:55.081830 master-0 kubenswrapper[4038]: E0312 20:47:55.081728 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c330a0052726a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" in 7.194s (7.194s 
including waiting). Image size: 943837171 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:51.325225578 +0000 UTC m=+9.360907451,LastTimestamp:2026-03-12 20:47:51.325225578 +0000 UTC m=+9.360907451,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:55.085573 master-0 kubenswrapper[4038]: E0312 20:47:55.085474 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c330a02032c42 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" in 7.302s (7.302s including waiting). 
Image size: 943837171 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:51.353584706 +0000 UTC m=+9.389266579,LastTimestamp:2026-03-12 20:47:51.353584706 +0000 UTC m=+9.389266579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:55.095959 master-0 kubenswrapper[4038]: E0312 20:47:55.095716 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189c330a025e5dbf kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" in 7.308s (7.308s including waiting). 
Image size: 943837171 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:51.359561151 +0000 UTC m=+9.395243034,LastTimestamp:2026-03-12 20:47:51.359561151 +0000 UTC m=+9.395243034,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:55.101247 master-0 kubenswrapper[4038]: E0312 20:47:55.101074 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c330a0f5dd38c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:51.57762958 +0000 UTC m=+9.613311473,LastTimestamp:2026-03-12 20:47:51.57762958 +0000 UTC m=+9.613311473,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:55.106461 master-0 kubenswrapper[4038]: E0312 20:47:55.106120 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c330a0f6f1212 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:51.578759698 +0000 UTC m=+9.614441561,LastTimestamp:2026-03-12 20:47:51.578759698 +0000 UTC m=+9.614441561,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:55.111189 master-0 kubenswrapper[4038]: E0312 20:47:55.111094 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189c330a0fc77736 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:51.584552758 +0000 UTC m=+9.620234651,LastTimestamp:2026-03-12 20:47:51.584552758 +0000 UTC m=+9.620234651,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:55.116567 master-0 kubenswrapper[4038]: E0312 20:47:55.116462 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c330a103ce8be openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:51.592249534 +0000 UTC m=+9.627931417,LastTimestamp:2026-03-12 20:47:51.592249534 +0000 UTC m=+9.627931417,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:55.121348 master-0 kubenswrapper[4038]: E0312 20:47:55.121083 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c330a1044f710 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:51.592777488 +0000 UTC m=+9.628459351,LastTimestamp:2026-03-12 20:47:51.592777488 +0000 UTC m=+9.628459351,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:55.125045 master-0 kubenswrapper[4038]: E0312 20:47:55.124956 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c330a1055af8d kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:51.593873293 +0000 UTC m=+9.629555196,LastTimestamp:2026-03-12 20:47:51.593873293 +0000 UTC m=+9.629555196,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:55.131042 master-0 kubenswrapper[4038]: E0312 20:47:55.130731 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189c330a107d00c2 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:51.596449986 +0000 UTC m=+9.632131879,LastTimestamp:2026-03-12 20:47:51.596449986 +0000 UTC m=+9.632131879,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:55.136037 master-0 kubenswrapper[4038]: E0312 20:47:55.135876 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c330a24644c12 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:51.930375186 +0000 UTC m=+9.966057079,LastTimestamp:2026-03-12 20:47:51.930375186 +0000 UTC m=+9.966057079,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:55.142217 master-0 kubenswrapper[4038]: E0312 20:47:55.142044 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c330a35657ee6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created 
container: kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:52.215666406 +0000 UTC m=+10.251348299,LastTimestamp:2026-03-12 20:47:52.215666406 +0000 UTC m=+10.251348299,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:55.147204 master-0 kubenswrapper[4038]: E0312 20:47:55.147049 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c330a364ed56e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:52.230958446 +0000 UTC m=+10.266640309,LastTimestamp:2026-03-12 20:47:52.230958446 +0000 UTC m=+10.266640309,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:55.151743 master-0 kubenswrapper[4038]: E0312 20:47:55.151149 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c330a3660c9a2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:52.232135074 +0000 UTC m=+10.267816977,LastTimestamp:2026-03-12 20:47:52.232135074 +0000 UTC m=+10.267816977,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:55.157087 master-0 kubenswrapper[4038]: E0312 20:47:55.156955 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c330adeb41024 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\" in 3.462s (3.462s including waiting). 
Image size: 505242594 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:55.0561649 +0000 UTC m=+13.091846783,LastTimestamp:2026-03-12 20:47:55.0561649 +0000 UTC m=+13.091846783,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:55.161088 master-0 kubenswrapper[4038]: E0312 20:47:55.161000 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c330adf0001c9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\" in 2.828s (2.828s including waiting). 
Image size: 514980169 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:55.061141961 +0000 UTC m=+13.096823824,LastTimestamp:2026-03-12 20:47:55.061141961 +0000 UTC m=+13.096823824,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:55.286187 master-0 kubenswrapper[4038]: E0312 20:47:55.286044 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c330aec0c9764 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:55.2800705 +0000 UTC m=+13.315752363,LastTimestamp:2026-03-12 20:47:55.2800705 +0000 UTC m=+13.315752363,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:55.291838 master-0 kubenswrapper[4038]: E0312 20:47:55.291727 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c330aec11c0d4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:55.280408788 +0000 UTC m=+13.316090651,LastTimestamp:2026-03-12 20:47:55.280408788 +0000 UTC m=+13.316090651,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:55.296959 master-0 kubenswrapper[4038]: E0312 20:47:55.296881 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c330aec9f9022 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:55.289702434 +0000 UTC m=+13.325384297,LastTimestamp:2026-03-12 20:47:55.289702434 +0000 UTC m=+13.325384297,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:55.301765 master-0 kubenswrapper[4038]: E0312 20:47:55.301674 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c330aecc56eb5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:55.292184245 +0000 UTC m=+13.327866118,LastTimestamp:2026-03-12 20:47:55.292184245 +0000 UTC m=+13.327866118,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:55.356356 master-0 kubenswrapper[4038]: E0312 20:47:55.356261 4038 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 12 20:47:55.674457 master-0 kubenswrapper[4038]: I0312 20:47:55.674348 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 20:47:55.676174 master-0 kubenswrapper[4038]: I0312 20:47:55.676108 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 20:47:55.676223 master-0 kubenswrapper[4038]: I0312 20:47:55.676194 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 20:47:55.676223 master-0 kubenswrapper[4038]: I0312 20:47:55.676213 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 20:47:55.676341 master-0 kubenswrapper[4038]: I0312 
20:47:55.676307 4038 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 12 20:47:55.685590 master-0 kubenswrapper[4038]: E0312 20:47:55.685513 4038 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Mar 12 20:47:55.737529 master-0 kubenswrapper[4038]: I0312 20:47:55.737430 4038 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 20:47:55.965387 master-0 kubenswrapper[4038]: I0312 20:47:55.965154 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"84660712d67f9b33ee49163616de93d3b9e986937307a8e3781dd5d5f489844c"} Mar 12 20:47:55.965387 master-0 kubenswrapper[4038]: I0312 20:47:55.965224 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 20:47:55.967103 master-0 kubenswrapper[4038]: I0312 20:47:55.966481 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 20:47:55.967103 master-0 kubenswrapper[4038]: I0312 20:47:55.966540 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 20:47:55.967103 master-0 kubenswrapper[4038]: I0312 20:47:55.966561 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 20:47:55.967103 master-0 kubenswrapper[4038]: I0312 20:47:55.967063 4038 scope.go:117] "RemoveContainer" containerID="75f2edc443b69729f543241a91ed5a8e5413482100b656bdfab3d5233a2312c3" Mar 12 20:47:55.970258 
master-0 kubenswrapper[4038]: I0312 20:47:55.970173 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"293b592a6aebbbbed58da86d9dee8f9df9bbf7c626aca82c95e65d3a571789d2"} Mar 12 20:47:55.970434 master-0 kubenswrapper[4038]: I0312 20:47:55.970386 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 20:47:55.971571 master-0 kubenswrapper[4038]: I0312 20:47:55.971524 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 20:47:55.971718 master-0 kubenswrapper[4038]: I0312 20:47:55.971587 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 20:47:55.971718 master-0 kubenswrapper[4038]: I0312 20:47:55.971611 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 20:47:55.981911 master-0 kubenswrapper[4038]: E0312 20:47:55.981586 4038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c330b155e408a kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 
20:47:55.973288074 +0000 UTC m=+14.008969927,LastTimestamp:2026-03-12 20:47:55.973288074 +0000 UTC m=+14.008969927,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:56.266865 master-0 kubenswrapper[4038]: E0312 20:47:56.266577 4038 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.189c330a0f6f1212\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c330a0f6f1212 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:51.578759698 +0000 UTC m=+9.614441561,LastTimestamp:2026-03-12 20:47:56.258524572 +0000 UTC m=+14.294206475,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:56.285066 master-0 kubenswrapper[4038]: E0312 20:47:56.284852 4038 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.189c330a1044f710\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c330a1044f710 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:51.592777488 +0000 UTC m=+9.628459351,LastTimestamp:2026-03-12 20:47:56.275636417 +0000 UTC m=+14.311318310,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:47:56.737201 master-0 kubenswrapper[4038]: I0312 20:47:56.737137 4038 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 20:47:56.977645 master-0 kubenswrapper[4038]: I0312 20:47:56.977548 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 20:47:56.978741 master-0 kubenswrapper[4038]: I0312 20:47:56.978413 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 20:47:56.979029 master-0 kubenswrapper[4038]: I0312 20:47:56.978942 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"803468c92847be9ff6518c968fe3dd17c7c93344e029130c1bfa6744bc5862bf"} Mar 12 20:47:56.979497 master-0 kubenswrapper[4038]: I0312 20:47:56.979415 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 20:47:56.979497 master-0 kubenswrapper[4038]: I0312 20:47:56.979474 4038 kubelet_node_status.go:724] "Recording event message for node" 
node="master-0" event="NodeHasNoDiskPressure" Mar 12 20:47:56.979497 master-0 kubenswrapper[4038]: I0312 20:47:56.979497 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 20:47:56.981244 master-0 kubenswrapper[4038]: I0312 20:47:56.981158 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 20:47:56.981244 master-0 kubenswrapper[4038]: I0312 20:47:56.981205 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 20:47:56.981244 master-0 kubenswrapper[4038]: I0312 20:47:56.981225 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 20:47:57.638885 master-0 kubenswrapper[4038]: I0312 20:47:57.638738 4038 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 12 20:47:57.664714 master-0 kubenswrapper[4038]: I0312 20:47:57.664634 4038 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 12 20:47:57.737321 master-0 kubenswrapper[4038]: I0312 20:47:57.737225 4038 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 20:47:57.980955 master-0 kubenswrapper[4038]: I0312 20:47:57.980713 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 20:47:57.982206 master-0 kubenswrapper[4038]: I0312 20:47:57.982155 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 20:47:57.982356 master-0 kubenswrapper[4038]: I0312 20:47:57.982230 4038 kubelet_node_status.go:724] "Recording event 
message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 20:47:57.982356 master-0 kubenswrapper[4038]: I0312 20:47:57.982253 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 20:47:58.738085 master-0 kubenswrapper[4038]: I0312 20:47:58.738008 4038 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 20:47:59.165421 master-0 kubenswrapper[4038]: I0312 20:47:59.165303 4038 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:47:59.166354 master-0 kubenswrapper[4038]: I0312 20:47:59.165544 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 20:47:59.167686 master-0 kubenswrapper[4038]: I0312 20:47:59.167636 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 20:47:59.167686 master-0 kubenswrapper[4038]: I0312 20:47:59.167691 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 20:47:59.167939 master-0 kubenswrapper[4038]: I0312 20:47:59.167710 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 20:47:59.173401 master-0 kubenswrapper[4038]: I0312 20:47:59.173329 4038 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:47:59.220458 master-0 kubenswrapper[4038]: I0312 20:47:59.220315 4038 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 20:47:59.220718 master-0 
kubenswrapper[4038]: I0312 20:47:59.220578 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 20:47:59.222349 master-0 kubenswrapper[4038]: I0312 20:47:59.222305 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 20:47:59.222468 master-0 kubenswrapper[4038]: I0312 20:47:59.222365 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 20:47:59.222468 master-0 kubenswrapper[4038]: I0312 20:47:59.222384 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 20:47:59.227574 master-0 kubenswrapper[4038]: I0312 20:47:59.227520 4038 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 20:47:59.431986 master-0 kubenswrapper[4038]: I0312 20:47:59.431703 4038 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 20:47:59.440079 master-0 kubenswrapper[4038]: I0312 20:47:59.439994 4038 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 20:47:59.445014 master-0 kubenswrapper[4038]: W0312 20:47:59.444930 4038 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 12 20:47:59.445210 master-0 kubenswrapper[4038]: E0312 20:47:59.445029 4038 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" 
logger="UnhandledError" Mar 12 20:47:59.737584 master-0 kubenswrapper[4038]: I0312 20:47:59.737342 4038 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 20:47:59.985850 master-0 kubenswrapper[4038]: I0312 20:47:59.985762 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 20:47:59.986155 master-0 kubenswrapper[4038]: I0312 20:47:59.985891 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 20:47:59.986601 master-0 kubenswrapper[4038]: I0312 20:47:59.986211 4038 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:47:59.988391 master-0 kubenswrapper[4038]: I0312 20:47:59.988251 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 20:47:59.988391 master-0 kubenswrapper[4038]: I0312 20:47:59.988319 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 20:47:59.988391 master-0 kubenswrapper[4038]: I0312 20:47:59.988246 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 20:47:59.988644 master-0 kubenswrapper[4038]: I0312 20:47:59.988403 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 20:47:59.988644 master-0 kubenswrapper[4038]: I0312 20:47:59.988429 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 20:47:59.988772 master-0 kubenswrapper[4038]: I0312 20:47:59.988339 4038 kubelet_node_status.go:724] "Recording event message for node" 
node="master-0" event="NodeHasSufficientPID" Mar 12 20:48:00.737995 master-0 kubenswrapper[4038]: I0312 20:48:00.737921 4038 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 20:48:00.988697 master-0 kubenswrapper[4038]: I0312 20:48:00.988531 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 20:48:00.989507 master-0 kubenswrapper[4038]: I0312 20:48:00.989315 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 20:48:00.989728 master-0 kubenswrapper[4038]: I0312 20:48:00.989689 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 20:48:00.990157 master-0 kubenswrapper[4038]: I0312 20:48:00.989762 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 20:48:00.990157 master-0 kubenswrapper[4038]: I0312 20:48:00.989799 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 20:48:00.990799 master-0 kubenswrapper[4038]: I0312 20:48:00.990743 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 20:48:00.990898 master-0 kubenswrapper[4038]: I0312 20:48:00.990870 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 20:48:00.990941 master-0 kubenswrapper[4038]: I0312 20:48:00.990919 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 20:48:01.738104 master-0 kubenswrapper[4038]: I0312 20:48:01.737888 4038 csi_plugin.go:884] Failed to contact API server when waiting for 
CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 20:48:02.049143 master-0 kubenswrapper[4038]: I0312 20:48:02.049049 4038 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:48:02.049479 master-0 kubenswrapper[4038]: I0312 20:48:02.049306 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 20:48:02.050523 master-0 kubenswrapper[4038]: I0312 20:48:02.050486 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 20:48:02.050523 master-0 kubenswrapper[4038]: I0312 20:48:02.050522 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 20:48:02.050684 master-0 kubenswrapper[4038]: I0312 20:48:02.050533 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 20:48:02.054851 master-0 kubenswrapper[4038]: I0312 20:48:02.054777 4038 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:48:02.366399 master-0 kubenswrapper[4038]: E0312 20:48:02.366285 4038 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 12 20:48:02.409869 master-0 kubenswrapper[4038]: I0312 20:48:02.409757 4038 csr.go:261] certificate signing request csr-bth2w is approved, waiting to be issued Mar 12 20:48:02.686603 master-0 kubenswrapper[4038]: I0312 20:48:02.686379 4038 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Mar 12 20:48:02.688343 master-0 kubenswrapper[4038]: I0312 20:48:02.688297 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 20:48:02.688468 master-0 kubenswrapper[4038]: I0312 20:48:02.688358 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 20:48:02.688468 master-0 kubenswrapper[4038]: I0312 20:48:02.688375 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 20:48:02.688468 master-0 kubenswrapper[4038]: I0312 20:48:02.688458 4038 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 12 20:48:02.696489 master-0 kubenswrapper[4038]: E0312 20:48:02.696408 4038 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Mar 12 20:48:02.740911 master-0 kubenswrapper[4038]: I0312 20:48:02.739278 4038 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 20:48:02.926293 master-0 kubenswrapper[4038]: E0312 20:48:02.926176 4038 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 12 20:48:03.014593 master-0 kubenswrapper[4038]: I0312 20:48:03.014352 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 20:48:03.014593 master-0 kubenswrapper[4038]: I0312 20:48:03.014470 4038 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:48:03.016031 master-0 
kubenswrapper[4038]: I0312 20:48:03.015963 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 20:48:03.016147 master-0 kubenswrapper[4038]: I0312 20:48:03.016035 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 20:48:03.016147 master-0 kubenswrapper[4038]: I0312 20:48:03.016058 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 20:48:03.202304 master-0 kubenswrapper[4038]: W0312 20:48:03.202212 4038 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 12 20:48:03.202304 master-0 kubenswrapper[4038]: E0312 20:48:03.202290 4038 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 12 20:48:03.737309 master-0 kubenswrapper[4038]: I0312 20:48:03.737228 4038 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 20:48:03.760926 master-0 kubenswrapper[4038]: W0312 20:48:03.760782 4038 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Mar 12 20:48:03.760926 master-0 kubenswrapper[4038]: E0312 20:48:03.760907 4038 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 12 20:48:03.810712 master-0 kubenswrapper[4038]: W0312 20:48:03.810605 4038 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 12 20:48:03.810712 master-0 kubenswrapper[4038]: E0312 20:48:03.810702 4038 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 12 20:48:03.879971 master-0 kubenswrapper[4038]: I0312 20:48:03.879867 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 20:48:03.881714 master-0 kubenswrapper[4038]: I0312 20:48:03.881643 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 20:48:03.881844 master-0 kubenswrapper[4038]: I0312 20:48:03.881730 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 20:48:03.881844 master-0 kubenswrapper[4038]: I0312 20:48:03.881754 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 20:48:03.882438 master-0 kubenswrapper[4038]: I0312 20:48:03.882385 4038 scope.go:117] "RemoveContainer" containerID="b081787d6b97efdb0eb46c37f91de3fd3cfa1b4ba222c6e1d991e763352c76bb" Mar 12 20:48:03.898296 master-0 kubenswrapper[4038]: E0312 20:48:03.898077 4038 
event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c3308f8bc6fb9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c3308f8bc6fb9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:46.902986681 +0000 UTC m=+4.938668544,LastTimestamp:2026-03-12 20:48:03.886525914 +0000 UTC m=+21.922207817,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:48:04.017065 master-0 kubenswrapper[4038]: I0312 20:48:04.016840 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 20:48:04.017928 master-0 kubenswrapper[4038]: I0312 20:48:04.017893 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 20:48:04.017928 master-0 kubenswrapper[4038]: I0312 20:48:04.017925 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 20:48:04.018046 master-0 kubenswrapper[4038]: I0312 20:48:04.017936 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 20:48:04.155078 master-0 kubenswrapper[4038]: E0312 
20:48:04.154907 4038 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c3309026f4deb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c3309026f4deb openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:47.065703915 +0000 UTC m=+5.101385778,LastTimestamp:2026-03-12 20:48:04.149747248 +0000 UTC m=+22.185429111,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:48:04.173365 master-0 kubenswrapper[4038]: E0312 20:48:04.173178 4038 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c330903d287d0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c330903d287d0 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:47.088984016 +0000 
UTC m=+5.124665879,LastTimestamp:2026-03-12 20:48:04.16506388 +0000 UTC m=+22.200745783,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:48:04.735521 master-0 kubenswrapper[4038]: I0312 20:48:04.735446 4038 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 20:48:05.022901 master-0 kubenswrapper[4038]: I0312 20:48:05.022687 4038 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log" Mar 12 20:48:05.023990 master-0 kubenswrapper[4038]: I0312 20:48:05.023418 4038 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/1.log" Mar 12 20:48:05.024322 master-0 kubenswrapper[4038]: I0312 20:48:05.024235 4038 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="faa71480f217fad716866bc98bd8270b2f07bd2a29f5aa069d90b575671a024e" exitCode=1 Mar 12 20:48:05.024400 master-0 kubenswrapper[4038]: I0312 20:48:05.024313 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"faa71480f217fad716866bc98bd8270b2f07bd2a29f5aa069d90b575671a024e"} Mar 12 20:48:05.024400 master-0 kubenswrapper[4038]: I0312 20:48:05.024374 4038 scope.go:117] "RemoveContainer" containerID="b081787d6b97efdb0eb46c37f91de3fd3cfa1b4ba222c6e1d991e763352c76bb" Mar 12 20:48:05.024570 master-0 kubenswrapper[4038]: I0312 20:48:05.024522 4038 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Mar 12 20:48:05.025536 master-0 kubenswrapper[4038]: I0312 20:48:05.025479 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 20:48:05.025536 master-0 kubenswrapper[4038]: I0312 20:48:05.025540 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 20:48:05.025731 master-0 kubenswrapper[4038]: I0312 20:48:05.025558 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 20:48:05.026761 master-0 kubenswrapper[4038]: I0312 20:48:05.026189 4038 scope.go:117] "RemoveContainer" containerID="faa71480f217fad716866bc98bd8270b2f07bd2a29f5aa069d90b575671a024e" Mar 12 20:48:05.026761 master-0 kubenswrapper[4038]: E0312 20:48:05.026441 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72" Mar 12 20:48:05.035722 master-0 kubenswrapper[4038]: E0312 20:48:05.035521 4038 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c330970c1f59e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c330970c1f59e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:47:48.916614558 +0000 UTC m=+6.952296431,LastTimestamp:2026-03-12 20:48:05.02638788 +0000 UTC m=+23.062069783,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:48:05.737470 master-0 kubenswrapper[4038]: I0312 20:48:05.737376 4038 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 20:48:06.029870 master-0 kubenswrapper[4038]: I0312 20:48:06.029680 4038 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log" Mar 12 20:48:06.738575 master-0 kubenswrapper[4038]: I0312 20:48:06.738490 4038 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 20:48:07.735989 master-0 kubenswrapper[4038]: I0312 20:48:07.735796 4038 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 
20:48:08.738504 master-0 kubenswrapper[4038]: I0312 20:48:08.738357 4038 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 20:48:09.375414 master-0 kubenswrapper[4038]: E0312 20:48:09.375293 4038 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 12 20:48:09.697857 master-0 kubenswrapper[4038]: I0312 20:48:09.697592 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 20:48:09.699473 master-0 kubenswrapper[4038]: I0312 20:48:09.699389 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 20:48:09.699626 master-0 kubenswrapper[4038]: I0312 20:48:09.699485 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 20:48:09.699626 master-0 kubenswrapper[4038]: I0312 20:48:09.699518 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 20:48:09.699626 master-0 kubenswrapper[4038]: I0312 20:48:09.699613 4038 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 12 20:48:09.708449 master-0 kubenswrapper[4038]: E0312 20:48:09.708381 4038 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Mar 12 20:48:09.737200 master-0 kubenswrapper[4038]: I0312 20:48:09.737110 4038 csi_plugin.go:884] Failed to contact API server when waiting for CSINode 
publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 20:48:10.738672 master-0 kubenswrapper[4038]: I0312 20:48:10.738537 4038 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 20:48:11.736707 master-0 kubenswrapper[4038]: I0312 20:48:11.736637 4038 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 20:48:12.735658 master-0 kubenswrapper[4038]: I0312 20:48:12.735566 4038 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 20:48:12.927353 master-0 kubenswrapper[4038]: E0312 20:48:12.927114 4038 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 12 20:48:13.147081 master-0 kubenswrapper[4038]: I0312 20:48:13.146964 4038 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:48:13.147415 master-0 kubenswrapper[4038]: I0312 20:48:13.147314 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 20:48:13.151603 master-0 kubenswrapper[4038]: I0312 20:48:13.151543 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 20:48:13.151847 master-0 kubenswrapper[4038]: I0312 20:48:13.151770 4038 
kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 20:48:13.152036 master-0 kubenswrapper[4038]: I0312 20:48:13.152012 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 20:48:13.154476 master-0 kubenswrapper[4038]: I0312 20:48:13.154405 4038 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:48:13.636843 master-0 kubenswrapper[4038]: I0312 20:48:13.636742 4038 csr.go:257] certificate signing request csr-bth2w is issued Mar 12 20:48:13.741896 master-0 kubenswrapper[4038]: I0312 20:48:13.741830 4038 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 12 20:48:13.758087 master-0 kubenswrapper[4038]: I0312 20:48:13.758031 4038 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 12 20:48:13.822526 master-0 kubenswrapper[4038]: I0312 20:48:13.822444 4038 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 12 20:48:14.053729 master-0 kubenswrapper[4038]: I0312 20:48:14.053615 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 20:48:14.054358 master-0 kubenswrapper[4038]: I0312 20:48:14.054336 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 20:48:14.054422 master-0 kubenswrapper[4038]: I0312 20:48:14.054369 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 20:48:14.054422 master-0 kubenswrapper[4038]: I0312 20:48:14.054378 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 20:48:14.098007 master-0 kubenswrapper[4038]: I0312 20:48:14.097944 4038 nodeinfomanager.go:401] Failed to publish CSINode: 
nodes "master-0" not found Mar 12 20:48:14.098007 master-0 kubenswrapper[4038]: E0312 20:48:14.097994 4038 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Mar 12 20:48:14.128944 master-0 kubenswrapper[4038]: I0312 20:48:14.128801 4038 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 12 20:48:14.152465 master-0 kubenswrapper[4038]: I0312 20:48:14.152411 4038 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 12 20:48:14.210987 master-0 kubenswrapper[4038]: I0312 20:48:14.210913 4038 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 12 20:48:14.493494 master-0 kubenswrapper[4038]: I0312 20:48:14.493432 4038 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 12 20:48:14.493494 master-0 kubenswrapper[4038]: E0312 20:48:14.493477 4038 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Mar 12 20:48:14.597640 master-0 kubenswrapper[4038]: I0312 20:48:14.597574 4038 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 12 20:48:14.616328 master-0 kubenswrapper[4038]: I0312 20:48:14.616078 4038 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 12 20:48:14.616328 master-0 kubenswrapper[4038]: I0312 20:48:14.616092 4038 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 12 20:48:14.638580 master-0 kubenswrapper[4038]: I0312 20:48:14.638501 4038 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-13 20:40:02 +0000 UTC, rotation deadline is 2026-03-13 13:56:05.662351473 +0000 UTC Mar 12 20:48:14.638580 master-0 kubenswrapper[4038]: I0312 20:48:14.638559 4038 
certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 17h7m51.023796585s for next certificate rotation Mar 12 20:48:14.680289 master-0 kubenswrapper[4038]: I0312 20:48:14.680224 4038 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 12 20:48:14.950484 master-0 kubenswrapper[4038]: I0312 20:48:14.950408 4038 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 12 20:48:14.950484 master-0 kubenswrapper[4038]: E0312 20:48:14.950450 4038 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Mar 12 20:48:15.535510 master-0 kubenswrapper[4038]: I0312 20:48:15.535414 4038 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 12 20:48:15.551634 master-0 kubenswrapper[4038]: I0312 20:48:15.551557 4038 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 12 20:48:15.610698 master-0 kubenswrapper[4038]: I0312 20:48:15.610607 4038 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 12 20:48:15.872054 master-0 kubenswrapper[4038]: I0312 20:48:15.871994 4038 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 12 20:48:15.872054 master-0 kubenswrapper[4038]: E0312 20:48:15.872039 4038 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Mar 12 20:48:16.385506 master-0 kubenswrapper[4038]: E0312 20:48:16.385374 4038 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"master-0\" not found" node="master-0" Mar 12 20:48:16.709716 master-0 kubenswrapper[4038]: I0312 20:48:16.709509 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 20:48:16.711791 master-0 
kubenswrapper[4038]: I0312 20:48:16.711204 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 20:48:16.711791 master-0 kubenswrapper[4038]: I0312 20:48:16.711275 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 20:48:16.711791 master-0 kubenswrapper[4038]: I0312 20:48:16.711294 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 20:48:16.711791 master-0 kubenswrapper[4038]: I0312 20:48:16.711372 4038 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 12 20:48:16.723377 master-0 kubenswrapper[4038]: I0312 20:48:16.723319 4038 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Mar 12 20:48:16.723377 master-0 kubenswrapper[4038]: E0312 20:48:16.723374 4038 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found" Mar 12 20:48:16.739617 master-0 kubenswrapper[4038]: E0312 20:48:16.739503 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 20:48:16.755735 master-0 kubenswrapper[4038]: I0312 20:48:16.755625 4038 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Mar 12 20:48:16.770434 master-0 kubenswrapper[4038]: I0312 20:48:16.770332 4038 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 12 20:48:16.839753 master-0 kubenswrapper[4038]: E0312 20:48:16.839662 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 20:48:16.940321 master-0 kubenswrapper[4038]: E0312 20:48:16.940227 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 
20:48:17.041395 master-0 kubenswrapper[4038]: E0312 20:48:17.041195 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 20:48:17.141662 master-0 kubenswrapper[4038]: E0312 20:48:17.141567 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 20:48:17.242877 master-0 kubenswrapper[4038]: E0312 20:48:17.242718 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 20:48:17.343454 master-0 kubenswrapper[4038]: E0312 20:48:17.343336 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 20:48:17.444248 master-0 kubenswrapper[4038]: E0312 20:48:17.444098 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 20:48:17.545107 master-0 kubenswrapper[4038]: E0312 20:48:17.544979 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 20:48:17.645475 master-0 kubenswrapper[4038]: E0312 20:48:17.645282 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 20:48:17.746150 master-0 kubenswrapper[4038]: E0312 20:48:17.746050 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 20:48:17.846302 master-0 kubenswrapper[4038]: E0312 20:48:17.846190 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 20:48:17.879660 master-0 kubenswrapper[4038]: I0312 20:48:17.879575 4038 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 20:48:17.881407 master-0 kubenswrapper[4038]: I0312 20:48:17.881332 4038 kubelet_node_status.go:724] "Recording event message for node" 
node="master-0" event="NodeHasSufficientMemory" Mar 12 20:48:17.881590 master-0 kubenswrapper[4038]: I0312 20:48:17.881537 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 20:48:17.881641 master-0 kubenswrapper[4038]: I0312 20:48:17.881596 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 20:48:17.882326 master-0 kubenswrapper[4038]: I0312 20:48:17.882287 4038 scope.go:117] "RemoveContainer" containerID="faa71480f217fad716866bc98bd8270b2f07bd2a29f5aa069d90b575671a024e" Mar 12 20:48:17.882576 master-0 kubenswrapper[4038]: E0312 20:48:17.882533 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72" Mar 12 20:48:17.947265 master-0 kubenswrapper[4038]: E0312 20:48:17.947042 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 20:48:18.048299 master-0 kubenswrapper[4038]: E0312 20:48:18.048168 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 20:48:18.148746 master-0 kubenswrapper[4038]: E0312 20:48:18.148596 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 20:48:18.249868 master-0 kubenswrapper[4038]: E0312 20:48:18.249637 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 20:48:18.350453 master-0 kubenswrapper[4038]: E0312 20:48:18.350357 4038 kubelet_node_status.go:503] "Error getting the current 
node from lister" err="node \"master-0\" not found" Mar 12 20:48:18.451572 master-0 kubenswrapper[4038]: E0312 20:48:18.451448 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 20:48:18.552507 master-0 kubenswrapper[4038]: E0312 20:48:18.552293 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 20:48:18.653063 master-0 kubenswrapper[4038]: E0312 20:48:18.652945 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 20:48:18.754255 master-0 kubenswrapper[4038]: E0312 20:48:18.754154 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 20:48:18.855302 master-0 kubenswrapper[4038]: E0312 20:48:18.855198 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 20:48:18.956537 master-0 kubenswrapper[4038]: E0312 20:48:18.956422 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 20:48:19.057782 master-0 kubenswrapper[4038]: E0312 20:48:19.057643 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 20:48:19.158912 master-0 kubenswrapper[4038]: E0312 20:48:19.158697 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 20:48:19.259701 master-0 kubenswrapper[4038]: E0312 20:48:19.259578 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 20:48:19.360068 master-0 kubenswrapper[4038]: E0312 20:48:19.359968 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 20:48:19.460655 master-0 kubenswrapper[4038]: E0312 20:48:19.460497 
4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:19.561587 master-0 kubenswrapper[4038]: E0312 20:48:19.561513 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:19.661966 master-0 kubenswrapper[4038]: E0312 20:48:19.661791 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:19.762273 master-0 kubenswrapper[4038]: E0312 20:48:19.762069 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:19.863195 master-0 kubenswrapper[4038]: E0312 20:48:19.863090 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:19.963606 master-0 kubenswrapper[4038]: E0312 20:48:19.963512 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:20.064782 master-0 kubenswrapper[4038]: E0312 20:48:20.064497 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:20.165589 master-0 kubenswrapper[4038]: E0312 20:48:20.165467 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:20.266560 master-0 kubenswrapper[4038]: E0312 20:48:20.266469 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:20.367779 master-0 kubenswrapper[4038]: E0312 20:48:20.367600 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:20.468600 master-0 kubenswrapper[4038]: E0312 20:48:20.468521 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:20.569586 master-0 kubenswrapper[4038]: E0312 20:48:20.569458 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:20.669870 master-0 kubenswrapper[4038]: E0312 20:48:20.669620 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:20.770125 master-0 kubenswrapper[4038]: E0312 20:48:20.770014 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:20.871360 master-0 kubenswrapper[4038]: E0312 20:48:20.871204 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:20.971791 master-0 kubenswrapper[4038]: E0312 20:48:20.971551 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:21.071790 master-0 kubenswrapper[4038]: E0312 20:48:21.071703 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:21.172025 master-0 kubenswrapper[4038]: E0312 20:48:21.171895 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:21.272626 master-0 kubenswrapper[4038]: E0312 20:48:21.272422 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:21.373360 master-0 kubenswrapper[4038]: E0312 20:48:21.373282 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:21.474052 master-0 kubenswrapper[4038]: E0312 20:48:21.473950 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:21.574843 master-0 kubenswrapper[4038]: E0312 20:48:21.574738 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:21.675518 master-0 kubenswrapper[4038]: E0312 20:48:21.675410 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:21.776669 master-0 kubenswrapper[4038]: E0312 20:48:21.776548 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:21.877611 master-0 kubenswrapper[4038]: E0312 20:48:21.877432 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:21.978243 master-0 kubenswrapper[4038]: E0312 20:48:21.978151 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:22.078701 master-0 kubenswrapper[4038]: E0312 20:48:22.078576 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:22.179842 master-0 kubenswrapper[4038]: E0312 20:48:22.179662 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:22.280720 master-0 kubenswrapper[4038]: E0312 20:48:22.280629 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:22.381415 master-0 kubenswrapper[4038]: E0312 20:48:22.381334 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:22.482426 master-0 kubenswrapper[4038]: E0312 20:48:22.482205 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:22.583317 master-0 kubenswrapper[4038]: E0312 20:48:22.583193 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:22.684218 master-0 kubenswrapper[4038]: E0312 20:48:22.684106 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:22.708039 master-0 kubenswrapper[4038]: I0312 20:48:22.707941 4038 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 12 20:48:22.784992 master-0 kubenswrapper[4038]: E0312 20:48:22.784785 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:22.886273 master-0 kubenswrapper[4038]: E0312 20:48:22.886171 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:22.927456 master-0 kubenswrapper[4038]: E0312 20:48:22.927386 4038 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 12 20:48:22.986638 master-0 kubenswrapper[4038]: E0312 20:48:22.986564 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:23.087733 master-0 kubenswrapper[4038]: E0312 20:48:23.087650 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:23.103619 master-0 kubenswrapper[4038]: I0312 20:48:23.103574 4038 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 12 20:48:23.188387 master-0 kubenswrapper[4038]: E0312 20:48:23.188311 4038 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 12 20:48:23.217686 master-0 kubenswrapper[4038]: I0312 20:48:23.217610 4038 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 12 20:48:23.310651 master-0 kubenswrapper[4038]: I0312 20:48:23.310589 4038 csr.go:261] certificate signing request csr-dnmtn is approved, waiting to be issued
Mar 12 20:48:23.320537 master-0 kubenswrapper[4038]: I0312 20:48:23.320295 4038 csr.go:257] certificate signing request csr-dnmtn is issued
Mar 12 20:48:23.736571 master-0 kubenswrapper[4038]: I0312 20:48:23.736234 4038 apiserver.go:52] "Watching apiserver"
Mar 12 20:48:23.743668 master-0 kubenswrapper[4038]: I0312 20:48:23.743612 4038 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 12 20:48:23.744053 master-0 kubenswrapper[4038]: I0312 20:48:23.743974 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/assisted-installer-controller-jffs8","openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl","openshift-network-operator/network-operator-7c649bf6d4-62t2f"]
Mar 12 20:48:23.744346 master-0 kubenswrapper[4038]: I0312 20:48:23.744311 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7c649bf6d4-62t2f"
Mar 12 20:48:23.744498 master-0 kubenswrapper[4038]: I0312 20:48:23.744447 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-jffs8"
Mar 12 20:48:23.745876 master-0 kubenswrapper[4038]: I0312 20:48:23.744836 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl"
Mar 12 20:48:23.751575 master-0 kubenswrapper[4038]: I0312 20:48:23.748037 4038 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 12 20:48:23.752115 master-0 kubenswrapper[4038]: I0312 20:48:23.752058 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config"
Mar 12 20:48:23.752174 master-0 kubenswrapper[4038]: I0312 20:48:23.752082 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt"
Mar 12 20:48:23.752605 master-0 kubenswrapper[4038]: I0312 20:48:23.752550 4038 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 12 20:48:23.752819 master-0 kubenswrapper[4038]: I0312 20:48:23.752771 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 12 20:48:23.753168 master-0 kubenswrapper[4038]: I0312 20:48:23.753130 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 12 20:48:23.754743 master-0 kubenswrapper[4038]: I0312 20:48:23.754667 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt"
Mar 12 20:48:23.756311 master-0 kubenswrapper[4038]: I0312 20:48:23.756266 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 12 20:48:23.756383 master-0 kubenswrapper[4038]: I0312 20:48:23.756335 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 12 20:48:23.756580 master-0 kubenswrapper[4038]: I0312 20:48:23.756442 4038 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret"
Mar 12 20:48:23.837043 master-0 kubenswrapper[4038]: I0312 20:48:23.836945 4038 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 12 20:48:23.852080 master-0 kubenswrapper[4038]: I0312 20:48:23.852010 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/d87b7a20-047e-4521-996c-9b11d81e9bd0-host-ca-bundle\") pod \"assisted-installer-controller-jffs8\" (UID: \"d87b7a20-047e-4521-996c-9b11d81e9bd0\") " pod="assisted-installer/assisted-installer-controller-jffs8"
Mar 12 20:48:23.852080 master-0 kubenswrapper[4038]: I0312 20:48:23.852077 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6-host-etc-kube\") pod \"network-operator-7c649bf6d4-62t2f\" (UID: \"fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6\") " pod="openshift-network-operator/network-operator-7c649bf6d4-62t2f"
Mar 12 20:48:23.852427 master-0 kubenswrapper[4038]: I0312 20:48:23.852109 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/d87b7a20-047e-4521-996c-9b11d81e9bd0-sno-bootstrap-files\") pod \"assisted-installer-controller-jffs8\" (UID: \"d87b7a20-047e-4521-996c-9b11d81e9bd0\") " pod="assisted-installer/assisted-installer-controller-jffs8"
Mar 12 20:48:23.852427 master-0 kubenswrapper[4038]: I0312 20:48:23.852137 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a307172-f010-4bad-a3fc-31607574b069-kube-api-access\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl"
Mar 12 20:48:23.852427 master-0 kubenswrapper[4038]: I0312 20:48:23.852158 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6-metrics-tls\") pod \"network-operator-7c649bf6d4-62t2f\" (UID: \"fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6\") " pod="openshift-network-operator/network-operator-7c649bf6d4-62t2f"
Mar 12 20:48:23.852427 master-0 kubenswrapper[4038]: I0312 20:48:23.852182 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/d87b7a20-047e-4521-996c-9b11d81e9bd0-host-resolv-conf\") pod \"assisted-installer-controller-jffs8\" (UID: \"d87b7a20-047e-4521-996c-9b11d81e9bd0\") " pod="assisted-installer/assisted-installer-controller-jffs8"
Mar 12 20:48:23.852427 master-0 kubenswrapper[4038]: I0312 20:48:23.852202 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1a307172-f010-4bad-a3fc-31607574b069-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl"
Mar 12 20:48:23.852427 master-0 kubenswrapper[4038]: I0312 20:48:23.852227 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1a307172-f010-4bad-a3fc-31607574b069-service-ca\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl"
Mar 12 20:48:23.853060 master-0 kubenswrapper[4038]: I0312 20:48:23.853001 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kng9\" (UniqueName: \"kubernetes.io/projected/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6-kube-api-access-2kng9\") pod \"network-operator-7c649bf6d4-62t2f\" (UID: \"fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6\") " pod="openshift-network-operator/network-operator-7c649bf6d4-62t2f"
Mar 12 20:48:23.853283 master-0 kubenswrapper[4038]: I0312 20:48:23.853249 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqlqk\" (UniqueName: \"kubernetes.io/projected/d87b7a20-047e-4521-996c-9b11d81e9bd0-kube-api-access-sqlqk\") pod \"assisted-installer-controller-jffs8\" (UID: \"d87b7a20-047e-4521-996c-9b11d81e9bd0\") " pod="assisted-installer/assisted-installer-controller-jffs8"
Mar 12 20:48:23.853438 master-0 kubenswrapper[4038]: I0312 20:48:23.853414 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl"
Mar 12 20:48:23.853583 master-0 kubenswrapper[4038]: I0312 20:48:23.853559 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/d87b7a20-047e-4521-996c-9b11d81e9bd0-host-var-run-resolv-conf\") pod \"assisted-installer-controller-jffs8\" (UID: \"d87b7a20-047e-4521-996c-9b11d81e9bd0\") " pod="assisted-installer/assisted-installer-controller-jffs8"
Mar 12 20:48:23.853737 master-0 kubenswrapper[4038]: I0312 20:48:23.853713 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1a307172-f010-4bad-a3fc-31607574b069-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl"
Mar 12 20:48:23.954199 master-0 kubenswrapper[4038]: I0312 20:48:23.954106 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/d87b7a20-047e-4521-996c-9b11d81e9bd0-host-resolv-conf\") pod \"assisted-installer-controller-jffs8\" (UID: \"d87b7a20-047e-4521-996c-9b11d81e9bd0\") " pod="assisted-installer/assisted-installer-controller-jffs8"
Mar 12 20:48:23.954199 master-0 kubenswrapper[4038]: I0312 20:48:23.954180 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1a307172-f010-4bad-a3fc-31607574b069-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl"
Mar 12 20:48:23.954199 master-0 kubenswrapper[4038]: I0312 20:48:23.954217 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1a307172-f010-4bad-a3fc-31607574b069-service-ca\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl"
Mar 12 20:48:23.954668 master-0 kubenswrapper[4038]: I0312 20:48:23.954249 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kng9\" (UniqueName: \"kubernetes.io/projected/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6-kube-api-access-2kng9\") pod \"network-operator-7c649bf6d4-62t2f\" (UID: \"fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6\") " pod="openshift-network-operator/network-operator-7c649bf6d4-62t2f"
Mar 12 20:48:23.954959 master-0 kubenswrapper[4038]: I0312 20:48:23.954772 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1a307172-f010-4bad-a3fc-31607574b069-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl"
Mar 12 20:48:23.955147 master-0 kubenswrapper[4038]: I0312 20:48:23.955041 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqlqk\" (UniqueName: \"kubernetes.io/projected/d87b7a20-047e-4521-996c-9b11d81e9bd0-kube-api-access-sqlqk\") pod \"assisted-installer-controller-jffs8\" (UID: \"d87b7a20-047e-4521-996c-9b11d81e9bd0\") " pod="assisted-installer/assisted-installer-controller-jffs8"
Mar 12 20:48:23.955147 master-0 kubenswrapper[4038]: I0312 20:48:23.955095 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/d87b7a20-047e-4521-996c-9b11d81e9bd0-host-resolv-conf\") pod \"assisted-installer-controller-jffs8\" (UID: \"d87b7a20-047e-4521-996c-9b11d81e9bd0\") " pod="assisted-installer/assisted-installer-controller-jffs8"
Mar 12 20:48:23.955147 master-0 kubenswrapper[4038]: I0312 20:48:23.955104 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl"
Mar 12 20:48:23.955345 master-0 kubenswrapper[4038]: I0312 20:48:23.955194 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/d87b7a20-047e-4521-996c-9b11d81e9bd0-host-var-run-resolv-conf\") pod \"assisted-installer-controller-jffs8\" (UID: \"d87b7a20-047e-4521-996c-9b11d81e9bd0\") " pod="assisted-installer/assisted-installer-controller-jffs8"
Mar 12 20:48:23.955345 master-0 kubenswrapper[4038]: I0312 20:48:23.955238 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1a307172-f010-4bad-a3fc-31607574b069-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl"
Mar 12 20:48:23.955345 master-0 kubenswrapper[4038]: I0312 20:48:23.955270 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/d87b7a20-047e-4521-996c-9b11d81e9bd0-host-ca-bundle\") pod \"assisted-installer-controller-jffs8\" (UID: \"d87b7a20-047e-4521-996c-9b11d81e9bd0\") " pod="assisted-installer/assisted-installer-controller-jffs8"
Mar 12 20:48:23.955345 master-0 kubenswrapper[4038]: I0312 20:48:23.955297 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6-host-etc-kube\") pod \"network-operator-7c649bf6d4-62t2f\" (UID: \"fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6\") " pod="openshift-network-operator/network-operator-7c649bf6d4-62t2f"
Mar 12 20:48:23.955345 master-0 kubenswrapper[4038]: E0312 20:48:23.955298 4038 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 12 20:48:23.955345 master-0 kubenswrapper[4038]: I0312 20:48:23.955335 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/d87b7a20-047e-4521-996c-9b11d81e9bd0-sno-bootstrap-files\") pod \"assisted-installer-controller-jffs8\" (UID: \"d87b7a20-047e-4521-996c-9b11d81e9bd0\") " pod="assisted-installer/assisted-installer-controller-jffs8"
Mar 12 20:48:23.955924 master-0 kubenswrapper[4038]: I0312 20:48:23.955367 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a307172-f010-4bad-a3fc-31607574b069-kube-api-access\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl"
Mar 12 20:48:23.955924 master-0 kubenswrapper[4038]: E0312 20:48:23.955541 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert podName:1a307172-f010-4bad-a3fc-31607574b069 nodeName:}" failed. No retries permitted until 2026-03-12 20:48:24.455376454 +0000 UTC m=+42.491058507 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert") pod "cluster-version-operator-745944c6b7-wddgl" (UID: "1a307172-f010-4bad-a3fc-31607574b069") : secret "cluster-version-operator-serving-cert" not found
Mar 12 20:48:23.955924 master-0 kubenswrapper[4038]: I0312 20:48:23.955582 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6-metrics-tls\") pod \"network-operator-7c649bf6d4-62t2f\" (UID: \"fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6\") " pod="openshift-network-operator/network-operator-7c649bf6d4-62t2f"
Mar 12 20:48:23.955924 master-0 kubenswrapper[4038]: I0312 20:48:23.955661 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/d87b7a20-047e-4521-996c-9b11d81e9bd0-host-ca-bundle\") pod \"assisted-installer-controller-jffs8\" (UID: \"d87b7a20-047e-4521-996c-9b11d81e9bd0\") " pod="assisted-installer/assisted-installer-controller-jffs8"
Mar 12 20:48:23.955924 master-0 kubenswrapper[4038]: I0312 20:48:23.955709 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6-host-etc-kube\") pod \"network-operator-7c649bf6d4-62t2f\" (UID: \"fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6\") " pod="openshift-network-operator/network-operator-7c649bf6d4-62t2f"
Mar 12 20:48:23.955924 master-0 kubenswrapper[4038]: I0312 20:48:23.955749 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/d87b7a20-047e-4521-996c-9b11d81e9bd0-host-var-run-resolv-conf\") pod \"assisted-installer-controller-jffs8\" (UID: \"d87b7a20-047e-4521-996c-9b11d81e9bd0\") " pod="assisted-installer/assisted-installer-controller-jffs8"
Mar 12 20:48:23.955924 master-0 kubenswrapper[4038]: I0312 20:48:23.955799 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1a307172-f010-4bad-a3fc-31607574b069-service-ca\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl"
Mar 12 20:48:23.956451 master-0 kubenswrapper[4038]: I0312 20:48:23.955979 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/d87b7a20-047e-4521-996c-9b11d81e9bd0-sno-bootstrap-files\") pod \"assisted-installer-controller-jffs8\" (UID: \"d87b7a20-047e-4521-996c-9b11d81e9bd0\") " pod="assisted-installer/assisted-installer-controller-jffs8"
Mar 12 20:48:23.956451 master-0 kubenswrapper[4038]: I0312 20:48:23.955874 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1a307172-f010-4bad-a3fc-31607574b069-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl"
Mar 12 20:48:23.957864 master-0 kubenswrapper[4038]: I0312 20:48:23.957577 4038 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Mar 12 20:48:23.966169 master-0 kubenswrapper[4038]: I0312 20:48:23.965607 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6-metrics-tls\") pod \"network-operator-7c649bf6d4-62t2f\" (UID: \"fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6\") " pod="openshift-network-operator/network-operator-7c649bf6d4-62t2f"
Mar 12 20:48:23.986324 master-0 kubenswrapper[4038]: I0312 20:48:23.986211 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqlqk\" (UniqueName: \"kubernetes.io/projected/d87b7a20-047e-4521-996c-9b11d81e9bd0-kube-api-access-sqlqk\") pod \"assisted-installer-controller-jffs8\" (UID: \"d87b7a20-047e-4521-996c-9b11d81e9bd0\") " pod="assisted-installer/assisted-installer-controller-jffs8"
Mar 12 20:48:23.997623 master-0 kubenswrapper[4038]: I0312 20:48:23.997473 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kng9\" (UniqueName: \"kubernetes.io/projected/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6-kube-api-access-2kng9\") pod \"network-operator-7c649bf6d4-62t2f\" (UID: \"fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6\") " pod="openshift-network-operator/network-operator-7c649bf6d4-62t2f"
Mar 12 20:48:24.002221 master-0 kubenswrapper[4038]: I0312 20:48:24.002100 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a307172-f010-4bad-a3fc-31607574b069-kube-api-access\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl"
Mar 12 20:48:24.070726 master-0 kubenswrapper[4038]: I0312 20:48:24.070625 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7c649bf6d4-62t2f"
Mar 12 20:48:24.100478 master-0 kubenswrapper[4038]: I0312 20:48:24.100420 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-jffs8"
Mar 12 20:48:24.116448 master-0 kubenswrapper[4038]: W0312 20:48:24.116381 4038 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd87b7a20_047e_4521_996c_9b11d81e9bd0.slice/crio-f50107dedd1c9152a5e5a3ba57f0fbbfdfa748f7e7733cd6fddf45dabf0eb60d WatchSource:0}: Error finding container f50107dedd1c9152a5e5a3ba57f0fbbfdfa748f7e7733cd6fddf45dabf0eb60d: Status 404 returned error can't find the container with id f50107dedd1c9152a5e5a3ba57f0fbbfdfa748f7e7733cd6fddf45dabf0eb60d
Mar 12 20:48:24.321915 master-0 kubenswrapper[4038]: I0312 20:48:24.321782 4038 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-13 20:40:02 +0000 UTC, rotation deadline is 2026-03-13 15:22:39.181645622 +0000 UTC
Mar 12 20:48:24.321915 master-0 kubenswrapper[4038]: I0312 20:48:24.321881 4038 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 18h34m14.8597721s for next certificate rotation
Mar 12 20:48:24.460448 master-0 kubenswrapper[4038]: I0312 20:48:24.460336 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl"
Mar 12 20:48:24.460862 master-0 kubenswrapper[4038]: E0312 20:48:24.460499 4038 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 12 20:48:24.460862 master-0 kubenswrapper[4038]: E0312 20:48:24.460564 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert podName:1a307172-f010-4bad-a3fc-31607574b069 nodeName:}" failed. No retries permitted until 2026-03-12 20:48:25.460547026 +0000 UTC m=+43.496228889 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert") pod "cluster-version-operator-745944c6b7-wddgl" (UID: "1a307172-f010-4bad-a3fc-31607574b069") : secret "cluster-version-operator-serving-cert" not found
Mar 12 20:48:25.083650 master-0 kubenswrapper[4038]: I0312 20:48:25.083572 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-jffs8" event={"ID":"d87b7a20-047e-4521-996c-9b11d81e9bd0","Type":"ContainerStarted","Data":"f50107dedd1c9152a5e5a3ba57f0fbbfdfa748f7e7733cd6fddf45dabf0eb60d"}
Mar 12 20:48:25.084794 master-0 kubenswrapper[4038]: I0312 20:48:25.084751 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-62t2f" event={"ID":"fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6","Type":"ContainerStarted","Data":"4f36004c9ae01a89eb15126614217e75dcc8e3c3bf6df3d63d91e6a8a9b96517"}
Mar 12 20:48:25.322975 master-0 kubenswrapper[4038]: I0312 20:48:25.322869 4038 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-13 20:40:02 +0000 UTC, rotation deadline is 2026-03-13 13:37:54.261567051 +0000 UTC
Mar 12 20:48:25.322975 master-0 kubenswrapper[4038]: I0312 20:48:25.322930 4038 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 16h49m28.938640669s for next certificate rotation
Mar 12 20:48:25.469369 master-0 kubenswrapper[4038]: I0312 20:48:25.469228 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl"
Mar 12 20:48:25.469601 master-0 kubenswrapper[4038]: E0312 20:48:25.469389 4038 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 12 20:48:25.469601 master-0 kubenswrapper[4038]: E0312 20:48:25.469455 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert podName:1a307172-f010-4bad-a3fc-31607574b069 nodeName:}" failed. No retries permitted until 2026-03-12 20:48:27.469436116 +0000 UTC m=+45.505117979 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert") pod "cluster-version-operator-745944c6b7-wddgl" (UID: "1a307172-f010-4bad-a3fc-31607574b069") : secret "cluster-version-operator-serving-cert" not found
Mar 12 20:48:25.559450 master-0 kubenswrapper[4038]: I0312 20:48:25.559409 4038 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 12 20:48:27.484384 master-0 kubenswrapper[4038]: I0312 20:48:27.484305 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl"
Mar 12 20:48:27.485420 master-0 kubenswrapper[4038]: E0312 20:48:27.484570 4038 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 12 20:48:27.485420 master-0 kubenswrapper[4038]: E0312 20:48:27.484722 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert podName:1a307172-f010-4bad-a3fc-31607574b069 nodeName:}" failed. No retries permitted until 2026-03-12 20:48:31.484687115 +0000 UTC m=+49.520369018 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert") pod "cluster-version-operator-745944c6b7-wddgl" (UID: "1a307172-f010-4bad-a3fc-31607574b069") : secret "cluster-version-operator-serving-cert" not found
Mar 12 20:48:29.097635 master-0 kubenswrapper[4038]: I0312 20:48:29.097537 4038 generic.go:334] "Generic (PLEG): container finished" podID="d87b7a20-047e-4521-996c-9b11d81e9bd0" containerID="2782822a08b1aa7b74a8813bdda6c24b76842bfecde841229b05dc04dcc388f3" exitCode=0
Mar 12 20:48:29.097635 master-0 kubenswrapper[4038]: I0312 20:48:29.097605 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-jffs8" event={"ID":"d87b7a20-047e-4521-996c-9b11d81e9bd0","Type":"ContainerDied","Data":"2782822a08b1aa7b74a8813bdda6c24b76842bfecde841229b05dc04dcc388f3"}
Mar 12 20:48:29.897729 master-0 kubenswrapper[4038]: I0312 20:48:29.897670 4038 scope.go:117] "RemoveContainer" containerID="faa71480f217fad716866bc98bd8270b2f07bd2a29f5aa069d90b575671a024e"
Mar 12 20:48:29.899289 master-0 kubenswrapper[4038]: I0312 20:48:29.899208 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"]
Mar 12 20:48:30.103145 master-0 kubenswrapper[4038]: I0312 20:48:30.102946 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-62t2f" event={"ID":"fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6","Type":"ContainerStarted","Data":"d9fa8a123cfb8c14404c75a08b2365da17bc3d4b0cf2e193ac612689b8a4fc37"}
Mar 12 20:48:30.127779 master-0 kubenswrapper[4038]: I0312 20:48:30.127401 4038 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/network-operator-7c649bf6d4-62t2f" podStartSLOduration=7.417704262 podStartE2EDuration="12.127374273s" podCreationTimestamp="2026-03-12 20:48:18 +0000 UTC" firstStartedPulling="2026-03-12 20:48:24.094551729 +0000 UTC m=+42.130233602" lastFinishedPulling="2026-03-12 20:48:28.80422172 +0000 UTC m=+46.839903613" observedRunningTime="2026-03-12 20:48:30.12728346 +0000 UTC m=+48.162965333" watchObservedRunningTime="2026-03-12 20:48:30.127374273 +0000 UTC m=+48.163056146"
Mar 12 20:48:30.155494 master-0 kubenswrapper[4038]: I0312 20:48:30.155444 4038 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-jffs8"
Mar 12 20:48:30.204529 master-0 kubenswrapper[4038]: I0312 20:48:30.204452 4038 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqlqk\" (UniqueName: \"kubernetes.io/projected/d87b7a20-047e-4521-996c-9b11d81e9bd0-kube-api-access-sqlqk\") pod \"d87b7a20-047e-4521-996c-9b11d81e9bd0\" (UID: \"d87b7a20-047e-4521-996c-9b11d81e9bd0\") "
Mar 12 20:48:30.204529 master-0 kubenswrapper[4038]: I0312 20:48:30.204509 4038 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/d87b7a20-047e-4521-996c-9b11d81e9bd0-sno-bootstrap-files\") pod \"d87b7a20-047e-4521-996c-9b11d81e9bd0\" (UID: \"d87b7a20-047e-4521-996c-9b11d81e9bd0\") "
Mar 12 20:48:30.204529 master-0 kubenswrapper[4038]: I0312 20:48:30.204529 4038 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/d87b7a20-047e-4521-996c-9b11d81e9bd0-host-ca-bundle\") pod \"d87b7a20-047e-4521-996c-9b11d81e9bd0\" (UID: \"d87b7a20-047e-4521-996c-9b11d81e9bd0\") "
Mar 12 20:48:30.204860 master-0 kubenswrapper[4038]: I0312 20:48:30.204551 4038 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/d87b7a20-047e-4521-996c-9b11d81e9bd0-host-resolv-conf\") pod \"d87b7a20-047e-4521-996c-9b11d81e9bd0\" (UID: \"d87b7a20-047e-4521-996c-9b11d81e9bd0\") "
Mar 12 20:48:30.204860 master-0 kubenswrapper[4038]: I0312 20:48:30.204573 4038 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/d87b7a20-047e-4521-996c-9b11d81e9bd0-host-var-run-resolv-conf\") pod \"d87b7a20-047e-4521-996c-9b11d81e9bd0\" (UID: \"d87b7a20-047e-4521-996c-9b11d81e9bd0\") "
Mar 12 20:48:30.204860 master-0 kubenswrapper[4038]: I0312 20:48:30.204625 4038 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d87b7a20-047e-4521-996c-9b11d81e9bd0-host-ca-bundle" (OuterVolumeSpecName: "host-ca-bundle") pod "d87b7a20-047e-4521-996c-9b11d81e9bd0" (UID: "d87b7a20-047e-4521-996c-9b11d81e9bd0"). InnerVolumeSpecName "host-ca-bundle". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 20:48:30.204860 master-0 kubenswrapper[4038]: I0312 20:48:30.204618 4038 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d87b7a20-047e-4521-996c-9b11d81e9bd0-sno-bootstrap-files" (OuterVolumeSpecName: "sno-bootstrap-files") pod "d87b7a20-047e-4521-996c-9b11d81e9bd0" (UID: "d87b7a20-047e-4521-996c-9b11d81e9bd0"). InnerVolumeSpecName "sno-bootstrap-files".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 20:48:30.204860 master-0 kubenswrapper[4038]: I0312 20:48:30.204668 4038 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d87b7a20-047e-4521-996c-9b11d81e9bd0-host-resolv-conf" (OuterVolumeSpecName: "host-resolv-conf") pod "d87b7a20-047e-4521-996c-9b11d81e9bd0" (UID: "d87b7a20-047e-4521-996c-9b11d81e9bd0"). InnerVolumeSpecName "host-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 20:48:30.204860 master-0 kubenswrapper[4038]: I0312 20:48:30.204725 4038 reconciler_common.go:293] "Volume detached for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/d87b7a20-047e-4521-996c-9b11d81e9bd0-host-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 20:48:30.204860 master-0 kubenswrapper[4038]: I0312 20:48:30.204698 4038 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d87b7a20-047e-4521-996c-9b11d81e9bd0-host-var-run-resolv-conf" (OuterVolumeSpecName: "host-var-run-resolv-conf") pod "d87b7a20-047e-4521-996c-9b11d81e9bd0" (UID: "d87b7a20-047e-4521-996c-9b11d81e9bd0"). InnerVolumeSpecName "host-var-run-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 20:48:30.209918 master-0 kubenswrapper[4038]: I0312 20:48:30.209414 4038 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d87b7a20-047e-4521-996c-9b11d81e9bd0-kube-api-access-sqlqk" (OuterVolumeSpecName: "kube-api-access-sqlqk") pod "d87b7a20-047e-4521-996c-9b11d81e9bd0" (UID: "d87b7a20-047e-4521-996c-9b11d81e9bd0"). InnerVolumeSpecName "kube-api-access-sqlqk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 20:48:30.305571 master-0 kubenswrapper[4038]: I0312 20:48:30.305433 4038 reconciler_common.go:293] "Volume detached for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/d87b7a20-047e-4521-996c-9b11d81e9bd0-host-var-run-resolv-conf\") on node \"master-0\" DevicePath \"\"" Mar 12 20:48:30.305571 master-0 kubenswrapper[4038]: I0312 20:48:30.305489 4038 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqlqk\" (UniqueName: \"kubernetes.io/projected/d87b7a20-047e-4521-996c-9b11d81e9bd0-kube-api-access-sqlqk\") on node \"master-0\" DevicePath \"\"" Mar 12 20:48:30.305571 master-0 kubenswrapper[4038]: I0312 20:48:30.305503 4038 reconciler_common.go:293] "Volume detached for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/d87b7a20-047e-4521-996c-9b11d81e9bd0-sno-bootstrap-files\") on node \"master-0\" DevicePath \"\"" Mar 12 20:48:30.305571 master-0 kubenswrapper[4038]: I0312 20:48:30.305517 4038 reconciler_common.go:293] "Volume detached for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/d87b7a20-047e-4521-996c-9b11d81e9bd0-host-resolv-conf\") on node \"master-0\" DevicePath \"\"" Mar 12 20:48:31.108871 master-0 kubenswrapper[4038]: I0312 20:48:31.108774 4038 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-jffs8" Mar 12 20:48:31.109839 master-0 kubenswrapper[4038]: I0312 20:48:31.108758 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-jffs8" event={"ID":"d87b7a20-047e-4521-996c-9b11d81e9bd0","Type":"ContainerDied","Data":"f50107dedd1c9152a5e5a3ba57f0fbbfdfa748f7e7733cd6fddf45dabf0eb60d"} Mar 12 20:48:31.109839 master-0 kubenswrapper[4038]: I0312 20:48:31.109003 4038 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f50107dedd1c9152a5e5a3ba57f0fbbfdfa748f7e7733cd6fddf45dabf0eb60d" Mar 12 20:48:31.113377 master-0 kubenswrapper[4038]: I0312 20:48:31.113320 4038 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log" Mar 12 20:48:31.114295 master-0 kubenswrapper[4038]: I0312 20:48:31.114225 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"6f5c19a3178e0ac81f6a0a19cf655238a7d3c02526a49af4ee450188873df923"} Mar 12 20:48:31.230749 master-0 kubenswrapper[4038]: I0312 20:48:31.230529 4038 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podStartSLOduration=2.230490457 podStartE2EDuration="2.230490457s" podCreationTimestamp="2026-03-12 20:48:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 20:48:31.132685566 +0000 UTC m=+49.168367449" watchObservedRunningTime="2026-03-12 20:48:31.230490457 +0000 UTC m=+49.266172370" Mar 12 20:48:31.516935 master-0 kubenswrapper[4038]: I0312 20:48:31.515936 4038 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl" Mar 12 20:48:31.516935 master-0 kubenswrapper[4038]: E0312 20:48:31.516177 4038 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 12 20:48:31.516935 master-0 kubenswrapper[4038]: E0312 20:48:31.516311 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert podName:1a307172-f010-4bad-a3fc-31607574b069 nodeName:}" failed. No retries permitted until 2026-03-12 20:48:39.51628055 +0000 UTC m=+57.551962453 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert") pod "cluster-version-operator-745944c6b7-wddgl" (UID: "1a307172-f010-4bad-a3fc-31607574b069") : secret "cluster-version-operator-serving-cert" not found Mar 12 20:48:32.035032 master-0 kubenswrapper[4038]: I0312 20:48:32.034964 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/mtu-prober-6d5g7"] Mar 12 20:48:32.035299 master-0 kubenswrapper[4038]: E0312 20:48:32.035070 4038 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d87b7a20-047e-4521-996c-9b11d81e9bd0" containerName="assisted-installer-controller" Mar 12 20:48:32.035299 master-0 kubenswrapper[4038]: I0312 20:48:32.035087 4038 state_mem.go:107] "Deleted CPUSet assignment" podUID="d87b7a20-047e-4521-996c-9b11d81e9bd0" containerName="assisted-installer-controller" Mar 12 20:48:32.035299 master-0 kubenswrapper[4038]: I0312 20:48:32.035116 4038 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="d87b7a20-047e-4521-996c-9b11d81e9bd0" containerName="assisted-installer-controller" Mar 12 20:48:32.035419 master-0 kubenswrapper[4038]: I0312 20:48:32.035307 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-6d5g7" Mar 12 20:48:32.119790 master-0 kubenswrapper[4038]: I0312 20:48:32.119673 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsgv9\" (UniqueName: \"kubernetes.io/projected/4730d5f8-ab17-4ba2-ae27-d2de62821372-kube-api-access-xsgv9\") pod \"mtu-prober-6d5g7\" (UID: \"4730d5f8-ab17-4ba2-ae27-d2de62821372\") " pod="openshift-network-operator/mtu-prober-6d5g7" Mar 12 20:48:32.220522 master-0 kubenswrapper[4038]: I0312 20:48:32.220428 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsgv9\" (UniqueName: \"kubernetes.io/projected/4730d5f8-ab17-4ba2-ae27-d2de62821372-kube-api-access-xsgv9\") pod \"mtu-prober-6d5g7\" (UID: \"4730d5f8-ab17-4ba2-ae27-d2de62821372\") " pod="openshift-network-operator/mtu-prober-6d5g7" Mar 12 20:48:32.256663 master-0 kubenswrapper[4038]: I0312 20:48:32.256561 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsgv9\" (UniqueName: \"kubernetes.io/projected/4730d5f8-ab17-4ba2-ae27-d2de62821372-kube-api-access-xsgv9\") pod \"mtu-prober-6d5g7\" (UID: \"4730d5f8-ab17-4ba2-ae27-d2de62821372\") " pod="openshift-network-operator/mtu-prober-6d5g7" Mar 12 20:48:32.354351 master-0 kubenswrapper[4038]: I0312 20:48:32.354256 4038 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-6d5g7" Mar 12 20:48:32.366554 master-0 kubenswrapper[4038]: W0312 20:48:32.366473 4038 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4730d5f8_ab17_4ba2_ae27_d2de62821372.slice/crio-4efb65dddad13be04b474d4d401ef6dac8f4008861ce066cadd23656ae7ded22 WatchSource:0}: Error finding container 4efb65dddad13be04b474d4d401ef6dac8f4008861ce066cadd23656ae7ded22: Status 404 returned error can't find the container with id 4efb65dddad13be04b474d4d401ef6dac8f4008861ce066cadd23656ae7ded22 Mar 12 20:48:33.122527 master-0 kubenswrapper[4038]: I0312 20:48:33.122398 4038 generic.go:334] "Generic (PLEG): container finished" podID="4730d5f8-ab17-4ba2-ae27-d2de62821372" containerID="53c0edcd8673398e4384f928bbaa2737b8e228fa73c0aad115798fc1550e14b6" exitCode=0 Mar 12 20:48:33.122527 master-0 kubenswrapper[4038]: I0312 20:48:33.122482 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-6d5g7" event={"ID":"4730d5f8-ab17-4ba2-ae27-d2de62821372","Type":"ContainerDied","Data":"53c0edcd8673398e4384f928bbaa2737b8e228fa73c0aad115798fc1550e14b6"} Mar 12 20:48:33.122527 master-0 kubenswrapper[4038]: I0312 20:48:33.122534 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-6d5g7" event={"ID":"4730d5f8-ab17-4ba2-ae27-d2de62821372","Type":"ContainerStarted","Data":"4efb65dddad13be04b474d4d401ef6dac8f4008861ce066cadd23656ae7ded22"} Mar 12 20:48:34.150715 master-0 kubenswrapper[4038]: I0312 20:48:34.150273 4038 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-6d5g7" Mar 12 20:48:34.237850 master-0 kubenswrapper[4038]: I0312 20:48:34.237712 4038 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsgv9\" (UniqueName: \"kubernetes.io/projected/4730d5f8-ab17-4ba2-ae27-d2de62821372-kube-api-access-xsgv9\") pod \"4730d5f8-ab17-4ba2-ae27-d2de62821372\" (UID: \"4730d5f8-ab17-4ba2-ae27-d2de62821372\") " Mar 12 20:48:34.245170 master-0 kubenswrapper[4038]: I0312 20:48:34.245072 4038 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4730d5f8-ab17-4ba2-ae27-d2de62821372-kube-api-access-xsgv9" (OuterVolumeSpecName: "kube-api-access-xsgv9") pod "4730d5f8-ab17-4ba2-ae27-d2de62821372" (UID: "4730d5f8-ab17-4ba2-ae27-d2de62821372"). InnerVolumeSpecName "kube-api-access-xsgv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 20:48:34.338238 master-0 kubenswrapper[4038]: I0312 20:48:34.338179 4038 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsgv9\" (UniqueName: \"kubernetes.io/projected/4730d5f8-ab17-4ba2-ae27-d2de62821372-kube-api-access-xsgv9\") on node \"master-0\" DevicePath \"\"" Mar 12 20:48:35.130708 master-0 kubenswrapper[4038]: I0312 20:48:35.130608 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-6d5g7" event={"ID":"4730d5f8-ab17-4ba2-ae27-d2de62821372","Type":"ContainerDied","Data":"4efb65dddad13be04b474d4d401ef6dac8f4008861ce066cadd23656ae7ded22"} Mar 12 20:48:35.130708 master-0 kubenswrapper[4038]: I0312 20:48:35.130668 4038 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4efb65dddad13be04b474d4d401ef6dac8f4008861ce066cadd23656ae7ded22" Mar 12 20:48:35.130708 master-0 kubenswrapper[4038]: I0312 20:48:35.130684 4038 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-6d5g7" Mar 12 20:48:37.050465 master-0 kubenswrapper[4038]: I0312 20:48:37.050372 4038 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-network-operator/mtu-prober-6d5g7"] Mar 12 20:48:37.058014 master-0 kubenswrapper[4038]: I0312 20:48:37.057930 4038 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-network-operator/mtu-prober-6d5g7"] Mar 12 20:48:38.884236 master-0 kubenswrapper[4038]: I0312 20:48:38.884142 4038 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4730d5f8-ab17-4ba2-ae27-d2de62821372" path="/var/lib/kubelet/pods/4730d5f8-ab17-4ba2-ae27-d2de62821372/volumes" Mar 12 20:48:39.577466 master-0 kubenswrapper[4038]: I0312 20:48:39.577316 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl" Mar 12 20:48:39.577466 master-0 kubenswrapper[4038]: E0312 20:48:39.577514 4038 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 12 20:48:39.578163 master-0 kubenswrapper[4038]: E0312 20:48:39.577618 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert podName:1a307172-f010-4bad-a3fc-31607574b069 nodeName:}" failed. No retries permitted until 2026-03-12 20:48:55.577590871 +0000 UTC m=+73.613272764 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert") pod "cluster-version-operator-745944c6b7-wddgl" (UID: "1a307172-f010-4bad-a3fc-31607574b069") : secret "cluster-version-operator-serving-cert" not found Mar 12 20:48:41.928962 master-0 kubenswrapper[4038]: I0312 20:48:41.928366 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-gnmmm"] Mar 12 20:48:41.928962 master-0 kubenswrapper[4038]: E0312 20:48:41.928541 4038 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4730d5f8-ab17-4ba2-ae27-d2de62821372" containerName="prober" Mar 12 20:48:41.928962 master-0 kubenswrapper[4038]: I0312 20:48:41.928561 4038 state_mem.go:107] "Deleted CPUSet assignment" podUID="4730d5f8-ab17-4ba2-ae27-d2de62821372" containerName="prober" Mar 12 20:48:41.928962 master-0 kubenswrapper[4038]: I0312 20:48:41.928606 4038 memory_manager.go:354] "RemoveStaleState removing state" podUID="4730d5f8-ab17-4ba2-ae27-d2de62821372" containerName="prober" Mar 12 20:48:41.928962 master-0 kubenswrapper[4038]: I0312 20:48:41.928950 4038 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-gnmmm" Mar 12 20:48:41.930975 master-0 kubenswrapper[4038]: I0312 20:48:41.930909 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 12 20:48:41.931910 master-0 kubenswrapper[4038]: I0312 20:48:41.931845 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 12 20:48:41.932250 master-0 kubenswrapper[4038]: I0312 20:48:41.931900 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 12 20:48:41.932250 master-0 kubenswrapper[4038]: I0312 20:48:41.932219 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 12 20:48:41.996006 master-0 kubenswrapper[4038]: I0312 20:48:41.995895 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-var-lib-cni-multus\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:41.996006 master-0 kubenswrapper[4038]: I0312 20:48:41.995990 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-run-netns\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:41.996347 master-0 kubenswrapper[4038]: I0312 20:48:41.996040 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-multus-conf-dir\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " 
pod="openshift-multus/multus-gnmmm" Mar 12 20:48:41.996347 master-0 kubenswrapper[4038]: I0312 20:48:41.996126 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-etc-kubernetes\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:41.996347 master-0 kubenswrapper[4038]: I0312 20:48:41.996169 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-multus-cni-dir\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:41.996347 master-0 kubenswrapper[4038]: I0312 20:48:41.996207 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-run-k8s-cni-cncf-io\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:41.996347 master-0 kubenswrapper[4038]: I0312 20:48:41.996244 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-var-lib-kubelet\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:41.996347 master-0 kubenswrapper[4038]: I0312 20:48:41.996285 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-system-cni-dir\") pod \"multus-gnmmm\" (UID: 
\"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:41.996613 master-0 kubenswrapper[4038]: I0312 20:48:41.996419 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-os-release\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:41.996613 master-0 kubenswrapper[4038]: I0312 20:48:41.996471 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-cni-binary-copy\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:41.996613 master-0 kubenswrapper[4038]: I0312 20:48:41.996488 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-hostroot\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:41.996747 master-0 kubenswrapper[4038]: I0312 20:48:41.996540 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-multus-daemon-config\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:41.996747 master-0 kubenswrapper[4038]: I0312 20:48:41.996688 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-run-multus-certs\") pod \"multus-gnmmm\" (UID: 
\"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:41.996747 master-0 kubenswrapper[4038]: I0312 20:48:41.996723 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfsvw\" (UniqueName: \"kubernetes.io/projected/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-kube-api-access-mfsvw\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:41.996938 master-0 kubenswrapper[4038]: I0312 20:48:41.996761 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-multus-socket-dir-parent\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:41.996938 master-0 kubenswrapper[4038]: I0312 20:48:41.996798 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-cnibin\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:41.996938 master-0 kubenswrapper[4038]: I0312 20:48:41.996913 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-var-lib-cni-bin\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:42.097369 master-0 kubenswrapper[4038]: I0312 20:48:42.097267 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-cnibin\") pod \"multus-gnmmm\" (UID: 
\"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:42.097369 master-0 kubenswrapper[4038]: I0312 20:48:42.097380 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-var-lib-cni-bin\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:42.097678 master-0 kubenswrapper[4038]: I0312 20:48:42.097455 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-run-netns\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:42.097678 master-0 kubenswrapper[4038]: I0312 20:48:42.097460 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-cnibin\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:42.097678 master-0 kubenswrapper[4038]: I0312 20:48:42.097505 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-var-lib-cni-multus\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:42.097864 master-0 kubenswrapper[4038]: I0312 20:48:42.097665 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-var-lib-cni-bin\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:42.097864 
master-0 kubenswrapper[4038]: I0312 20:48:42.097766 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-multus-conf-dir\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:42.097959 master-0 kubenswrapper[4038]: I0312 20:48:42.097885 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-var-lib-cni-multus\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:42.097959 master-0 kubenswrapper[4038]: I0312 20:48:42.097889 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-etc-kubernetes\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:42.098039 master-0 kubenswrapper[4038]: I0312 20:48:42.097954 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-etc-kubernetes\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:42.098039 master-0 kubenswrapper[4038]: I0312 20:48:42.097801 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-run-netns\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:42.098039 master-0 kubenswrapper[4038]: I0312 20:48:42.097958 4038 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-run-k8s-cni-cncf-io\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:42.098039 master-0 kubenswrapper[4038]: I0312 20:48:42.098011 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-var-lib-kubelet\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:42.098201 master-0 kubenswrapper[4038]: I0312 20:48:42.098046 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-multus-cni-dir\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:42.098201 master-0 kubenswrapper[4038]: I0312 20:48:42.098008 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-run-k8s-cni-cncf-io\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:42.098201 master-0 kubenswrapper[4038]: I0312 20:48:42.098083 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-system-cni-dir\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:42.098201 master-0 kubenswrapper[4038]: I0312 20:48:42.098168 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-var-lib-kubelet\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:42.098201 master-0 kubenswrapper[4038]: I0312 20:48:42.098186 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-multus-cni-dir\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:42.098370 master-0 kubenswrapper[4038]: I0312 20:48:42.098194 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-hostroot\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:42.098370 master-0 kubenswrapper[4038]: I0312 20:48:42.098233 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-system-cni-dir\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:42.098370 master-0 kubenswrapper[4038]: I0312 20:48:42.098245 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-hostroot\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:42.098370 master-0 kubenswrapper[4038]: I0312 20:48:42.097903 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-multus-conf-dir\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " 
pod="openshift-multus/multus-gnmmm" Mar 12 20:48:42.098370 master-0 kubenswrapper[4038]: I0312 20:48:42.098275 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-multus-daemon-config\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:42.098583 master-0 kubenswrapper[4038]: I0312 20:48:42.098531 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-run-multus-certs\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:42.098637 master-0 kubenswrapper[4038]: I0312 20:48:42.098595 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfsvw\" (UniqueName: \"kubernetes.io/projected/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-kube-api-access-mfsvw\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:42.098716 master-0 kubenswrapper[4038]: I0312 20:48:42.098671 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-run-multus-certs\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:42.098875 master-0 kubenswrapper[4038]: I0312 20:48:42.098831 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-os-release\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:42.098945 master-0 
kubenswrapper[4038]: I0312 20:48:42.098909 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-cni-binary-copy\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:42.098999 master-0 kubenswrapper[4038]: I0312 20:48:42.098962 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-os-release\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:42.099037 master-0 kubenswrapper[4038]: I0312 20:48:42.098972 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-multus-socket-dir-parent\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:42.099235 master-0 kubenswrapper[4038]: I0312 20:48:42.099192 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-multus-socket-dir-parent\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:42.100283 master-0 kubenswrapper[4038]: I0312 20:48:42.100238 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-multus-daemon-config\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:42.100355 master-0 kubenswrapper[4038]: I0312 20:48:42.100302 4038 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-cni-binary-copy\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:42.127642 master-0 kubenswrapper[4038]: I0312 20:48:42.127561 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfsvw\" (UniqueName: \"kubernetes.io/projected/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-kube-api-access-mfsvw\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:48:42.134341 master-0 kubenswrapper[4038]: I0312 20:48:42.134291 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-trlxw"] Mar 12 20:48:42.134873 master-0 kubenswrapper[4038]: I0312 20:48:42.134840 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:48:42.137305 master-0 kubenswrapper[4038]: I0312 20:48:42.137261 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Mar 12 20:48:42.137402 master-0 kubenswrapper[4038]: I0312 20:48:42.137382 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 12 20:48:42.200324 master-0 kubenswrapper[4038]: I0312 20:48:42.200121 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-system-cni-dir\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:48:42.200324 master-0 kubenswrapper[4038]: I0312 20:48:42.200195 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:48:42.200324 master-0 kubenswrapper[4038]: I0312 20:48:42.200235 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-cnibin\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:48:42.200650 master-0 kubenswrapper[4038]: I0312 20:48:42.200322 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:48:42.200650 master-0 kubenswrapper[4038]: I0312 20:48:42.200404 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-os-release\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:48:42.200650 master-0 kubenswrapper[4038]: I0312 20:48:42.200437 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-cni-binary-copy\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " 
pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:48:42.200650 master-0 kubenswrapper[4038]: I0312 20:48:42.200467 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bk7q\" (UniqueName: \"kubernetes.io/projected/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-kube-api-access-7bk7q\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:48:42.200650 master-0 kubenswrapper[4038]: I0312 20:48:42.200590 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-whereabouts-configmap\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:48:42.247648 master-0 kubenswrapper[4038]: I0312 20:48:42.247523 4038 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-gnmmm" Mar 12 20:48:42.301418 master-0 kubenswrapper[4038]: I0312 20:48:42.301326 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-whereabouts-configmap\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:48:42.301919 master-0 kubenswrapper[4038]: I0312 20:48:42.301882 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-system-cni-dir\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:48:42.301974 master-0 kubenswrapper[4038]: I0312 20:48:42.301929 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:48:42.301974 master-0 kubenswrapper[4038]: I0312 20:48:42.301959 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-cnibin\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:48:42.302106 master-0 kubenswrapper[4038]: I0312 20:48:42.301983 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:48:42.302106 master-0 kubenswrapper[4038]: I0312 20:48:42.302008 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-os-release\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:48:42.302106 master-0 kubenswrapper[4038]: I0312 20:48:42.302027 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-cni-binary-copy\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:48:42.302106 master-0 kubenswrapper[4038]: I0312 20:48:42.302045 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bk7q\" (UniqueName: \"kubernetes.io/projected/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-kube-api-access-7bk7q\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:48:42.302424 master-0 kubenswrapper[4038]: I0312 20:48:42.302388 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-system-cni-dir\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:48:42.302562 master-0 kubenswrapper[4038]: I0312 
20:48:42.302537 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:48:42.302632 master-0 kubenswrapper[4038]: I0312 20:48:42.302578 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-cnibin\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:48:42.303255 master-0 kubenswrapper[4038]: I0312 20:48:42.303229 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:48:42.303330 master-0 kubenswrapper[4038]: I0312 20:48:42.303302 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-os-release\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:48:42.303474 master-0 kubenswrapper[4038]: I0312 20:48:42.303426 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-whereabouts-configmap\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " 
pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:48:42.303823 master-0 kubenswrapper[4038]: I0312 20:48:42.303778 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-cni-binary-copy\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:48:42.331300 master-0 kubenswrapper[4038]: I0312 20:48:42.331228 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bk7q\" (UniqueName: \"kubernetes.io/projected/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-kube-api-access-7bk7q\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:48:42.456227 master-0 kubenswrapper[4038]: I0312 20:48:42.455403 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:48:42.467930 master-0 kubenswrapper[4038]: W0312 20:48:42.467874 4038 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2545a80_0f00_4b19_ab3b_a9aa4bff98e8.slice/crio-c5a1c27c4b2c6ff820b190b8052ccd7411bb25c93bd0787d8acd418bb486bfe0 WatchSource:0}: Error finding container c5a1c27c4b2c6ff820b190b8052ccd7411bb25c93bd0787d8acd418bb486bfe0: Status 404 returned error can't find the container with id c5a1c27c4b2c6ff820b190b8052ccd7411bb25c93bd0787d8acd418bb486bfe0 Mar 12 20:48:42.919452 master-0 kubenswrapper[4038]: I0312 20:48:42.919371 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-brdcd"] Mar 12 20:48:42.920460 master-0 kubenswrapper[4038]: I0312 20:48:42.920438 4038 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 20:48:42.920641 master-0 kubenswrapper[4038]: E0312 20:48:42.920612 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-brdcd" podUID="c8660437-633f-4132-8a61-fe998abb493e" Mar 12 20:48:43.006326 master-0 kubenswrapper[4038]: I0312 20:48:43.006256 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs\") pod \"network-metrics-daemon-brdcd\" (UID: \"c8660437-633f-4132-8a61-fe998abb493e\") " pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 20:48:43.006990 master-0 kubenswrapper[4038]: I0312 20:48:43.006414 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlch7\" (UniqueName: \"kubernetes.io/projected/c8660437-633f-4132-8a61-fe998abb493e-kube-api-access-zlch7\") pod \"network-metrics-daemon-brdcd\" (UID: \"c8660437-633f-4132-8a61-fe998abb493e\") " pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 20:48:43.107564 master-0 kubenswrapper[4038]: I0312 20:48:43.107455 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs\") pod \"network-metrics-daemon-brdcd\" (UID: \"c8660437-633f-4132-8a61-fe998abb493e\") " pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 20:48:43.107564 master-0 kubenswrapper[4038]: I0312 20:48:43.107513 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlch7\" 
(UniqueName: \"kubernetes.io/projected/c8660437-633f-4132-8a61-fe998abb493e-kube-api-access-zlch7\") pod \"network-metrics-daemon-brdcd\" (UID: \"c8660437-633f-4132-8a61-fe998abb493e\") " pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 20:48:43.107958 master-0 kubenswrapper[4038]: E0312 20:48:43.107711 4038 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 20:48:43.107958 master-0 kubenswrapper[4038]: E0312 20:48:43.107881 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs podName:c8660437-633f-4132-8a61-fe998abb493e nodeName:}" failed. No retries permitted until 2026-03-12 20:48:43.607841746 +0000 UTC m=+61.643523609 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs") pod "network-metrics-daemon-brdcd" (UID: "c8660437-633f-4132-8a61-fe998abb493e") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 20:48:43.129121 master-0 kubenswrapper[4038]: I0312 20:48:43.128997 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlch7\" (UniqueName: \"kubernetes.io/projected/c8660437-633f-4132-8a61-fe998abb493e-kube-api-access-zlch7\") pod \"network-metrics-daemon-brdcd\" (UID: \"c8660437-633f-4132-8a61-fe998abb493e\") " pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 20:48:43.165495 master-0 kubenswrapper[4038]: I0312 20:48:43.165401 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-trlxw" event={"ID":"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8","Type":"ContainerStarted","Data":"c5a1c27c4b2c6ff820b190b8052ccd7411bb25c93bd0787d8acd418bb486bfe0"} Mar 12 20:48:43.167058 master-0 kubenswrapper[4038]: I0312 20:48:43.166989 4038 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gnmmm" event={"ID":"70e54b24-bf9d-42a8-b012-c7b073c6f6a6","Type":"ContainerStarted","Data":"e75e7b353307791eba0dce2c76a1443a45ff7401d92e0d636bcfdc09677d8a67"} Mar 12 20:48:43.620348 master-0 kubenswrapper[4038]: I0312 20:48:43.620272 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs\") pod \"network-metrics-daemon-brdcd\" (UID: \"c8660437-633f-4132-8a61-fe998abb493e\") " pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 20:48:43.620624 master-0 kubenswrapper[4038]: E0312 20:48:43.620452 4038 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 20:48:43.620624 master-0 kubenswrapper[4038]: E0312 20:48:43.620554 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs podName:c8660437-633f-4132-8a61-fe998abb493e nodeName:}" failed. No retries permitted until 2026-03-12 20:48:44.620527303 +0000 UTC m=+62.656209166 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs") pod "network-metrics-daemon-brdcd" (UID: "c8660437-633f-4132-8a61-fe998abb493e") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 20:48:44.629975 master-0 kubenswrapper[4038]: I0312 20:48:44.629900 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs\") pod \"network-metrics-daemon-brdcd\" (UID: \"c8660437-633f-4132-8a61-fe998abb493e\") " pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 20:48:44.630983 master-0 kubenswrapper[4038]: E0312 20:48:44.630075 4038 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 20:48:44.630983 master-0 kubenswrapper[4038]: E0312 20:48:44.630147 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs podName:c8660437-633f-4132-8a61-fe998abb493e nodeName:}" failed. No retries permitted until 2026-03-12 20:48:46.63013198 +0000 UTC m=+64.665813843 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs") pod "network-metrics-daemon-brdcd" (UID: "c8660437-633f-4132-8a61-fe998abb493e") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 20:48:44.882438 master-0 kubenswrapper[4038]: I0312 20:48:44.882284 4038 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 20:48:44.882662 master-0 kubenswrapper[4038]: E0312 20:48:44.882466 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-brdcd" podUID="c8660437-633f-4132-8a61-fe998abb493e" Mar 12 20:48:46.178798 master-0 kubenswrapper[4038]: I0312 20:48:46.178713 4038 generic.go:334] "Generic (PLEG): container finished" podID="a2545a80-0f00-4b19-ab3b-a9aa4bff98e8" containerID="f1489aa28f1df9edd0eec54c9b66a8a7e1d73e8d6be27d02b6cab3f145aeea26" exitCode=0 Mar 12 20:48:46.178798 master-0 kubenswrapper[4038]: I0312 20:48:46.178778 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-trlxw" event={"ID":"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8","Type":"ContainerDied","Data":"f1489aa28f1df9edd0eec54c9b66a8a7e1d73e8d6be27d02b6cab3f145aeea26"} Mar 12 20:48:46.650163 master-0 kubenswrapper[4038]: I0312 20:48:46.650099 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs\") pod \"network-metrics-daemon-brdcd\" (UID: \"c8660437-633f-4132-8a61-fe998abb493e\") " pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 20:48:46.650508 master-0 kubenswrapper[4038]: E0312 20:48:46.650235 4038 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 20:48:46.650508 master-0 kubenswrapper[4038]: E0312 20:48:46.650304 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs 
podName:c8660437-633f-4132-8a61-fe998abb493e nodeName:}" failed. No retries permitted until 2026-03-12 20:48:50.650283988 +0000 UTC m=+68.685965861 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs") pod "network-metrics-daemon-brdcd" (UID: "c8660437-633f-4132-8a61-fe998abb493e") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 20:48:46.879834 master-0 kubenswrapper[4038]: I0312 20:48:46.879732 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 20:48:46.880082 master-0 kubenswrapper[4038]: E0312 20:48:46.880005 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-brdcd" podUID="c8660437-633f-4132-8a61-fe998abb493e" Mar 12 20:48:48.880347 master-0 kubenswrapper[4038]: I0312 20:48:48.880192 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 20:48:48.884733 master-0 kubenswrapper[4038]: E0312 20:48:48.880445 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-brdcd" podUID="c8660437-633f-4132-8a61-fe998abb493e" Mar 12 20:48:50.685183 master-0 kubenswrapper[4038]: I0312 20:48:50.685124 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs\") pod \"network-metrics-daemon-brdcd\" (UID: \"c8660437-633f-4132-8a61-fe998abb493e\") " pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 20:48:50.686069 master-0 kubenswrapper[4038]: E0312 20:48:50.685283 4038 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 20:48:50.686069 master-0 kubenswrapper[4038]: E0312 20:48:50.685350 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs podName:c8660437-633f-4132-8a61-fe998abb493e nodeName:}" failed. No retries permitted until 2026-03-12 20:48:58.685333834 +0000 UTC m=+76.721015697 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs") pod "network-metrics-daemon-brdcd" (UID: "c8660437-633f-4132-8a61-fe998abb493e") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 20:48:50.881896 master-0 kubenswrapper[4038]: I0312 20:48:50.881827 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 20:48:50.882223 master-0 kubenswrapper[4038]: E0312 20:48:50.881998 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-brdcd" podUID="c8660437-633f-4132-8a61-fe998abb493e" Mar 12 20:48:52.881158 master-0 kubenswrapper[4038]: I0312 20:48:52.879113 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 20:48:52.881158 master-0 kubenswrapper[4038]: E0312 20:48:52.879581 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-brdcd" podUID="c8660437-633f-4132-8a61-fe998abb493e" Mar 12 20:48:54.322482 master-0 kubenswrapper[4038]: I0312 20:48:54.322099 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t"] Mar 12 20:48:54.323226 master-0 kubenswrapper[4038]: I0312 20:48:54.322924 4038 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t" Mar 12 20:48:54.325401 master-0 kubenswrapper[4038]: I0312 20:48:54.325361 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 12 20:48:54.325495 master-0 kubenswrapper[4038]: I0312 20:48:54.325435 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 12 20:48:54.325495 master-0 kubenswrapper[4038]: I0312 20:48:54.325452 4038 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 12 20:48:54.326239 master-0 kubenswrapper[4038]: I0312 20:48:54.325767 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 12 20:48:54.326239 master-0 kubenswrapper[4038]: I0312 20:48:54.325798 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 12 20:48:54.458861 master-0 kubenswrapper[4038]: I0312 20:48:54.458533 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d862a346-ec4d-46f6-a3e2-ea8759ea0111-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-vq95t\" (UID: \"d862a346-ec4d-46f6-a3e2-ea8759ea0111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t" Mar 12 20:48:54.458861 master-0 kubenswrapper[4038]: I0312 20:48:54.458573 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx64q\" (UniqueName: \"kubernetes.io/projected/d862a346-ec4d-46f6-a3e2-ea8759ea0111-kube-api-access-jx64q\") pod \"ovnkube-control-plane-66b55d57d-vq95t\" (UID: \"d862a346-ec4d-46f6-a3e2-ea8759ea0111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t" Mar 12 
20:48:54.458861 master-0 kubenswrapper[4038]: I0312 20:48:54.458594 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d862a346-ec4d-46f6-a3e2-ea8759ea0111-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-vq95t\" (UID: \"d862a346-ec4d-46f6-a3e2-ea8759ea0111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t" Mar 12 20:48:54.458861 master-0 kubenswrapper[4038]: I0312 20:48:54.458631 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d862a346-ec4d-46f6-a3e2-ea8759ea0111-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-vq95t\" (UID: \"d862a346-ec4d-46f6-a3e2-ea8759ea0111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t" Mar 12 20:48:54.528847 master-0 kubenswrapper[4038]: I0312 20:48:54.528769 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wr664"] Mar 12 20:48:54.530381 master-0 kubenswrapper[4038]: I0312 20:48:54.530256 4038 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.533221 master-0 kubenswrapper[4038]: I0312 20:48:54.533004 4038 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 12 20:48:54.533221 master-0 kubenswrapper[4038]: I0312 20:48:54.533130 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 12 20:48:54.559840 master-0 kubenswrapper[4038]: I0312 20:48:54.559773 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d862a346-ec4d-46f6-a3e2-ea8759ea0111-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-vq95t\" (UID: \"d862a346-ec4d-46f6-a3e2-ea8759ea0111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t" Mar 12 20:48:54.559840 master-0 kubenswrapper[4038]: I0312 20:48:54.559834 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx64q\" (UniqueName: \"kubernetes.io/projected/d862a346-ec4d-46f6-a3e2-ea8759ea0111-kube-api-access-jx64q\") pod \"ovnkube-control-plane-66b55d57d-vq95t\" (UID: \"d862a346-ec4d-46f6-a3e2-ea8759ea0111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t" Mar 12 20:48:54.559840 master-0 kubenswrapper[4038]: I0312 20:48:54.559863 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d862a346-ec4d-46f6-a3e2-ea8759ea0111-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-vq95t\" (UID: \"d862a346-ec4d-46f6-a3e2-ea8759ea0111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t" Mar 12 20:48:54.560260 master-0 kubenswrapper[4038]: I0312 20:48:54.559893 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/d862a346-ec4d-46f6-a3e2-ea8759ea0111-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-vq95t\" (UID: \"d862a346-ec4d-46f6-a3e2-ea8759ea0111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t" Mar 12 20:48:54.561025 master-0 kubenswrapper[4038]: I0312 20:48:54.560997 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d862a346-ec4d-46f6-a3e2-ea8759ea0111-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-vq95t\" (UID: \"d862a346-ec4d-46f6-a3e2-ea8759ea0111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t" Mar 12 20:48:54.561151 master-0 kubenswrapper[4038]: I0312 20:48:54.561038 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d862a346-ec4d-46f6-a3e2-ea8759ea0111-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-vq95t\" (UID: \"d862a346-ec4d-46f6-a3e2-ea8759ea0111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t" Mar 12 20:48:54.566007 master-0 kubenswrapper[4038]: I0312 20:48:54.565942 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d862a346-ec4d-46f6-a3e2-ea8759ea0111-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-vq95t\" (UID: \"d862a346-ec4d-46f6-a3e2-ea8759ea0111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t" Mar 12 20:48:54.581445 master-0 kubenswrapper[4038]: I0312 20:48:54.581368 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx64q\" (UniqueName: \"kubernetes.io/projected/d862a346-ec4d-46f6-a3e2-ea8759ea0111-kube-api-access-jx64q\") pod \"ovnkube-control-plane-66b55d57d-vq95t\" (UID: \"d862a346-ec4d-46f6-a3e2-ea8759ea0111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t" Mar 12 
20:48:54.644450 master-0 kubenswrapper[4038]: I0312 20:48:54.644369 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t" Mar 12 20:48:54.660522 master-0 kubenswrapper[4038]: I0312 20:48:54.660474 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-run-ovn\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.660750 master-0 kubenswrapper[4038]: I0312 20:48:54.660598 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-cni-netd\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.660750 master-0 kubenswrapper[4038]: I0312 20:48:54.660661 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6e737121-cc77-4d22-a628-c4b4406b4698-ovnkube-config\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.660750 master-0 kubenswrapper[4038]: I0312 20:48:54.660694 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-run-netns\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.660750 master-0 kubenswrapper[4038]: I0312 20:48:54.660727 4038 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6e737121-cc77-4d22-a628-c4b4406b4698-ovnkube-script-lib\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.661019 master-0 kubenswrapper[4038]: I0312 20:48:54.660781 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-log-socket\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.661019 master-0 kubenswrapper[4038]: I0312 20:48:54.660849 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-etc-openvswitch\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.661019 master-0 kubenswrapper[4038]: I0312 20:48:54.660880 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-run-systemd\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.661019 master-0 kubenswrapper[4038]: I0312 20:48:54.660982 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.661246 master-0 kubenswrapper[4038]: I0312 20:48:54.661031 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-var-lib-openvswitch\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.661246 master-0 kubenswrapper[4038]: I0312 20:48:54.661052 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-run-openvswitch\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.661246 master-0 kubenswrapper[4038]: I0312 20:48:54.661070 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-systemd-units\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.661246 master-0 kubenswrapper[4038]: I0312 20:48:54.661087 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6e737121-cc77-4d22-a628-c4b4406b4698-ovn-node-metrics-cert\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.661246 master-0 kubenswrapper[4038]: I0312 20:48:54.661129 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-run-ovn-kubernetes\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.661246 master-0 kubenswrapper[4038]: I0312 20:48:54.661212 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-cni-bin\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.661569 master-0 kubenswrapper[4038]: I0312 20:48:54.661275 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-slash\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.661569 master-0 kubenswrapper[4038]: I0312 20:48:54.661338 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6e737121-cc77-4d22-a628-c4b4406b4698-env-overrides\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.661569 master-0 kubenswrapper[4038]: I0312 20:48:54.661383 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-node-log\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.661569 master-0 kubenswrapper[4038]: I0312 20:48:54.661425 4038 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-kubelet\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.661569 master-0 kubenswrapper[4038]: I0312 20:48:54.661451 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq9zx\" (UniqueName: \"kubernetes.io/projected/6e737121-cc77-4d22-a628-c4b4406b4698-kube-api-access-zq9zx\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.763007 master-0 kubenswrapper[4038]: I0312 20:48:54.762785 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-etc-openvswitch\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.763007 master-0 kubenswrapper[4038]: I0312 20:48:54.762890 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-run-systemd\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.763324 master-0 kubenswrapper[4038]: I0312 20:48:54.763020 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-etc-openvswitch\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.763324 master-0 kubenswrapper[4038]: I0312 20:48:54.763081 4038 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-var-lib-openvswitch\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.763324 master-0 kubenswrapper[4038]: I0312 20:48:54.763170 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-var-lib-openvswitch\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.763324 master-0 kubenswrapper[4038]: I0312 20:48:54.763234 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-run-systemd\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.763324 master-0 kubenswrapper[4038]: I0312 20:48:54.763270 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-run-openvswitch\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.763324 master-0 kubenswrapper[4038]: I0312 20:48:54.763303 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.763799 master-0 
kubenswrapper[4038]: I0312 20:48:54.763337 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-systemd-units\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.763799 master-0 kubenswrapper[4038]: I0312 20:48:54.763370 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6e737121-cc77-4d22-a628-c4b4406b4698-ovn-node-metrics-cert\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.763799 master-0 kubenswrapper[4038]: I0312 20:48:54.763421 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-run-ovn-kubernetes\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.763799 master-0 kubenswrapper[4038]: I0312 20:48:54.763452 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-cni-bin\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.763799 master-0 kubenswrapper[4038]: I0312 20:48:54.763488 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-slash\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.763799 master-0 
kubenswrapper[4038]: I0312 20:48:54.763533 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6e737121-cc77-4d22-a628-c4b4406b4698-env-overrides\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.763799 master-0 kubenswrapper[4038]: I0312 20:48:54.763562 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-node-log\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.763799 master-0 kubenswrapper[4038]: I0312 20:48:54.763591 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-kubelet\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.763799 master-0 kubenswrapper[4038]: I0312 20:48:54.763623 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq9zx\" (UniqueName: \"kubernetes.io/projected/6e737121-cc77-4d22-a628-c4b4406b4698-kube-api-access-zq9zx\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.763799 master-0 kubenswrapper[4038]: I0312 20:48:54.763656 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-run-ovn\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.763799 master-0 kubenswrapper[4038]: I0312 
20:48:54.763686 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-cni-netd\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.763799 master-0 kubenswrapper[4038]: I0312 20:48:54.763714 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6e737121-cc77-4d22-a628-c4b4406b4698-ovnkube-config\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.763799 master-0 kubenswrapper[4038]: I0312 20:48:54.763744 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6e737121-cc77-4d22-a628-c4b4406b4698-ovnkube-script-lib\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.763799 master-0 kubenswrapper[4038]: I0312 20:48:54.763782 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-run-netns\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.764623 master-0 kubenswrapper[4038]: I0312 20:48:54.763958 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-run-openvswitch\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.764623 master-0 kubenswrapper[4038]: I0312 20:48:54.764035 
4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.764623 master-0 kubenswrapper[4038]: I0312 20:48:54.764053 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-run-ovn\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.764623 master-0 kubenswrapper[4038]: I0312 20:48:54.764091 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-cni-netd\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.764623 master-0 kubenswrapper[4038]: I0312 20:48:54.764127 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-systemd-units\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.764623 master-0 kubenswrapper[4038]: I0312 20:48:54.764132 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-kubelet\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.764623 master-0 kubenswrapper[4038]: I0312 20:48:54.764521 4038 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-node-log\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.764623 master-0 kubenswrapper[4038]: I0312 20:48:54.764556 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-run-netns\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.764623 master-0 kubenswrapper[4038]: I0312 20:48:54.764580 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-cni-bin\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.764623 master-0 kubenswrapper[4038]: I0312 20:48:54.764603 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-run-ovn-kubernetes\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.764623 master-0 kubenswrapper[4038]: I0312 20:48:54.764628 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-slash\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:48:54.765308 master-0 kubenswrapper[4038]: I0312 20:48:54.764898 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6e737121-cc77-4d22-a628-c4b4406b4698-env-overrides\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664"
Mar 12 20:48:54.765308 master-0 kubenswrapper[4038]: I0312 20:48:54.763839 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-log-socket\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664"
Mar 12 20:48:54.765308 master-0 kubenswrapper[4038]: I0312 20:48:54.765080 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-log-socket\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664"
Mar 12 20:48:54.765308 master-0 kubenswrapper[4038]: I0312 20:48:54.765227 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6e737121-cc77-4d22-a628-c4b4406b4698-ovnkube-script-lib\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664"
Mar 12 20:48:54.765308 master-0 kubenswrapper[4038]: I0312 20:48:54.765302 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6e737121-cc77-4d22-a628-c4b4406b4698-ovnkube-config\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664"
Mar 12 20:48:54.767779 master-0 kubenswrapper[4038]: I0312 20:48:54.767720 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6e737121-cc77-4d22-a628-c4b4406b4698-ovn-node-metrics-cert\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664"
Mar 12 20:48:54.787341 master-0 kubenswrapper[4038]: I0312 20:48:54.787289 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq9zx\" (UniqueName: \"kubernetes.io/projected/6e737121-cc77-4d22-a628-c4b4406b4698-kube-api-access-zq9zx\") pod \"ovnkube-node-wr664\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " pod="openshift-ovn-kubernetes/ovnkube-node-wr664"
Mar 12 20:48:54.842002 master-0 kubenswrapper[4038]: I0312 20:48:54.841955 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wr664"
Mar 12 20:48:54.879677 master-0 kubenswrapper[4038]: I0312 20:48:54.879623 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-brdcd"
Mar 12 20:48:54.880360 master-0 kubenswrapper[4038]: E0312 20:48:54.880317 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-brdcd" podUID="c8660437-633f-4132-8a61-fe998abb493e"
Mar 12 20:48:55.672793 master-0 kubenswrapper[4038]: I0312 20:48:55.672704 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl"
Mar 12 20:48:55.673449 master-0 kubenswrapper[4038]: E0312 20:48:55.672963 4038 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 12 20:48:55.673449 master-0 kubenswrapper[4038]: E0312 20:48:55.673070 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert podName:1a307172-f010-4bad-a3fc-31607574b069 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:27.67304592 +0000 UTC m=+105.708727773 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert") pod "cluster-version-operator-745944c6b7-wddgl" (UID: "1a307172-f010-4bad-a3fc-31607574b069") : secret "cluster-version-operator-serving-cert" not found
Mar 12 20:48:55.705174 master-0 kubenswrapper[4038]: W0312 20:48:55.705116 4038 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd862a346_ec4d_46f6_a3e2_ea8759ea0111.slice/crio-dd04b8d751040cd7b439f04efd47f1ce311ca66ebabc5940831335b95351810c WatchSource:0}: Error finding container dd04b8d751040cd7b439f04efd47f1ce311ca66ebabc5940831335b95351810c: Status 404 returned error can't find the container with id dd04b8d751040cd7b439f04efd47f1ce311ca66ebabc5940831335b95351810c
Mar 12 20:48:55.706640 master-0 kubenswrapper[4038]: W0312 20:48:55.706603 4038 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e737121_cc77_4d22_a628_c4b4406b4698.slice/crio-bbde71f4d6a08e6432aff49678942efe1e239e2a38fc8d45e30b413ea5aea68e WatchSource:0}: Error finding container bbde71f4d6a08e6432aff49678942efe1e239e2a38fc8d45e30b413ea5aea68e: Status 404 returned error can't find the container with id bbde71f4d6a08e6432aff49678942efe1e239e2a38fc8d45e30b413ea5aea68e
Mar 12 20:48:56.207485 master-0 kubenswrapper[4038]: I0312 20:48:56.207412 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gnmmm" event={"ID":"70e54b24-bf9d-42a8-b012-c7b073c6f6a6","Type":"ContainerStarted","Data":"9fc572fb2906a0e4d7e5e7d37a46ef927d6b526386f4fd873bd3dfab23934371"}
Mar 12 20:48:56.208504 master-0 kubenswrapper[4038]: I0312 20:48:56.208459 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr664" event={"ID":"6e737121-cc77-4d22-a628-c4b4406b4698","Type":"ContainerStarted","Data":"bbde71f4d6a08e6432aff49678942efe1e239e2a38fc8d45e30b413ea5aea68e"}
Mar 12 20:48:56.209692 master-0 kubenswrapper[4038]: I0312 20:48:56.209654 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t" event={"ID":"d862a346-ec4d-46f6-a3e2-ea8759ea0111","Type":"ContainerStarted","Data":"b75413bc2d68263f6450350d06d2669594d39d15aadd2dd7bce1526de1cf8079"}
Mar 12 20:48:56.209803 master-0 kubenswrapper[4038]: I0312 20:48:56.209695 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t" event={"ID":"d862a346-ec4d-46f6-a3e2-ea8759ea0111","Type":"ContainerStarted","Data":"dd04b8d751040cd7b439f04efd47f1ce311ca66ebabc5940831335b95351810c"}
Mar 12 20:48:56.212787 master-0 kubenswrapper[4038]: I0312 20:48:56.212739 4038 generic.go:334] "Generic (PLEG): container finished" podID="a2545a80-0f00-4b19-ab3b-a9aa4bff98e8" containerID="4ffd6f14ac61ffabe5bcfc6578f791f07638af2dede3fe79398a339525e37d25" exitCode=0
Mar 12 20:48:56.212787 master-0 kubenswrapper[4038]: I0312 20:48:56.212780 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-trlxw" event={"ID":"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8","Type":"ContainerDied","Data":"4ffd6f14ac61ffabe5bcfc6578f791f07638af2dede3fe79398a339525e37d25"}
Mar 12 20:48:56.229174 master-0 kubenswrapper[4038]: I0312 20:48:56.229056 4038 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-gnmmm" podStartSLOduration=1.718794897 podStartE2EDuration="15.229024514s" podCreationTimestamp="2026-03-12 20:48:41 +0000 UTC" firstStartedPulling="2026-03-12 20:48:42.264654785 +0000 UTC m=+60.300336688" lastFinishedPulling="2026-03-12 20:48:55.774884442 +0000 UTC m=+73.810566305" observedRunningTime="2026-03-12 20:48:56.226505811 +0000 UTC m=+74.262187684" watchObservedRunningTime="2026-03-12 20:48:56.229024514 +0000 UTC m=+74.264706417"
Mar 12 20:48:56.879986 master-0 kubenswrapper[4038]: I0312 20:48:56.879934 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-brdcd"
Mar 12 20:48:56.880778 master-0 kubenswrapper[4038]: E0312 20:48:56.880098 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-brdcd" podUID="c8660437-633f-4132-8a61-fe998abb493e"
Mar 12 20:48:57.513144 master-0 kubenswrapper[4038]: I0312 20:48:57.513043 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-h26wj"]
Mar 12 20:48:57.514613 master-0 kubenswrapper[4038]: I0312 20:48:57.514538 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h26wj"
Mar 12 20:48:57.514672 master-0 kubenswrapper[4038]: E0312 20:48:57.514639 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h26wj" podUID="5ad63582-bd60-41a1-9622-ee73ccf8a5e8"
Mar 12 20:48:57.690865 master-0 kubenswrapper[4038]: I0312 20:48:57.690243 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csxwl\" (UniqueName: \"kubernetes.io/projected/5ad63582-bd60-41a1-9622-ee73ccf8a5e8-kube-api-access-csxwl\") pod \"network-check-target-h26wj\" (UID: \"5ad63582-bd60-41a1-9622-ee73ccf8a5e8\") " pod="openshift-network-diagnostics/network-check-target-h26wj"
Mar 12 20:48:57.791014 master-0 kubenswrapper[4038]: I0312 20:48:57.790837 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csxwl\" (UniqueName: \"kubernetes.io/projected/5ad63582-bd60-41a1-9622-ee73ccf8a5e8-kube-api-access-csxwl\") pod \"network-check-target-h26wj\" (UID: \"5ad63582-bd60-41a1-9622-ee73ccf8a5e8\") " pod="openshift-network-diagnostics/network-check-target-h26wj"
Mar 12 20:48:57.803599 master-0 kubenswrapper[4038]: E0312 20:48:57.803535 4038 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 12 20:48:57.803599 master-0 kubenswrapper[4038]: E0312 20:48:57.803584 4038 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 12 20:48:57.803599 master-0 kubenswrapper[4038]: E0312 20:48:57.803600 4038 projected.go:194] Error preparing data for projected volume kube-api-access-csxwl for pod openshift-network-diagnostics/network-check-target-h26wj: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 20:48:57.803902 master-0 kubenswrapper[4038]: E0312 20:48:57.803674 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5ad63582-bd60-41a1-9622-ee73ccf8a5e8-kube-api-access-csxwl podName:5ad63582-bd60-41a1-9622-ee73ccf8a5e8 nodeName:}" failed. No retries permitted until 2026-03-12 20:48:58.303650864 +0000 UTC m=+76.339332727 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-csxwl" (UniqueName: "kubernetes.io/projected/5ad63582-bd60-41a1-9622-ee73ccf8a5e8-kube-api-access-csxwl") pod "network-check-target-h26wj" (UID: "5ad63582-bd60-41a1-9622-ee73ccf8a5e8") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 20:48:58.396934 master-0 kubenswrapper[4038]: I0312 20:48:58.396846 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csxwl\" (UniqueName: \"kubernetes.io/projected/5ad63582-bd60-41a1-9622-ee73ccf8a5e8-kube-api-access-csxwl\") pod \"network-check-target-h26wj\" (UID: \"5ad63582-bd60-41a1-9622-ee73ccf8a5e8\") " pod="openshift-network-diagnostics/network-check-target-h26wj"
Mar 12 20:48:58.397414 master-0 kubenswrapper[4038]: E0312 20:48:58.397071 4038 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 12 20:48:58.397414 master-0 kubenswrapper[4038]: E0312 20:48:58.397102 4038 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 12 20:48:58.397414 master-0 kubenswrapper[4038]: E0312 20:48:58.397115 4038 projected.go:194] Error preparing data for projected volume kube-api-access-csxwl for pod openshift-network-diagnostics/network-check-target-h26wj: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 20:48:58.397414 master-0 kubenswrapper[4038]: E0312 20:48:58.397181 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5ad63582-bd60-41a1-9622-ee73ccf8a5e8-kube-api-access-csxwl podName:5ad63582-bd60-41a1-9622-ee73ccf8a5e8 nodeName:}" failed. No retries permitted until 2026-03-12 20:48:59.397162291 +0000 UTC m=+77.432844154 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-csxwl" (UniqueName: "kubernetes.io/projected/5ad63582-bd60-41a1-9622-ee73ccf8a5e8-kube-api-access-csxwl") pod "network-check-target-h26wj" (UID: "5ad63582-bd60-41a1-9622-ee73ccf8a5e8") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 20:48:58.699198 master-0 kubenswrapper[4038]: I0312 20:48:58.699111 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs\") pod \"network-metrics-daemon-brdcd\" (UID: \"c8660437-633f-4132-8a61-fe998abb493e\") " pod="openshift-multus/network-metrics-daemon-brdcd"
Mar 12 20:48:58.699501 master-0 kubenswrapper[4038]: E0312 20:48:58.699295 4038 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 12 20:48:58.699501 master-0 kubenswrapper[4038]: E0312 20:48:58.699408 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs podName:c8660437-633f-4132-8a61-fe998abb493e nodeName:}" failed. No retries permitted until 2026-03-12 20:49:14.699368885 +0000 UTC m=+92.735050748 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs") pod "network-metrics-daemon-brdcd" (UID: "c8660437-633f-4132-8a61-fe998abb493e") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 12 20:48:58.879976 master-0 kubenswrapper[4038]: I0312 20:48:58.879906 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-brdcd"
Mar 12 20:48:58.880262 master-0 kubenswrapper[4038]: I0312 20:48:58.880028 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h26wj"
Mar 12 20:48:58.880262 master-0 kubenswrapper[4038]: E0312 20:48:58.880034 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-brdcd" podUID="c8660437-633f-4132-8a61-fe998abb493e"
Mar 12 20:48:58.881488 master-0 kubenswrapper[4038]: E0312 20:48:58.880852 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h26wj" podUID="5ad63582-bd60-41a1-9622-ee73ccf8a5e8"
Mar 12 20:48:59.224631 master-0 kubenswrapper[4038]: I0312 20:48:59.224550 4038 generic.go:334] "Generic (PLEG): container finished" podID="a2545a80-0f00-4b19-ab3b-a9aa4bff98e8" containerID="f5be33e5e1cb19154b4137bf5e307d01b21c816569a4f493dfb02ba284a02c43" exitCode=0
Mar 12 20:48:59.224631 master-0 kubenswrapper[4038]: I0312 20:48:59.224617 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-trlxw" event={"ID":"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8","Type":"ContainerDied","Data":"f5be33e5e1cb19154b4137bf5e307d01b21c816569a4f493dfb02ba284a02c43"}
Mar 12 20:48:59.405799 master-0 kubenswrapper[4038]: I0312 20:48:59.405729 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csxwl\" (UniqueName: \"kubernetes.io/projected/5ad63582-bd60-41a1-9622-ee73ccf8a5e8-kube-api-access-csxwl\") pod \"network-check-target-h26wj\" (UID: \"5ad63582-bd60-41a1-9622-ee73ccf8a5e8\") " pod="openshift-network-diagnostics/network-check-target-h26wj"
Mar 12 20:48:59.406868 master-0 kubenswrapper[4038]: E0312 20:48:59.406008 4038 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 12 20:48:59.406868 master-0 kubenswrapper[4038]: E0312 20:48:59.406034 4038 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 12 20:48:59.406868 master-0 kubenswrapper[4038]: E0312 20:48:59.406051 4038 projected.go:194] Error preparing data for projected volume kube-api-access-csxwl for pod openshift-network-diagnostics/network-check-target-h26wj: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 20:48:59.406868 master-0 kubenswrapper[4038]: E0312 20:48:59.406116 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5ad63582-bd60-41a1-9622-ee73ccf8a5e8-kube-api-access-csxwl podName:5ad63582-bd60-41a1-9622-ee73ccf8a5e8 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:01.406095016 +0000 UTC m=+79.441776879 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-csxwl" (UniqueName: "kubernetes.io/projected/5ad63582-bd60-41a1-9622-ee73ccf8a5e8-kube-api-access-csxwl") pod "network-check-target-h26wj" (UID: "5ad63582-bd60-41a1-9622-ee73ccf8a5e8") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 20:49:00.883755 master-0 kubenswrapper[4038]: I0312 20:49:00.883677 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-brdcd"
Mar 12 20:49:00.884274 master-0 kubenswrapper[4038]: E0312 20:49:00.883909 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-brdcd" podUID="c8660437-633f-4132-8a61-fe998abb493e"
Mar 12 20:49:00.884568 master-0 kubenswrapper[4038]: I0312 20:49:00.884544 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h26wj"
Mar 12 20:49:00.884672 master-0 kubenswrapper[4038]: E0312 20:49:00.884641 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h26wj" podUID="5ad63582-bd60-41a1-9622-ee73ccf8a5e8"
Mar 12 20:49:01.425131 master-0 kubenswrapper[4038]: I0312 20:49:01.424773 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csxwl\" (UniqueName: \"kubernetes.io/projected/5ad63582-bd60-41a1-9622-ee73ccf8a5e8-kube-api-access-csxwl\") pod \"network-check-target-h26wj\" (UID: \"5ad63582-bd60-41a1-9622-ee73ccf8a5e8\") " pod="openshift-network-diagnostics/network-check-target-h26wj"
Mar 12 20:49:01.425131 master-0 kubenswrapper[4038]: E0312 20:49:01.424992 4038 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 12 20:49:01.425131 master-0 kubenswrapper[4038]: E0312 20:49:01.425013 4038 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 12 20:49:01.425131 master-0 kubenswrapper[4038]: E0312 20:49:01.425026 4038 projected.go:194] Error preparing data for projected volume kube-api-access-csxwl for pod openshift-network-diagnostics/network-check-target-h26wj: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 20:49:01.425131 master-0 kubenswrapper[4038]: E0312 20:49:01.425090 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5ad63582-bd60-41a1-9622-ee73ccf8a5e8-kube-api-access-csxwl podName:5ad63582-bd60-41a1-9622-ee73ccf8a5e8 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:05.425073275 +0000 UTC m=+83.460755158 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-csxwl" (UniqueName: "kubernetes.io/projected/5ad63582-bd60-41a1-9622-ee73ccf8a5e8-kube-api-access-csxwl") pod "network-check-target-h26wj" (UID: "5ad63582-bd60-41a1-9622-ee73ccf8a5e8") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 20:49:01.615004 master-0 kubenswrapper[4038]: I0312 20:49:01.614926 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-48hk7"]
Mar 12 20:49:01.615543 master-0 kubenswrapper[4038]: I0312 20:49:01.615506 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-48hk7"
Mar 12 20:49:01.618423 master-0 kubenswrapper[4038]: I0312 20:49:01.618378 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 12 20:49:01.619387 master-0 kubenswrapper[4038]: I0312 20:49:01.619345 4038 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 12 20:49:01.619597 master-0 kubenswrapper[4038]: I0312 20:49:01.619563 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 12 20:49:01.619983 master-0 kubenswrapper[4038]: I0312 20:49:01.619937 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Mar 12 20:49:01.620589 master-0 kubenswrapper[4038]: I0312 20:49:01.620540 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 12 20:49:01.727345 master-0 kubenswrapper[4038]: I0312 20:49:01.727228 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/426efd5c-69e1-43e5-835a-6e1c4ef85720-webhook-cert\") pod \"network-node-identity-48hk7\" (UID: \"426efd5c-69e1-43e5-835a-6e1c4ef85720\") " pod="openshift-network-node-identity/network-node-identity-48hk7"
Mar 12 20:49:01.727345 master-0 kubenswrapper[4038]: I0312 20:49:01.727281 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/426efd5c-69e1-43e5-835a-6e1c4ef85720-env-overrides\") pod \"network-node-identity-48hk7\" (UID: \"426efd5c-69e1-43e5-835a-6e1c4ef85720\") " pod="openshift-network-node-identity/network-node-identity-48hk7"
Mar 12 20:49:01.727345 master-0 kubenswrapper[4038]: I0312 20:49:01.727332 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rjm8\" (UniqueName: \"kubernetes.io/projected/426efd5c-69e1-43e5-835a-6e1c4ef85720-kube-api-access-8rjm8\") pod \"network-node-identity-48hk7\" (UID: \"426efd5c-69e1-43e5-835a-6e1c4ef85720\") " pod="openshift-network-node-identity/network-node-identity-48hk7"
Mar 12 20:49:01.727575 master-0 kubenswrapper[4038]: I0312 20:49:01.727356 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/426efd5c-69e1-43e5-835a-6e1c4ef85720-ovnkube-identity-cm\") pod \"network-node-identity-48hk7\" (UID: \"426efd5c-69e1-43e5-835a-6e1c4ef85720\") " pod="openshift-network-node-identity/network-node-identity-48hk7"
Mar 12 20:49:01.828937 master-0 kubenswrapper[4038]: I0312 20:49:01.828774 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rjm8\" (UniqueName: \"kubernetes.io/projected/426efd5c-69e1-43e5-835a-6e1c4ef85720-kube-api-access-8rjm8\") pod \"network-node-identity-48hk7\" (UID: \"426efd5c-69e1-43e5-835a-6e1c4ef85720\") " pod="openshift-network-node-identity/network-node-identity-48hk7"
Mar 12 20:49:01.828937 master-0 kubenswrapper[4038]: I0312 20:49:01.828934 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/426efd5c-69e1-43e5-835a-6e1c4ef85720-ovnkube-identity-cm\") pod \"network-node-identity-48hk7\" (UID: \"426efd5c-69e1-43e5-835a-6e1c4ef85720\") " pod="openshift-network-node-identity/network-node-identity-48hk7"
Mar 12 20:49:01.829430 master-0 kubenswrapper[4038]: I0312 20:49:01.829007 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/426efd5c-69e1-43e5-835a-6e1c4ef85720-webhook-cert\") pod \"network-node-identity-48hk7\" (UID: \"426efd5c-69e1-43e5-835a-6e1c4ef85720\") " pod="openshift-network-node-identity/network-node-identity-48hk7"
Mar 12 20:49:01.829430 master-0 kubenswrapper[4038]: I0312 20:49:01.829065 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/426efd5c-69e1-43e5-835a-6e1c4ef85720-env-overrides\") pod \"network-node-identity-48hk7\" (UID: \"426efd5c-69e1-43e5-835a-6e1c4ef85720\") " pod="openshift-network-node-identity/network-node-identity-48hk7"
Mar 12 20:49:01.852861 master-0 kubenswrapper[4038]: I0312 20:49:01.850133 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/426efd5c-69e1-43e5-835a-6e1c4ef85720-env-overrides\") pod \"network-node-identity-48hk7\" (UID: \"426efd5c-69e1-43e5-835a-6e1c4ef85720\") " pod="openshift-network-node-identity/network-node-identity-48hk7"
Mar 12 20:49:01.865859 master-0 kubenswrapper[4038]: I0312 20:49:01.857587 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/426efd5c-69e1-43e5-835a-6e1c4ef85720-webhook-cert\") pod \"network-node-identity-48hk7\" (UID: \"426efd5c-69e1-43e5-835a-6e1c4ef85720\") " pod="openshift-network-node-identity/network-node-identity-48hk7"
Mar 12 20:49:01.865859 master-0 kubenswrapper[4038]: I0312 20:49:01.859391 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/426efd5c-69e1-43e5-835a-6e1c4ef85720-ovnkube-identity-cm\") pod \"network-node-identity-48hk7\" (UID: \"426efd5c-69e1-43e5-835a-6e1c4ef85720\") " pod="openshift-network-node-identity/network-node-identity-48hk7"
Mar 12 20:49:02.331905 master-0 kubenswrapper[4038]: I0312 20:49:02.331835 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rjm8\" (UniqueName: \"kubernetes.io/projected/426efd5c-69e1-43e5-835a-6e1c4ef85720-kube-api-access-8rjm8\") pod \"network-node-identity-48hk7\" (UID: \"426efd5c-69e1-43e5-835a-6e1c4ef85720\") " pod="openshift-network-node-identity/network-node-identity-48hk7"
Mar 12 20:49:02.533612 master-0 kubenswrapper[4038]: I0312 20:49:02.533428 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-48hk7"
Mar 12 20:49:02.879192 master-0 kubenswrapper[4038]: I0312 20:49:02.879111 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-brdcd"
Mar 12 20:49:02.879437 master-0 kubenswrapper[4038]: I0312 20:49:02.879183 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h26wj"
Mar 12 20:49:02.880057 master-0 kubenswrapper[4038]: E0312 20:49:02.879974 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-brdcd" podUID="c8660437-633f-4132-8a61-fe998abb493e"
Mar 12 20:49:02.880702 master-0 kubenswrapper[4038]: E0312 20:49:02.880436 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h26wj" podUID="5ad63582-bd60-41a1-9622-ee73ccf8a5e8"
Mar 12 20:49:03.160532 master-0 kubenswrapper[4038]: W0312 20:49:03.160470 4038 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod426efd5c_69e1_43e5_835a_6e1c4ef85720.slice/crio-40ee9bfc2fa73ad9bbc5b48cb8e7af6a3e5d2c39fc5036821437c7ea979f7a69 WatchSource:0}: Error finding container 40ee9bfc2fa73ad9bbc5b48cb8e7af6a3e5d2c39fc5036821437c7ea979f7a69: Status 404 returned error can't find the container with id 40ee9bfc2fa73ad9bbc5b48cb8e7af6a3e5d2c39fc5036821437c7ea979f7a69
Mar 12 20:49:03.198912 master-0 kubenswrapper[4038]: W0312 20:49:03.198641 4038 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Mar 12 20:49:03.200975 master-0 kubenswrapper[4038]: I0312 20:49:03.200893 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0-master-0"]
Mar 12 20:49:03.239515 master-0 kubenswrapper[4038]: I0312 20:49:03.239457 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-48hk7" event={"ID":"426efd5c-69e1-43e5-835a-6e1c4ef85720","Type":"ContainerStarted","Data":"40ee9bfc2fa73ad9bbc5b48cb8e7af6a3e5d2c39fc5036821437c7ea979f7a69"}
Mar 12 20:49:04.246364 master-0 kubenswrapper[4038]: I0312 20:49:04.245955 4038 generic.go:334] "Generic (PLEG): container finished" podID="a2545a80-0f00-4b19-ab3b-a9aa4bff98e8" containerID="ba582835d70280ab686cd92c06c36d3f8c1b51d4a50b6f6d872889ebb52af604" exitCode=0
Mar 12 20:49:04.246364 master-0 kubenswrapper[4038]: I0312 20:49:04.246130 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-trlxw" event={"ID":"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8","Type":"ContainerDied","Data":"ba582835d70280ab686cd92c06c36d3f8c1b51d4a50b6f6d872889ebb52af604"}
Mar 12 20:49:04.281257 master-0 kubenswrapper[4038]: I0312 20:49:04.281177 4038 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0-master-0" podStartSLOduration=1.281153437 podStartE2EDuration="1.281153437s" podCreationTimestamp="2026-03-12 20:49:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 20:49:04.261694383 +0000 UTC m=+82.297376246" watchObservedRunningTime="2026-03-12 20:49:04.281153437 +0000 UTC m=+82.316835300"
Mar 12 20:49:04.879400 master-0 kubenswrapper[4038]: I0312 20:49:04.879278 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-brdcd"
Mar 12 20:49:04.879602 master-0 kubenswrapper[4038]: E0312 20:49:04.879408 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-brdcd" podUID="c8660437-633f-4132-8a61-fe998abb493e"
Mar 12 20:49:04.880061 master-0 kubenswrapper[4038]: I0312 20:49:04.879983 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h26wj"
Mar 12 20:49:04.880258 master-0 kubenswrapper[4038]: E0312 20:49:04.880216 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h26wj" podUID="5ad63582-bd60-41a1-9622-ee73ccf8a5e8"
Mar 12 20:49:05.455480 master-0 kubenswrapper[4038]: I0312 20:49:05.455423 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csxwl\" (UniqueName: \"kubernetes.io/projected/5ad63582-bd60-41a1-9622-ee73ccf8a5e8-kube-api-access-csxwl\") pod \"network-check-target-h26wj\" (UID: \"5ad63582-bd60-41a1-9622-ee73ccf8a5e8\") " pod="openshift-network-diagnostics/network-check-target-h26wj"
Mar 12 20:49:05.456070 master-0 kubenswrapper[4038]: E0312 20:49:05.455747 4038 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 12 20:49:05.456070 master-0 kubenswrapper[4038]: E0312 20:49:05.455838 4038 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 12 20:49:05.456070 master-0 kubenswrapper[4038]: E0312 20:49:05.455859 4038 projected.go:194] Error preparing data for projected volume kube-api-access-csxwl for pod openshift-network-diagnostics/network-check-target-h26wj: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 20:49:05.456070 master-0 kubenswrapper[4038]: E0312 20:49:05.455955 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5ad63582-bd60-41a1-9622-ee73ccf8a5e8-kube-api-access-csxwl podName:5ad63582-bd60-41a1-9622-ee73ccf8a5e8 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:13.455930186 +0000 UTC m=+91.491612239 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-csxwl" (UniqueName: "kubernetes.io/projected/5ad63582-bd60-41a1-9622-ee73ccf8a5e8-kube-api-access-csxwl") pod "network-check-target-h26wj" (UID: "5ad63582-bd60-41a1-9622-ee73ccf8a5e8") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 20:49:06.879509 master-0 kubenswrapper[4038]: I0312 20:49:06.879473 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-brdcd"
Mar 12 20:49:06.880124 master-0 kubenswrapper[4038]: I0312 20:49:06.879535 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h26wj"
Mar 12 20:49:06.880124 master-0 kubenswrapper[4038]: E0312 20:49:06.879597 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-brdcd" podUID="c8660437-633f-4132-8a61-fe998abb493e" Mar 12 20:49:06.880124 master-0 kubenswrapper[4038]: E0312 20:49:06.879663 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h26wj" podUID="5ad63582-bd60-41a1-9622-ee73ccf8a5e8" Mar 12 20:49:08.881003 master-0 kubenswrapper[4038]: I0312 20:49:08.880942 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 20:49:08.881607 master-0 kubenswrapper[4038]: I0312 20:49:08.881009 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h26wj" Mar 12 20:49:08.881607 master-0 kubenswrapper[4038]: E0312 20:49:08.881084 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-brdcd" podUID="c8660437-633f-4132-8a61-fe998abb493e" Mar 12 20:49:08.881607 master-0 kubenswrapper[4038]: E0312 20:49:08.881155 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-h26wj" podUID="5ad63582-bd60-41a1-9622-ee73ccf8a5e8" Mar 12 20:49:10.880425 master-0 kubenswrapper[4038]: I0312 20:49:10.880007 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 20:49:10.880425 master-0 kubenswrapper[4038]: I0312 20:49:10.880086 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h26wj" Mar 12 20:49:10.880425 master-0 kubenswrapper[4038]: E0312 20:49:10.880141 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-brdcd" podUID="c8660437-633f-4132-8a61-fe998abb493e" Mar 12 20:49:10.880425 master-0 kubenswrapper[4038]: E0312 20:49:10.880329 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h26wj" podUID="5ad63582-bd60-41a1-9622-ee73ccf8a5e8" Mar 12 20:49:12.879856 master-0 kubenswrapper[4038]: I0312 20:49:12.879786 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 20:49:12.879856 master-0 kubenswrapper[4038]: I0312 20:49:12.879828 4038 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h26wj" Mar 12 20:49:12.880662 master-0 kubenswrapper[4038]: E0312 20:49:12.880599 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-brdcd" podUID="c8660437-633f-4132-8a61-fe998abb493e" Mar 12 20:49:12.880824 master-0 kubenswrapper[4038]: E0312 20:49:12.880751 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h26wj" podUID="5ad63582-bd60-41a1-9622-ee73ccf8a5e8" Mar 12 20:49:13.522088 master-0 kubenswrapper[4038]: I0312 20:49:13.522015 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csxwl\" (UniqueName: \"kubernetes.io/projected/5ad63582-bd60-41a1-9622-ee73ccf8a5e8-kube-api-access-csxwl\") pod \"network-check-target-h26wj\" (UID: \"5ad63582-bd60-41a1-9622-ee73ccf8a5e8\") " pod="openshift-network-diagnostics/network-check-target-h26wj" Mar 12 20:49:13.522480 master-0 kubenswrapper[4038]: E0312 20:49:13.522283 4038 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 20:49:13.522480 master-0 kubenswrapper[4038]: E0312 20:49:13.522341 4038 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 20:49:13.522480 
master-0 kubenswrapper[4038]: E0312 20:49:13.522360 4038 projected.go:194] Error preparing data for projected volume kube-api-access-csxwl for pod openshift-network-diagnostics/network-check-target-h26wj: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 20:49:13.522480 master-0 kubenswrapper[4038]: E0312 20:49:13.522445 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5ad63582-bd60-41a1-9622-ee73ccf8a5e8-kube-api-access-csxwl podName:5ad63582-bd60-41a1-9622-ee73ccf8a5e8 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:29.522421797 +0000 UTC m=+107.558103670 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-csxwl" (UniqueName: "kubernetes.io/projected/5ad63582-bd60-41a1-9622-ee73ccf8a5e8-kube-api-access-csxwl") pod "network-check-target-h26wj" (UID: "5ad63582-bd60-41a1-9622-ee73ccf8a5e8") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 20:49:14.281827 master-0 kubenswrapper[4038]: I0312 20:49:14.281718 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t" event={"ID":"d862a346-ec4d-46f6-a3e2-ea8759ea0111","Type":"ContainerStarted","Data":"36186e847a1c7ad015db1d456eab6f7fe52723f5ba9629a902598f1f75fcfbe7"} Mar 12 20:49:14.734686 master-0 kubenswrapper[4038]: I0312 20:49:14.734545 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs\") pod \"network-metrics-daemon-brdcd\" (UID: \"c8660437-633f-4132-8a61-fe998abb493e\") " pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 20:49:14.734890 master-0 kubenswrapper[4038]: E0312 20:49:14.734776 4038 
secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 20:49:14.734890 master-0 kubenswrapper[4038]: E0312 20:49:14.734877 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs podName:c8660437-633f-4132-8a61-fe998abb493e nodeName:}" failed. No retries permitted until 2026-03-12 20:49:46.734852802 +0000 UTC m=+124.770534705 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs") pod "network-metrics-daemon-brdcd" (UID: "c8660437-633f-4132-8a61-fe998abb493e") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 20:49:14.881055 master-0 kubenswrapper[4038]: I0312 20:49:14.879025 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 20:49:14.881055 master-0 kubenswrapper[4038]: I0312 20:49:14.879102 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h26wj" Mar 12 20:49:14.881055 master-0 kubenswrapper[4038]: E0312 20:49:14.879179 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-brdcd" podUID="c8660437-633f-4132-8a61-fe998abb493e" Mar 12 20:49:14.881055 master-0 kubenswrapper[4038]: E0312 20:49:14.879435 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h26wj" podUID="5ad63582-bd60-41a1-9622-ee73ccf8a5e8" Mar 12 20:49:15.292724 master-0 kubenswrapper[4038]: I0312 20:49:15.292606 4038 generic.go:334] "Generic (PLEG): container finished" podID="6e737121-cc77-4d22-a628-c4b4406b4698" containerID="dac8b17cbe97b35d7a29d4ccb6f92047733531100a194d1da38a8f4ef85e413b" exitCode=0 Mar 12 20:49:15.294234 master-0 kubenswrapper[4038]: I0312 20:49:15.292754 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr664" event={"ID":"6e737121-cc77-4d22-a628-c4b4406b4698","Type":"ContainerDied","Data":"dac8b17cbe97b35d7a29d4ccb6f92047733531100a194d1da38a8f4ef85e413b"} Mar 12 20:49:15.300932 master-0 kubenswrapper[4038]: I0312 20:49:15.300004 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-48hk7" event={"ID":"426efd5c-69e1-43e5-835a-6e1c4ef85720","Type":"ContainerStarted","Data":"28c691afcb8a45cb348e1216142781244b93a45eaf7cbab2716a18bf342b0dc8"} Mar 12 20:49:15.300932 master-0 kubenswrapper[4038]: I0312 20:49:15.300079 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-48hk7" event={"ID":"426efd5c-69e1-43e5-835a-6e1c4ef85720","Type":"ContainerStarted","Data":"43218fa1071d1a1eefbd9551ef5fb65042fae200cbd64eb4c8af31a81eddb011"} Mar 12 20:49:15.307078 master-0 kubenswrapper[4038]: I0312 20:49:15.307011 4038 generic.go:334] "Generic (PLEG): container 
finished" podID="a2545a80-0f00-4b19-ab3b-a9aa4bff98e8" containerID="583c873e3d835c6e05c94172cd7043791e47625e0cc941a8a498c15d7dcde4e3" exitCode=0 Mar 12 20:49:15.307982 master-0 kubenswrapper[4038]: I0312 20:49:15.307923 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-trlxw" event={"ID":"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8","Type":"ContainerDied","Data":"583c873e3d835c6e05c94172cd7043791e47625e0cc941a8a498c15d7dcde4e3"} Mar 12 20:49:15.335302 master-0 kubenswrapper[4038]: I0312 20:49:15.335197 4038 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t" podStartSLOduration=3.119366871 podStartE2EDuration="21.335159788s" podCreationTimestamp="2026-03-12 20:48:54 +0000 UTC" firstStartedPulling="2026-03-12 20:48:55.894277301 +0000 UTC m=+73.929959204" lastFinishedPulling="2026-03-12 20:49:14.110070248 +0000 UTC m=+92.145752121" observedRunningTime="2026-03-12 20:49:14.297768395 +0000 UTC m=+92.333450278" watchObservedRunningTime="2026-03-12 20:49:15.335159788 +0000 UTC m=+93.370841731" Mar 12 20:49:15.383956 master-0 kubenswrapper[4038]: I0312 20:49:15.383854 4038 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-node-identity/network-node-identity-48hk7" podStartSLOduration=4.314511876 podStartE2EDuration="15.383788347s" podCreationTimestamp="2026-03-12 20:49:00 +0000 UTC" firstStartedPulling="2026-03-12 20:49:03.163277562 +0000 UTC m=+81.198959435" lastFinishedPulling="2026-03-12 20:49:14.232554013 +0000 UTC m=+92.268235906" observedRunningTime="2026-03-12 20:49:15.380966547 +0000 UTC m=+93.416648420" watchObservedRunningTime="2026-03-12 20:49:15.383788347 +0000 UTC m=+93.419470250" Mar 12 20:49:15.891680 master-0 kubenswrapper[4038]: I0312 20:49:15.891250 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 12 20:49:16.316684 
master-0 kubenswrapper[4038]: I0312 20:49:16.316287 4038 generic.go:334] "Generic (PLEG): container finished" podID="a2545a80-0f00-4b19-ab3b-a9aa4bff98e8" containerID="dff388636097d32c6363bd0b2483f1d9c5210a858615e76eaa57853e4405a2b0" exitCode=0 Mar 12 20:49:16.317573 master-0 kubenswrapper[4038]: I0312 20:49:16.316417 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-trlxw" event={"ID":"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8","Type":"ContainerDied","Data":"dff388636097d32c6363bd0b2483f1d9c5210a858615e76eaa57853e4405a2b0"} Mar 12 20:49:16.325863 master-0 kubenswrapper[4038]: I0312 20:49:16.324707 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr664" event={"ID":"6e737121-cc77-4d22-a628-c4b4406b4698","Type":"ContainerStarted","Data":"2aefa02d6f1d888fb0e1aa1a8bfccfcf114c5beabfae6fe7e2ee4c4ec67bf356"} Mar 12 20:49:16.325863 master-0 kubenswrapper[4038]: I0312 20:49:16.324786 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr664" event={"ID":"6e737121-cc77-4d22-a628-c4b4406b4698","Type":"ContainerStarted","Data":"acb979e2256afe555848fc2ee4dd4e69847262838cf2b1c08de4815055c15354"} Mar 12 20:49:16.325863 master-0 kubenswrapper[4038]: I0312 20:49:16.324854 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr664" event={"ID":"6e737121-cc77-4d22-a628-c4b4406b4698","Type":"ContainerStarted","Data":"9bf4c1be6e8f5f27e1d4531600a9c494162c56f1b0672c2da1b5f56a6a6cf6ad"} Mar 12 20:49:16.325863 master-0 kubenswrapper[4038]: I0312 20:49:16.324884 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr664" event={"ID":"6e737121-cc77-4d22-a628-c4b4406b4698","Type":"ContainerStarted","Data":"a536e8daf88d265ac0ec5bbcc1a6ff6b976024a604c309d4be054db9174908ac"} Mar 12 20:49:16.325863 master-0 kubenswrapper[4038]: I0312 20:49:16.324909 4038 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr664" event={"ID":"6e737121-cc77-4d22-a628-c4b4406b4698","Type":"ContainerStarted","Data":"1562de155ef5a2e995aecf9a9209bd714641f5a0f3a09d3f2800777f8017879b"} Mar 12 20:49:16.325863 master-0 kubenswrapper[4038]: I0312 20:49:16.324936 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr664" event={"ID":"6e737121-cc77-4d22-a628-c4b4406b4698","Type":"ContainerStarted","Data":"86adcd43c30b4435a91475c9b147e965e140a3e9f4924dda26a745f67682a165"} Mar 12 20:49:16.336881 master-0 kubenswrapper[4038]: I0312 20:49:16.336625 4038 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-scheduler-master-0" podStartSLOduration=1.336590857 podStartE2EDuration="1.336590857s" podCreationTimestamp="2026-03-12 20:49:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 20:49:16.334124195 +0000 UTC m=+94.369806108" watchObservedRunningTime="2026-03-12 20:49:16.336590857 +0000 UTC m=+94.372272760" Mar 12 20:49:16.880103 master-0 kubenswrapper[4038]: I0312 20:49:16.879909 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 20:49:16.880103 master-0 kubenswrapper[4038]: I0312 20:49:16.880072 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h26wj" Mar 12 20:49:16.880477 master-0 kubenswrapper[4038]: E0312 20:49:16.880135 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-brdcd" podUID="c8660437-633f-4132-8a61-fe998abb493e" Mar 12 20:49:16.880477 master-0 kubenswrapper[4038]: E0312 20:49:16.880347 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h26wj" podUID="5ad63582-bd60-41a1-9622-ee73ccf8a5e8" Mar 12 20:49:17.334274 master-0 kubenswrapper[4038]: I0312 20:49:17.334180 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-trlxw" event={"ID":"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8","Type":"ContainerStarted","Data":"f0e36278a700ebeae122129192b56b3c4d74e59816983dc54ba4b89ed2e40aa5"} Mar 12 20:49:18.353574 master-0 kubenswrapper[4038]: I0312 20:49:18.353492 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr664" event={"ID":"6e737121-cc77-4d22-a628-c4b4406b4698","Type":"ContainerStarted","Data":"ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a"} Mar 12 20:49:18.880055 master-0 kubenswrapper[4038]: I0312 20:49:18.879966 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 20:49:18.880353 master-0 kubenswrapper[4038]: I0312 20:49:18.880149 4038 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h26wj" Mar 12 20:49:18.880353 master-0 kubenswrapper[4038]: E0312 20:49:18.880331 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-brdcd" podUID="c8660437-633f-4132-8a61-fe998abb493e" Mar 12 20:49:18.880594 master-0 kubenswrapper[4038]: E0312 20:49:18.880513 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h26wj" podUID="5ad63582-bd60-41a1-9622-ee73ccf8a5e8" Mar 12 20:49:20.112411 master-0 kubenswrapper[4038]: I0312 20:49:20.112257 4038 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-trlxw" podStartSLOduration=6.504164876 podStartE2EDuration="38.112228592s" podCreationTimestamp="2026-03-12 20:48:42 +0000 UTC" firstStartedPulling="2026-03-12 20:48:42.469703709 +0000 UTC m=+60.505385602" lastFinishedPulling="2026-03-12 20:49:14.077767445 +0000 UTC m=+92.113449318" observedRunningTime="2026-03-12 20:49:17.362659229 +0000 UTC m=+95.398341092" watchObservedRunningTime="2026-03-12 20:49:20.112228592 +0000 UTC m=+98.147910505" Mar 12 20:49:20.113867 master-0 kubenswrapper[4038]: I0312 20:49:20.113766 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 12 20:49:20.217096 master-0 kubenswrapper[4038]: I0312 20:49:20.216992 4038 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wr664"] Mar 12 20:49:20.879961 master-0 kubenswrapper[4038]: I0312 20:49:20.879641 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h26wj" Mar 12 20:49:20.879961 master-0 kubenswrapper[4038]: I0312 20:49:20.879649 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 20:49:20.879961 master-0 kubenswrapper[4038]: E0312 20:49:20.879899 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h26wj" podUID="5ad63582-bd60-41a1-9622-ee73ccf8a5e8" Mar 12 20:49:20.880444 master-0 kubenswrapper[4038]: E0312 20:49:20.879977 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-brdcd" podUID="c8660437-633f-4132-8a61-fe998abb493e" Mar 12 20:49:21.373993 master-0 kubenswrapper[4038]: I0312 20:49:21.373903 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr664" event={"ID":"6e737121-cc77-4d22-a628-c4b4406b4698","Type":"ContainerStarted","Data":"623dee5affa936815e3efce0b24d982c00c136f7a3789510c1cd235b6282edcf"} Mar 12 20:49:21.375289 master-0 kubenswrapper[4038]: I0312 20:49:21.374246 4038 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wr664" podUID="6e737121-cc77-4d22-a628-c4b4406b4698" containerName="sbdb" containerID="cri-o://ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a" gracePeriod=30 Mar 12 20:49:21.375289 master-0 kubenswrapper[4038]: I0312 20:49:21.374245 4038 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wr664" podUID="6e737121-cc77-4d22-a628-c4b4406b4698" containerName="ovn-controller" containerID="cri-o://86adcd43c30b4435a91475c9b147e965e140a3e9f4924dda26a745f67682a165" gracePeriod=30 Mar 12 20:49:21.375289 master-0 kubenswrapper[4038]: I0312 20:49:21.374279 4038 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wr664" podUID="6e737121-cc77-4d22-a628-c4b4406b4698" containerName="nbdb" containerID="cri-o://2aefa02d6f1d888fb0e1aa1a8bfccfcf114c5beabfae6fe7e2ee4c4ec67bf356" gracePeriod=30 Mar 12 20:49:21.375289 master-0 kubenswrapper[4038]: I0312 20:49:21.374387 4038 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wr664" podUID="6e737121-cc77-4d22-a628-c4b4406b4698" containerName="kube-rbac-proxy-node" containerID="cri-o://a536e8daf88d265ac0ec5bbcc1a6ff6b976024a604c309d4be054db9174908ac" gracePeriod=30 Mar 12 20:49:21.375289 master-0 kubenswrapper[4038]: I0312 
20:49:21.374397 4038 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wr664" podUID="6e737121-cc77-4d22-a628-c4b4406b4698" containerName="northd" containerID="cri-o://acb979e2256afe555848fc2ee4dd4e69847262838cf2b1c08de4815055c15354" gracePeriod=30 Mar 12 20:49:21.375289 master-0 kubenswrapper[4038]: I0312 20:49:21.374494 4038 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wr664" podUID="6e737121-cc77-4d22-a628-c4b4406b4698" containerName="ovn-acl-logging" containerID="cri-o://1562de155ef5a2e995aecf9a9209bd714641f5a0f3a09d3f2800777f8017879b" gracePeriod=30 Mar 12 20:49:21.375289 master-0 kubenswrapper[4038]: I0312 20:49:21.374520 4038 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:49:21.375289 master-0 kubenswrapper[4038]: I0312 20:49:21.374599 4038 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:49:21.375289 master-0 kubenswrapper[4038]: I0312 20:49:21.374616 4038 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wr664" podUID="6e737121-cc77-4d22-a628-c4b4406b4698" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://9bf4c1be6e8f5f27e1d4531600a9c494162c56f1b0672c2da1b5f56a6a6cf6ad" gracePeriod=30 Mar 12 20:49:21.375289 master-0 kubenswrapper[4038]: I0312 20:49:21.374662 4038 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:49:21.395046 master-0 kubenswrapper[4038]: E0312 20:49:21.394762 4038 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Mar 12 20:49:21.403365 master-0 kubenswrapper[4038]: E0312 20:49:21.397477 4038 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Mar 12 20:49:21.406961 master-0 kubenswrapper[4038]: E0312 20:49:21.406691 4038 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Mar 12 20:49:21.406961 master-0 kubenswrapper[4038]: E0312 20:49:21.406887 4038 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-wr664" podUID="6e737121-cc77-4d22-a628-c4b4406b4698" containerName="sbdb" Mar 12 20:49:21.412573 master-0 kubenswrapper[4038]: I0312 20:49:21.412474 4038 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-wr664" podStartSLOduration=8.959003373 podStartE2EDuration="27.41244925s" podCreationTimestamp="2026-03-12 20:48:54 +0000 UTC" firstStartedPulling="2026-03-12 20:48:55.708694246 +0000 UTC m=+73.744376109" lastFinishedPulling="2026-03-12 20:49:14.162140123 +0000 UTC m=+92.197821986" observedRunningTime="2026-03-12 20:49:21.41204219 +0000 UTC m=+99.447724113" watchObservedRunningTime="2026-03-12 20:49:21.41244925 +0000 UTC m=+99.448131133" Mar 12 20:49:21.418217 master-0 kubenswrapper[4038]: I0312 20:49:21.418127 4038 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:49:21.418349 master-0 kubenswrapper[4038]: I0312 20:49:21.418250 4038 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wr664" podUID="6e737121-cc77-4d22-a628-c4b4406b4698" containerName="ovnkube-controller" containerID="cri-o://623dee5affa936815e3efce0b24d982c00c136f7a3789510c1cd235b6282edcf" gracePeriod=30 Mar 12 20:49:21.478569 master-0 kubenswrapper[4038]: I0312 20:49:21.478340 4038 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podStartSLOduration=2.478306887 podStartE2EDuration="2.478306887s" podCreationTimestamp="2026-03-12 20:49:19 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 20:49:21.43618271 +0000 UTC m=+99.471864643" watchObservedRunningTime="2026-03-12 20:49:21.478306887 +0000 UTC m=+99.513988800" Mar 12 20:49:22.175654 master-0 kubenswrapper[4038]: I0312 20:49:22.175293 4038 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wr664_6e737121-cc77-4d22-a628-c4b4406b4698/ovnkube-controller/0.log" Mar 12 20:49:22.177630 master-0 kubenswrapper[4038]: I0312 20:49:22.177599 4038 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wr664_6e737121-cc77-4d22-a628-c4b4406b4698/kube-rbac-proxy-ovn-metrics/0.log" Mar 12 20:49:22.178497 master-0 kubenswrapper[4038]: I0312 20:49:22.178465 4038 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wr664_6e737121-cc77-4d22-a628-c4b4406b4698/kube-rbac-proxy-node/0.log" Mar 12 20:49:22.179392 master-0 kubenswrapper[4038]: I0312 20:49:22.179331 4038 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wr664_6e737121-cc77-4d22-a628-c4b4406b4698/ovn-acl-logging/0.log" Mar 12 20:49:22.180384 master-0 kubenswrapper[4038]: I0312 20:49:22.180344 4038 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wr664_6e737121-cc77-4d22-a628-c4b4406b4698/ovn-controller/0.log" Mar 12 20:49:22.180928 master-0 kubenswrapper[4038]: I0312 20:49:22.180901 4038 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wr664" Mar 12 20:49:22.235025 master-0 kubenswrapper[4038]: I0312 20:49:22.234936 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nhrpd"] Mar 12 20:49:22.235374 master-0 kubenswrapper[4038]: E0312 20:49:22.235092 4038 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e737121-cc77-4d22-a628-c4b4406b4698" containerName="kube-rbac-proxy-node" Mar 12 20:49:22.235374 master-0 kubenswrapper[4038]: I0312 20:49:22.235114 4038 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e737121-cc77-4d22-a628-c4b4406b4698" containerName="kube-rbac-proxy-node" Mar 12 20:49:22.235374 master-0 kubenswrapper[4038]: E0312 20:49:22.235135 4038 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e737121-cc77-4d22-a628-c4b4406b4698" containerName="northd" Mar 12 20:49:22.235374 master-0 kubenswrapper[4038]: I0312 20:49:22.235151 4038 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e737121-cc77-4d22-a628-c4b4406b4698" containerName="northd" Mar 12 20:49:22.235374 master-0 kubenswrapper[4038]: E0312 20:49:22.235170 4038 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e737121-cc77-4d22-a628-c4b4406b4698" containerName="ovn-controller" Mar 12 20:49:22.235374 master-0 kubenswrapper[4038]: I0312 20:49:22.235187 4038 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e737121-cc77-4d22-a628-c4b4406b4698" containerName="ovn-controller" Mar 12 20:49:22.235374 master-0 kubenswrapper[4038]: E0312 20:49:22.235206 4038 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e737121-cc77-4d22-a628-c4b4406b4698" containerName="kubecfg-setup" Mar 12 20:49:22.235374 master-0 kubenswrapper[4038]: I0312 20:49:22.235223 4038 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e737121-cc77-4d22-a628-c4b4406b4698" containerName="kubecfg-setup" Mar 12 20:49:22.235374 master-0 kubenswrapper[4038]: E0312 20:49:22.235240 
4038 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e737121-cc77-4d22-a628-c4b4406b4698" containerName="ovnkube-controller" Mar 12 20:49:22.235374 master-0 kubenswrapper[4038]: I0312 20:49:22.235254 4038 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e737121-cc77-4d22-a628-c4b4406b4698" containerName="ovnkube-controller" Mar 12 20:49:22.235374 master-0 kubenswrapper[4038]: E0312 20:49:22.235272 4038 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e737121-cc77-4d22-a628-c4b4406b4698" containerName="ovn-acl-logging" Mar 12 20:49:22.235374 master-0 kubenswrapper[4038]: I0312 20:49:22.235287 4038 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e737121-cc77-4d22-a628-c4b4406b4698" containerName="ovn-acl-logging" Mar 12 20:49:22.235374 master-0 kubenswrapper[4038]: E0312 20:49:22.235306 4038 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e737121-cc77-4d22-a628-c4b4406b4698" containerName="nbdb" Mar 12 20:49:22.235374 master-0 kubenswrapper[4038]: I0312 20:49:22.235321 4038 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e737121-cc77-4d22-a628-c4b4406b4698" containerName="nbdb" Mar 12 20:49:22.235374 master-0 kubenswrapper[4038]: E0312 20:49:22.235335 4038 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e737121-cc77-4d22-a628-c4b4406b4698" containerName="kube-rbac-proxy-ovn-metrics" Mar 12 20:49:22.235374 master-0 kubenswrapper[4038]: I0312 20:49:22.235347 4038 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e737121-cc77-4d22-a628-c4b4406b4698" containerName="kube-rbac-proxy-ovn-metrics" Mar 12 20:49:22.235374 master-0 kubenswrapper[4038]: E0312 20:49:22.235360 4038 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e737121-cc77-4d22-a628-c4b4406b4698" containerName="sbdb" Mar 12 20:49:22.235374 master-0 kubenswrapper[4038]: I0312 20:49:22.235371 4038 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e737121-cc77-4d22-a628-c4b4406b4698" 
containerName="sbdb" Mar 12 20:49:22.236706 master-0 kubenswrapper[4038]: I0312 20:49:22.235436 4038 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e737121-cc77-4d22-a628-c4b4406b4698" containerName="ovnkube-controller" Mar 12 20:49:22.236706 master-0 kubenswrapper[4038]: I0312 20:49:22.235455 4038 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e737121-cc77-4d22-a628-c4b4406b4698" containerName="ovn-acl-logging" Mar 12 20:49:22.236706 master-0 kubenswrapper[4038]: I0312 20:49:22.235470 4038 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e737121-cc77-4d22-a628-c4b4406b4698" containerName="kube-rbac-proxy-ovn-metrics" Mar 12 20:49:22.236706 master-0 kubenswrapper[4038]: I0312 20:49:22.235483 4038 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e737121-cc77-4d22-a628-c4b4406b4698" containerName="ovn-controller" Mar 12 20:49:22.236706 master-0 kubenswrapper[4038]: I0312 20:49:22.235496 4038 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e737121-cc77-4d22-a628-c4b4406b4698" containerName="sbdb" Mar 12 20:49:22.236706 master-0 kubenswrapper[4038]: I0312 20:49:22.235508 4038 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e737121-cc77-4d22-a628-c4b4406b4698" containerName="kube-rbac-proxy-node" Mar 12 20:49:22.236706 master-0 kubenswrapper[4038]: I0312 20:49:22.235519 4038 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e737121-cc77-4d22-a628-c4b4406b4698" containerName="northd" Mar 12 20:49:22.236706 master-0 kubenswrapper[4038]: I0312 20:49:22.235534 4038 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e737121-cc77-4d22-a628-c4b4406b4698" containerName="nbdb" Mar 12 20:49:22.236706 master-0 kubenswrapper[4038]: I0312 20:49:22.236622 4038 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:22.310342 master-0 kubenswrapper[4038]: I0312 20:49:22.310200 4038 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6e737121-cc77-4d22-a628-c4b4406b4698-ovnkube-script-lib\") pod \"6e737121-cc77-4d22-a628-c4b4406b4698\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " Mar 12 20:49:22.310342 master-0 kubenswrapper[4038]: I0312 20:49:22.310272 4038 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-run-openvswitch\") pod \"6e737121-cc77-4d22-a628-c4b4406b4698\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " Mar 12 20:49:22.310342 master-0 kubenswrapper[4038]: I0312 20:49:22.310317 4038 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq9zx\" (UniqueName: \"kubernetes.io/projected/6e737121-cc77-4d22-a628-c4b4406b4698-kube-api-access-zq9zx\") pod \"6e737121-cc77-4d22-a628-c4b4406b4698\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " Mar 12 20:49:22.310583 master-0 kubenswrapper[4038]: I0312 20:49:22.310354 4038 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-run-ovn\") pod \"6e737121-cc77-4d22-a628-c4b4406b4698\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " Mar 12 20:49:22.310583 master-0 kubenswrapper[4038]: I0312 20:49:22.310435 4038 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-cni-netd\") pod \"6e737121-cc77-4d22-a628-c4b4406b4698\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " Mar 12 20:49:22.310583 master-0 kubenswrapper[4038]: I0312 20:49:22.310468 4038 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "6e737121-cc77-4d22-a628-c4b4406b4698" (UID: "6e737121-cc77-4d22-a628-c4b4406b4698"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 20:49:22.310671 master-0 kubenswrapper[4038]: I0312 20:49:22.310578 4038 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "6e737121-cc77-4d22-a628-c4b4406b4698" (UID: "6e737121-cc77-4d22-a628-c4b4406b4698"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 20:49:22.310705 master-0 kubenswrapper[4038]: I0312 20:49:22.310679 4038 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6e737121-cc77-4d22-a628-c4b4406b4698-ovn-node-metrics-cert\") pod \"6e737121-cc77-4d22-a628-c4b4406b4698\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " Mar 12 20:49:22.310769 master-0 kubenswrapper[4038]: I0312 20:49:22.310709 4038 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "6e737121-cc77-4d22-a628-c4b4406b4698" (UID: "6e737121-cc77-4d22-a628-c4b4406b4698"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 20:49:22.311530 master-0 kubenswrapper[4038]: I0312 20:49:22.311490 4038 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-run-ovn-kubernetes\") pod \"6e737121-cc77-4d22-a628-c4b4406b4698\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " Mar 12 20:49:22.311660 master-0 kubenswrapper[4038]: I0312 20:49:22.311621 4038 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "6e737121-cc77-4d22-a628-c4b4406b4698" (UID: "6e737121-cc77-4d22-a628-c4b4406b4698"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 20:49:22.311707 master-0 kubenswrapper[4038]: I0312 20:49:22.311635 4038 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6e737121-cc77-4d22-a628-c4b4406b4698-ovnkube-config\") pod \"6e737121-cc77-4d22-a628-c4b4406b4698\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " Mar 12 20:49:22.311746 master-0 kubenswrapper[4038]: I0312 20:49:22.311731 4038 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-kubelet\") pod \"6e737121-cc77-4d22-a628-c4b4406b4698\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " Mar 12 20:49:22.311834 master-0 kubenswrapper[4038]: I0312 20:49:22.311777 4038 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-var-lib-cni-networks-ovn-kubernetes\") pod 
\"6e737121-cc77-4d22-a628-c4b4406b4698\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " Mar 12 20:49:22.311882 master-0 kubenswrapper[4038]: I0312 20:49:22.311841 4038 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "6e737121-cc77-4d22-a628-c4b4406b4698" (UID: "6e737121-cc77-4d22-a628-c4b4406b4698"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 20:49:22.311882 master-0 kubenswrapper[4038]: I0312 20:49:22.311846 4038 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-node-log\") pod \"6e737121-cc77-4d22-a628-c4b4406b4698\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " Mar 12 20:49:22.311959 master-0 kubenswrapper[4038]: I0312 20:49:22.311882 4038 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-node-log" (OuterVolumeSpecName: "node-log") pod "6e737121-cc77-4d22-a628-c4b4406b4698" (UID: "6e737121-cc77-4d22-a628-c4b4406b4698"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 20:49:22.311959 master-0 kubenswrapper[4038]: I0312 20:49:22.311924 4038 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-run-netns\") pod \"6e737121-cc77-4d22-a628-c4b4406b4698\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " Mar 12 20:49:22.311959 master-0 kubenswrapper[4038]: I0312 20:49:22.311956 4038 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-run-systemd\") pod \"6e737121-cc77-4d22-a628-c4b4406b4698\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " Mar 12 20:49:22.312055 master-0 kubenswrapper[4038]: I0312 20:49:22.311941 4038 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e737121-cc77-4d22-a628-c4b4406b4698-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6e737121-cc77-4d22-a628-c4b4406b4698" (UID: "6e737121-cc77-4d22-a628-c4b4406b4698"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 20:49:22.312055 master-0 kubenswrapper[4038]: I0312 20:49:22.311980 4038 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-log-socket\") pod \"6e737121-cc77-4d22-a628-c4b4406b4698\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " Mar 12 20:49:22.312055 master-0 kubenswrapper[4038]: I0312 20:49:22.312000 4038 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-cni-bin\") pod \"6e737121-cc77-4d22-a628-c4b4406b4698\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " Mar 12 20:49:22.312055 master-0 kubenswrapper[4038]: I0312 20:49:22.312002 4038 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "6e737121-cc77-4d22-a628-c4b4406b4698" (UID: "6e737121-cc77-4d22-a628-c4b4406b4698"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 20:49:22.312055 master-0 kubenswrapper[4038]: I0312 20:49:22.312021 4038 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-etc-openvswitch\") pod \"6e737121-cc77-4d22-a628-c4b4406b4698\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " Mar 12 20:49:22.312055 master-0 kubenswrapper[4038]: I0312 20:49:22.312050 4038 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-slash\") pod \"6e737121-cc77-4d22-a628-c4b4406b4698\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " Mar 12 20:49:22.312245 master-0 kubenswrapper[4038]: I0312 20:49:22.311934 4038 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "6e737121-cc77-4d22-a628-c4b4406b4698" (UID: "6e737121-cc77-4d22-a628-c4b4406b4698"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 20:49:22.312245 master-0 kubenswrapper[4038]: I0312 20:49:22.312056 4038 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "6e737121-cc77-4d22-a628-c4b4406b4698" (UID: "6e737121-cc77-4d22-a628-c4b4406b4698"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 20:49:22.312245 master-0 kubenswrapper[4038]: I0312 20:49:22.312098 4038 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-slash" (OuterVolumeSpecName: "host-slash") pod "6e737121-cc77-4d22-a628-c4b4406b4698" (UID: "6e737121-cc77-4d22-a628-c4b4406b4698"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 20:49:22.312245 master-0 kubenswrapper[4038]: I0312 20:49:22.312059 4038 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "6e737121-cc77-4d22-a628-c4b4406b4698" (UID: "6e737121-cc77-4d22-a628-c4b4406b4698"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 20:49:22.312245 master-0 kubenswrapper[4038]: I0312 20:49:22.312081 4038 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-log-socket" (OuterVolumeSpecName: "log-socket") pod "6e737121-cc77-4d22-a628-c4b4406b4698" (UID: "6e737121-cc77-4d22-a628-c4b4406b4698"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 20:49:22.312245 master-0 kubenswrapper[4038]: I0312 20:49:22.312069 4038 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-systemd-units\") pod \"6e737121-cc77-4d22-a628-c4b4406b4698\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " Mar 12 20:49:22.312245 master-0 kubenswrapper[4038]: I0312 20:49:22.312160 4038 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "6e737121-cc77-4d22-a628-c4b4406b4698" (UID: "6e737121-cc77-4d22-a628-c4b4406b4698"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 20:49:22.312245 master-0 kubenswrapper[4038]: I0312 20:49:22.312227 4038 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-var-lib-openvswitch\") pod \"6e737121-cc77-4d22-a628-c4b4406b4698\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " Mar 12 20:49:22.312506 master-0 kubenswrapper[4038]: I0312 20:49:22.312263 4038 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6e737121-cc77-4d22-a628-c4b4406b4698-env-overrides\") pod \"6e737121-cc77-4d22-a628-c4b4406b4698\" (UID: \"6e737121-cc77-4d22-a628-c4b4406b4698\") " Mar 12 20:49:22.312506 master-0 kubenswrapper[4038]: I0312 20:49:22.312263 4038 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "6e737121-cc77-4d22-a628-c4b4406b4698" (UID: "6e737121-cc77-4d22-a628-c4b4406b4698"). 
InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 20:49:22.312506 master-0 kubenswrapper[4038]: I0312 20:49:22.312452 4038 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-log-socket\") on node \"master-0\" DevicePath \"\"" Mar 12 20:49:22.312506 master-0 kubenswrapper[4038]: I0312 20:49:22.312473 4038 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-cni-bin\") on node \"master-0\" DevicePath \"\"" Mar 12 20:49:22.312506 master-0 kubenswrapper[4038]: I0312 20:49:22.312491 4038 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-etc-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 12 20:49:22.312506 master-0 kubenswrapper[4038]: I0312 20:49:22.312508 4038 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-slash\") on node \"master-0\" DevicePath \"\"" Mar 12 20:49:22.312695 master-0 kubenswrapper[4038]: I0312 20:49:22.312525 4038 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-systemd-units\") on node \"master-0\" DevicePath \"\"" Mar 12 20:49:22.312695 master-0 kubenswrapper[4038]: I0312 20:49:22.312533 4038 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e737121-cc77-4d22-a628-c4b4406b4698-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6e737121-cc77-4d22-a628-c4b4406b4698" (UID: "6e737121-cc77-4d22-a628-c4b4406b4698"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 20:49:22.312695 master-0 kubenswrapper[4038]: I0312 20:49:22.312543 4038 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-var-lib-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 12 20:49:22.312885 master-0 kubenswrapper[4038]: I0312 20:49:22.312845 4038 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e737121-cc77-4d22-a628-c4b4406b4698-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6e737121-cc77-4d22-a628-c4b4406b4698" (UID: "6e737121-cc77-4d22-a628-c4b4406b4698"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 20:49:22.312981 master-0 kubenswrapper[4038]: I0312 20:49:22.312946 4038 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-run-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 12 20:49:22.313026 master-0 kubenswrapper[4038]: I0312 20:49:22.312991 4038 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-run-ovn\") on node \"master-0\" DevicePath \"\"" Mar 12 20:49:22.313026 master-0 kubenswrapper[4038]: I0312 20:49:22.313017 4038 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6e737121-cc77-4d22-a628-c4b4406b4698-ovnkube-script-lib\") on node \"master-0\" DevicePath \"\"" Mar 12 20:49:22.313094 master-0 kubenswrapper[4038]: I0312 20:49:22.313037 4038 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-run-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Mar 12 20:49:22.313094 master-0 
kubenswrapper[4038]: I0312 20:49:22.313055 4038 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-cni-netd\") on node \"master-0\" DevicePath \"\"" Mar 12 20:49:22.313094 master-0 kubenswrapper[4038]: I0312 20:49:22.313073 4038 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-kubelet\") on node \"master-0\" DevicePath \"\"" Mar 12 20:49:22.313094 master-0 kubenswrapper[4038]: I0312 20:49:22.313091 4038 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-var-lib-cni-networks-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Mar 12 20:49:22.313211 master-0 kubenswrapper[4038]: I0312 20:49:22.313109 4038 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-node-log\") on node \"master-0\" DevicePath \"\"" Mar 12 20:49:22.313211 master-0 kubenswrapper[4038]: I0312 20:49:22.313129 4038 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-host-run-netns\") on node \"master-0\" DevicePath \"\"" Mar 12 20:49:22.316211 master-0 kubenswrapper[4038]: I0312 20:49:22.316155 4038 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e737121-cc77-4d22-a628-c4b4406b4698-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6e737121-cc77-4d22-a628-c4b4406b4698" (UID: "6e737121-cc77-4d22-a628-c4b4406b4698"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 20:49:22.316612 master-0 kubenswrapper[4038]: I0312 20:49:22.316560 4038 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e737121-cc77-4d22-a628-c4b4406b4698-kube-api-access-zq9zx" (OuterVolumeSpecName: "kube-api-access-zq9zx") pod "6e737121-cc77-4d22-a628-c4b4406b4698" (UID: "6e737121-cc77-4d22-a628-c4b4406b4698"). InnerVolumeSpecName "kube-api-access-zq9zx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 20:49:22.318246 master-0 kubenswrapper[4038]: I0312 20:49:22.318205 4038 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "6e737121-cc77-4d22-a628-c4b4406b4698" (UID: "6e737121-cc77-4d22-a628-c4b4406b4698"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 20:49:22.381151 master-0 kubenswrapper[4038]: I0312 20:49:22.381089 4038 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wr664_6e737121-cc77-4d22-a628-c4b4406b4698/ovnkube-controller/0.log" Mar 12 20:49:22.385081 master-0 kubenswrapper[4038]: I0312 20:49:22.383876 4038 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wr664_6e737121-cc77-4d22-a628-c4b4406b4698/kube-rbac-proxy-ovn-metrics/0.log" Mar 12 20:49:22.385081 master-0 kubenswrapper[4038]: I0312 20:49:22.384632 4038 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wr664_6e737121-cc77-4d22-a628-c4b4406b4698/kube-rbac-proxy-node/0.log" Mar 12 20:49:22.385351 master-0 kubenswrapper[4038]: I0312 20:49:22.385317 4038 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wr664_6e737121-cc77-4d22-a628-c4b4406b4698/ovn-acl-logging/0.log" Mar 12 20:49:22.385889 master-0 
kubenswrapper[4038]: I0312 20:49:22.385861 4038 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wr664_6e737121-cc77-4d22-a628-c4b4406b4698/ovn-controller/0.log"
Mar 12 20:49:22.386710 master-0 kubenswrapper[4038]: I0312 20:49:22.386658 4038 generic.go:334] "Generic (PLEG): container finished" podID="6e737121-cc77-4d22-a628-c4b4406b4698" containerID="623dee5affa936815e3efce0b24d982c00c136f7a3789510c1cd235b6282edcf" exitCode=1
Mar 12 20:49:22.386710 master-0 kubenswrapper[4038]: I0312 20:49:22.386705 4038 generic.go:334] "Generic (PLEG): container finished" podID="6e737121-cc77-4d22-a628-c4b4406b4698" containerID="ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a" exitCode=0
Mar 12 20:49:22.386783 master-0 kubenswrapper[4038]: I0312 20:49:22.386724 4038 generic.go:334] "Generic (PLEG): container finished" podID="6e737121-cc77-4d22-a628-c4b4406b4698" containerID="2aefa02d6f1d888fb0e1aa1a8bfccfcf114c5beabfae6fe7e2ee4c4ec67bf356" exitCode=0
Mar 12 20:49:22.386783 master-0 kubenswrapper[4038]: I0312 20:49:22.386744 4038 generic.go:334] "Generic (PLEG): container finished" podID="6e737121-cc77-4d22-a628-c4b4406b4698" containerID="acb979e2256afe555848fc2ee4dd4e69847262838cf2b1c08de4815055c15354" exitCode=0
Mar 12 20:49:22.386783 master-0 kubenswrapper[4038]: I0312 20:49:22.386759 4038 generic.go:334] "Generic (PLEG): container finished" podID="6e737121-cc77-4d22-a628-c4b4406b4698" containerID="9bf4c1be6e8f5f27e1d4531600a9c494162c56f1b0672c2da1b5f56a6a6cf6ad" exitCode=143
Mar 12 20:49:22.386783 master-0 kubenswrapper[4038]: I0312 20:49:22.386772 4038 generic.go:334] "Generic (PLEG): container finished" podID="6e737121-cc77-4d22-a628-c4b4406b4698" containerID="a536e8daf88d265ac0ec5bbcc1a6ff6b976024a604c309d4be054db9174908ac" exitCode=143
Mar 12 20:49:22.386948 master-0 kubenswrapper[4038]: I0312 20:49:22.386786 4038 generic.go:334] "Generic (PLEG): container finished" podID="6e737121-cc77-4d22-a628-c4b4406b4698" containerID="1562de155ef5a2e995aecf9a9209bd714641f5a0f3a09d3f2800777f8017879b" exitCode=143
Mar 12 20:49:22.386948 master-0 kubenswrapper[4038]: I0312 20:49:22.386799 4038 generic.go:334] "Generic (PLEG): container finished" podID="6e737121-cc77-4d22-a628-c4b4406b4698" containerID="86adcd43c30b4435a91475c9b147e965e140a3e9f4924dda26a745f67682a165" exitCode=143
Mar 12 20:49:22.386948 master-0 kubenswrapper[4038]: I0312 20:49:22.386828 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr664" event={"ID":"6e737121-cc77-4d22-a628-c4b4406b4698","Type":"ContainerDied","Data":"623dee5affa936815e3efce0b24d982c00c136f7a3789510c1cd235b6282edcf"}
Mar 12 20:49:22.386948 master-0 kubenswrapper[4038]: I0312 20:49:22.386880 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr664" event={"ID":"6e737121-cc77-4d22-a628-c4b4406b4698","Type":"ContainerDied","Data":"ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a"}
Mar 12 20:49:22.386948 master-0 kubenswrapper[4038]: I0312 20:49:22.386898 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr664" event={"ID":"6e737121-cc77-4d22-a628-c4b4406b4698","Type":"ContainerDied","Data":"2aefa02d6f1d888fb0e1aa1a8bfccfcf114c5beabfae6fe7e2ee4c4ec67bf356"}
Mar 12 20:49:22.386948 master-0 kubenswrapper[4038]: I0312 20:49:22.386909 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr664" event={"ID":"6e737121-cc77-4d22-a628-c4b4406b4698","Type":"ContainerDied","Data":"acb979e2256afe555848fc2ee4dd4e69847262838cf2b1c08de4815055c15354"}
Mar 12 20:49:22.386948 master-0 kubenswrapper[4038]: I0312 20:49:22.386921 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr664" event={"ID":"6e737121-cc77-4d22-a628-c4b4406b4698","Type":"ContainerDied","Data":"9bf4c1be6e8f5f27e1d4531600a9c494162c56f1b0672c2da1b5f56a6a6cf6ad"}
Mar 12 20:49:22.386948 master-0 kubenswrapper[4038]: I0312 20:49:22.386931 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr664" event={"ID":"6e737121-cc77-4d22-a628-c4b4406b4698","Type":"ContainerDied","Data":"a536e8daf88d265ac0ec5bbcc1a6ff6b976024a604c309d4be054db9174908ac"}
Mar 12 20:49:22.386948 master-0 kubenswrapper[4038]: I0312 20:49:22.386775 4038 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wr664"
Mar 12 20:49:22.387244 master-0 kubenswrapper[4038]: I0312 20:49:22.386943 4038 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1562de155ef5a2e995aecf9a9209bd714641f5a0f3a09d3f2800777f8017879b"}
Mar 12 20:49:22.387244 master-0 kubenswrapper[4038]: I0312 20:49:22.387078 4038 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"86adcd43c30b4435a91475c9b147e965e140a3e9f4924dda26a745f67682a165"}
Mar 12 20:49:22.387244 master-0 kubenswrapper[4038]: I0312 20:49:22.387087 4038 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dac8b17cbe97b35d7a29d4ccb6f92047733531100a194d1da38a8f4ef85e413b"}
Mar 12 20:49:22.387244 master-0 kubenswrapper[4038]: I0312 20:49:22.387088 4038 scope.go:117] "RemoveContainer" containerID="623dee5affa936815e3efce0b24d982c00c136f7a3789510c1cd235b6282edcf"
Mar 12 20:49:22.387244 master-0 kubenswrapper[4038]: I0312 20:49:22.387097 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr664" event={"ID":"6e737121-cc77-4d22-a628-c4b4406b4698","Type":"ContainerDied","Data":"1562de155ef5a2e995aecf9a9209bd714641f5a0f3a09d3f2800777f8017879b"}
Mar 12 20:49:22.388284 master-0 kubenswrapper[4038]: I0312 20:49:22.388171 4038 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"623dee5affa936815e3efce0b24d982c00c136f7a3789510c1cd235b6282edcf"}
Mar 12 20:49:22.388284 master-0 kubenswrapper[4038]: I0312 20:49:22.388276 4038 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a"}
Mar 12 20:49:22.388369 master-0 kubenswrapper[4038]: I0312 20:49:22.388292 4038 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2aefa02d6f1d888fb0e1aa1a8bfccfcf114c5beabfae6fe7e2ee4c4ec67bf356"}
Mar 12 20:49:22.388369 master-0 kubenswrapper[4038]: I0312 20:49:22.388303 4038 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"acb979e2256afe555848fc2ee4dd4e69847262838cf2b1c08de4815055c15354"}
Mar 12 20:49:22.388369 master-0 kubenswrapper[4038]: I0312 20:49:22.388364 4038 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9bf4c1be6e8f5f27e1d4531600a9c494162c56f1b0672c2da1b5f56a6a6cf6ad"}
Mar 12 20:49:22.388457 master-0 kubenswrapper[4038]: I0312 20:49:22.388376 4038 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a536e8daf88d265ac0ec5bbcc1a6ff6b976024a604c309d4be054db9174908ac"}
Mar 12 20:49:22.388457 master-0 kubenswrapper[4038]: I0312 20:49:22.388391 4038 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1562de155ef5a2e995aecf9a9209bd714641f5a0f3a09d3f2800777f8017879b"}
Mar 12 20:49:22.388527 master-0 kubenswrapper[4038]: I0312 20:49:22.388454 4038 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"86adcd43c30b4435a91475c9b147e965e140a3e9f4924dda26a745f67682a165"}
Mar 12 20:49:22.388527 master-0 kubenswrapper[4038]: I0312 20:49:22.388472 4038 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dac8b17cbe97b35d7a29d4ccb6f92047733531100a194d1da38a8f4ef85e413b"}
Mar 12 20:49:22.388577 master-0 kubenswrapper[4038]: I0312 20:49:22.388550 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr664" event={"ID":"6e737121-cc77-4d22-a628-c4b4406b4698","Type":"ContainerDied","Data":"86adcd43c30b4435a91475c9b147e965e140a3e9f4924dda26a745f67682a165"}
Mar 12 20:49:22.388611 master-0 kubenswrapper[4038]: I0312 20:49:22.388584 4038 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"623dee5affa936815e3efce0b24d982c00c136f7a3789510c1cd235b6282edcf"}
Mar 12 20:49:22.388686 master-0 kubenswrapper[4038]: I0312 20:49:22.388650 4038 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a"}
Mar 12 20:49:22.388686 master-0 kubenswrapper[4038]: I0312 20:49:22.388673 4038 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2aefa02d6f1d888fb0e1aa1a8bfccfcf114c5beabfae6fe7e2ee4c4ec67bf356"}
Mar 12 20:49:22.388686 master-0 kubenswrapper[4038]: I0312 20:49:22.388684 4038 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"acb979e2256afe555848fc2ee4dd4e69847262838cf2b1c08de4815055c15354"}
Mar 12 20:49:22.388778 master-0 kubenswrapper[4038]: I0312 20:49:22.388743 4038 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9bf4c1be6e8f5f27e1d4531600a9c494162c56f1b0672c2da1b5f56a6a6cf6ad"}
Mar 12 20:49:22.388778 master-0 kubenswrapper[4038]: I0312 20:49:22.388757 4038 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a536e8daf88d265ac0ec5bbcc1a6ff6b976024a604c309d4be054db9174908ac"}
Mar 12 20:49:22.388778 master-0 kubenswrapper[4038]: I0312 20:49:22.388768 4038 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1562de155ef5a2e995aecf9a9209bd714641f5a0f3a09d3f2800777f8017879b"}
Mar 12 20:49:22.388867 master-0 kubenswrapper[4038]: I0312 20:49:22.388782 4038 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"86adcd43c30b4435a91475c9b147e965e140a3e9f4924dda26a745f67682a165"}
Mar 12 20:49:22.388905 master-0 kubenswrapper[4038]: I0312 20:49:22.388871 4038 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dac8b17cbe97b35d7a29d4ccb6f92047733531100a194d1da38a8f4ef85e413b"}
Mar 12 20:49:22.388905 master-0 kubenswrapper[4038]: I0312 20:49:22.388892 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wr664" event={"ID":"6e737121-cc77-4d22-a628-c4b4406b4698","Type":"ContainerDied","Data":"bbde71f4d6a08e6432aff49678942efe1e239e2a38fc8d45e30b413ea5aea68e"}
Mar 12 20:49:22.389005 master-0 kubenswrapper[4038]: I0312 20:49:22.388968 4038 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"623dee5affa936815e3efce0b24d982c00c136f7a3789510c1cd235b6282edcf"}
Mar 12 20:49:22.389005 master-0 kubenswrapper[4038]: I0312 20:49:22.388995 4038 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a"}
Mar 12 20:49:22.389064 master-0 kubenswrapper[4038]: I0312 20:49:22.389008 4038 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2aefa02d6f1d888fb0e1aa1a8bfccfcf114c5beabfae6fe7e2ee4c4ec67bf356"}
Mar 12 20:49:22.389064 master-0 kubenswrapper[4038]: I0312 20:49:22.389019 4038 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"acb979e2256afe555848fc2ee4dd4e69847262838cf2b1c08de4815055c15354"}
Mar 12 20:49:22.389119 master-0 kubenswrapper[4038]: I0312 20:49:22.389070 4038 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9bf4c1be6e8f5f27e1d4531600a9c494162c56f1b0672c2da1b5f56a6a6cf6ad"}
Mar 12 20:49:22.389119 master-0 kubenswrapper[4038]: I0312 20:49:22.389081 4038 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a536e8daf88d265ac0ec5bbcc1a6ff6b976024a604c309d4be054db9174908ac"}
Mar 12 20:49:22.389218 master-0 kubenswrapper[4038]: I0312 20:49:22.389093 4038 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1562de155ef5a2e995aecf9a9209bd714641f5a0f3a09d3f2800777f8017879b"}
Mar 12 20:49:22.389999 master-0 kubenswrapper[4038]: I0312 20:49:22.389911 4038 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"86adcd43c30b4435a91475c9b147e965e140a3e9f4924dda26a745f67682a165"}
Mar 12 20:49:22.389999 master-0 kubenswrapper[4038]: I0312 20:49:22.389993 4038 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dac8b17cbe97b35d7a29d4ccb6f92047733531100a194d1da38a8f4ef85e413b"}
Mar 12 20:49:22.410361 master-0 kubenswrapper[4038]: I0312 20:49:22.410301 4038 scope.go:117] "RemoveContainer" containerID="ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a"
Mar 12 20:49:22.415692 master-0 kubenswrapper[4038]: I0312 20:49:22.414354 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-run-openvswitch\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.415692 master-0 kubenswrapper[4038]: I0312 20:49:22.414417 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-kubelet\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.415692 master-0 kubenswrapper[4038]: I0312 20:49:22.414503 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-systemd-units\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.415692 master-0 kubenswrapper[4038]: I0312 20:49:22.414574 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.415692 master-0 kubenswrapper[4038]: I0312 20:49:22.414618 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-etc-openvswitch\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.415692 master-0 kubenswrapper[4038]: I0312 20:49:22.414662 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-cni-netd\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.415692 master-0 kubenswrapper[4038]: I0312 20:49:22.414703 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c3daeefa-7842-464c-a6c9-01b44ebea477-env-overrides\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.415692 master-0 kubenswrapper[4038]: I0312 20:49:22.414787 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-slash\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.415692 master-0 kubenswrapper[4038]: I0312 20:49:22.414846 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-node-log\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.415692 master-0 kubenswrapper[4038]: I0312 20:49:22.414894 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-cni-bin\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.415692 master-0 kubenswrapper[4038]: I0312 20:49:22.414956 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-run-netns\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.415692 master-0 kubenswrapper[4038]: I0312 20:49:22.414985 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c3daeefa-7842-464c-a6c9-01b44ebea477-ovnkube-config\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.415692 master-0 kubenswrapper[4038]: I0312 20:49:22.415019 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c3daeefa-7842-464c-a6c9-01b44ebea477-ovn-node-metrics-cert\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.415692 master-0 kubenswrapper[4038]: I0312 20:49:22.415082 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-var-lib-openvswitch\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.415692 master-0 kubenswrapper[4038]: I0312 20:49:22.415109 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-run-ovn\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.415692 master-0 kubenswrapper[4038]: I0312 20:49:22.415140 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c3daeefa-7842-464c-a6c9-01b44ebea477-ovnkube-script-lib\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.416367 master-0 kubenswrapper[4038]: I0312 20:49:22.415177 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrk7w\" (UniqueName: \"kubernetes.io/projected/c3daeefa-7842-464c-a6c9-01b44ebea477-kube-api-access-jrk7w\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.416367 master-0 kubenswrapper[4038]: I0312 20:49:22.415212 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-run-ovn-kubernetes\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.416367 master-0 kubenswrapper[4038]: I0312 20:49:22.415302 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-run-systemd\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.416367 master-0 kubenswrapper[4038]: I0312 20:49:22.415349 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-log-socket\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.416367 master-0 kubenswrapper[4038]: I0312 20:49:22.415569 4038 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6e737121-cc77-4d22-a628-c4b4406b4698-env-overrides\") on node \"master-0\" DevicePath \"\""
Mar 12 20:49:22.416367 master-0 kubenswrapper[4038]: I0312 20:49:22.415595 4038 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zq9zx\" (UniqueName: \"kubernetes.io/projected/6e737121-cc77-4d22-a628-c4b4406b4698-kube-api-access-zq9zx\") on node \"master-0\" DevicePath \"\""
Mar 12 20:49:22.416367 master-0 kubenswrapper[4038]: I0312 20:49:22.415686 4038 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6e737121-cc77-4d22-a628-c4b4406b4698-ovn-node-metrics-cert\") on node \"master-0\" DevicePath \"\""
Mar 12 20:49:22.416367 master-0 kubenswrapper[4038]: I0312 20:49:22.415709 4038 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6e737121-cc77-4d22-a628-c4b4406b4698-ovnkube-config\") on node \"master-0\" DevicePath \"\""
Mar 12 20:49:22.416367 master-0 kubenswrapper[4038]: I0312 20:49:22.415729 4038 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6e737121-cc77-4d22-a628-c4b4406b4698-run-systemd\") on node \"master-0\" DevicePath \"\""
Mar 12 20:49:22.428965 master-0 kubenswrapper[4038]: I0312 20:49:22.428889 4038 scope.go:117] "RemoveContainer" containerID="2aefa02d6f1d888fb0e1aa1a8bfccfcf114c5beabfae6fe7e2ee4c4ec67bf356"
Mar 12 20:49:22.444010 master-0 kubenswrapper[4038]: I0312 20:49:22.443952 4038 scope.go:117] "RemoveContainer" containerID="acb979e2256afe555848fc2ee4dd4e69847262838cf2b1c08de4815055c15354"
Mar 12 20:49:22.449255 master-0 kubenswrapper[4038]: I0312 20:49:22.449156 4038 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wr664"]
Mar 12 20:49:22.453399 master-0 kubenswrapper[4038]: I0312 20:49:22.453038 4038 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wr664"]
Mar 12 20:49:22.466065 master-0 kubenswrapper[4038]: I0312 20:49:22.466009 4038 scope.go:117] "RemoveContainer" containerID="9bf4c1be6e8f5f27e1d4531600a9c494162c56f1b0672c2da1b5f56a6a6cf6ad"
Mar 12 20:49:22.479427 master-0 kubenswrapper[4038]: I0312 20:49:22.479375 4038 scope.go:117] "RemoveContainer" containerID="a536e8daf88d265ac0ec5bbcc1a6ff6b976024a604c309d4be054db9174908ac"
Mar 12 20:49:22.493189 master-0 kubenswrapper[4038]: I0312 20:49:22.493116 4038 scope.go:117] "RemoveContainer" containerID="1562de155ef5a2e995aecf9a9209bd714641f5a0f3a09d3f2800777f8017879b"
Mar 12 20:49:22.506340 master-0 kubenswrapper[4038]: I0312 20:49:22.506287 4038 scope.go:117] "RemoveContainer" containerID="86adcd43c30b4435a91475c9b147e965e140a3e9f4924dda26a745f67682a165"
Mar 12 20:49:22.516458 master-0 kubenswrapper[4038]: I0312 20:49:22.516393 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-var-lib-openvswitch\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.516458 master-0 kubenswrapper[4038]: I0312 20:49:22.516454 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-run-ovn\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.516575 master-0 kubenswrapper[4038]: I0312 20:49:22.516489 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c3daeefa-7842-464c-a6c9-01b44ebea477-ovnkube-script-lib\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.516683 master-0 kubenswrapper[4038]: I0312 20:49:22.516632 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-run-ovn\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.516726 master-0 kubenswrapper[4038]: I0312 20:49:22.516707 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrk7w\" (UniqueName: \"kubernetes.io/projected/c3daeefa-7842-464c-a6c9-01b44ebea477-kube-api-access-jrk7w\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.516726 master-0 kubenswrapper[4038]: I0312 20:49:22.516715 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-var-lib-openvswitch\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.516889 master-0 kubenswrapper[4038]: I0312 20:49:22.516746 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-run-ovn-kubernetes\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.516889 master-0 kubenswrapper[4038]: I0312 20:49:22.516780 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-run-systemd\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.516889 master-0 kubenswrapper[4038]: I0312 20:49:22.516833 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-log-socket\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.517160 master-0 kubenswrapper[4038]: I0312 20:49:22.517077 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-systemd-units\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.517216 master-0 kubenswrapper[4038]: I0312 20:49:22.517091 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-log-socket\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.517256 master-0 kubenswrapper[4038]: I0312 20:49:22.517202 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-systemd-units\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.517293 master-0 kubenswrapper[4038]: I0312 20:49:22.517271 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-run-ovn-kubernetes\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.517373 master-0 kubenswrapper[4038]: I0312 20:49:22.517332 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-run-systemd\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.517498 master-0 kubenswrapper[4038]: I0312 20:49:22.517468 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-run-openvswitch\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.517604 master-0 kubenswrapper[4038]: I0312 20:49:22.517570 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-run-openvswitch\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.517656 master-0 kubenswrapper[4038]: I0312 20:49:22.517587 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c3daeefa-7842-464c-a6c9-01b44ebea477-ovnkube-script-lib\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.517728 master-0 kubenswrapper[4038]: I0312 20:49:22.517687 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-kubelet\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.517770 master-0 kubenswrapper[4038]: I0312 20:49:22.517585 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-kubelet\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.517826 master-0 kubenswrapper[4038]: I0312 20:49:22.517766 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.517884 master-0 kubenswrapper[4038]: I0312 20:49:22.517800 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c3daeefa-7842-464c-a6c9-01b44ebea477-env-overrides\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.517884 master-0 kubenswrapper[4038]: I0312 20:49:22.517867 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-etc-openvswitch\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.517959 master-0 kubenswrapper[4038]: I0312 20:49:22.517895 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-cni-netd\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.517959 master-0 kubenswrapper[4038]: I0312 20:49:22.517937 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-slash\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.518037 master-0 kubenswrapper[4038]: I0312 20:49:22.517969 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-node-log\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.518037 master-0 kubenswrapper[4038]: I0312 20:49:22.518001 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-cni-bin\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.518109 master-0 kubenswrapper[4038]: I0312 20:49:22.518040 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-run-netns\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.518109 master-0 kubenswrapper[4038]: I0312 20:49:22.518069 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c3daeefa-7842-464c-a6c9-01b44ebea477-ovnkube-config\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.518173 master-0 kubenswrapper[4038]: I0312 20:49:22.518099 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c3daeefa-7842-464c-a6c9-01b44ebea477-ovn-node-metrics-cert\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.518311 master-0 kubenswrapper[4038]: I0312 20:49:22.518288 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-slash\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.518428 master-0 kubenswrapper[4038]: I0312 20:49:22.518388 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-cni-netd\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.518479 master-0 kubenswrapper[4038]: I0312 20:49:22.518315 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-cni-bin\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.518479 master-0 kubenswrapper[4038]: I0312 20:49:22.518423 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-etc-openvswitch\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.518479 master-0 kubenswrapper[4038]: I0312 20:49:22.518462 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-run-netns\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.518586 master-0 kubenswrapper[4038]: I0312 20:49:22.518523 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.518701 master-0 kubenswrapper[4038]: I0312 20:49:22.518665 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-node-log\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.519348 master-0 kubenswrapper[4038]: I0312 20:49:22.519305 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c3daeefa-7842-464c-a6c9-01b44ebea477-env-overrides\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.519500 master-0 kubenswrapper[4038]: I0312 20:49:22.519482 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c3daeefa-7842-464c-a6c9-01b44ebea477-ovnkube-config\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.524782 master-0 kubenswrapper[4038]: I0312 20:49:22.524758 4038 scope.go:117] "RemoveContainer" containerID="dac8b17cbe97b35d7a29d4ccb6f92047733531100a194d1da38a8f4ef85e413b"
Mar 12 20:49:22.525916 master-0 kubenswrapper[4038]: I0312 20:49:22.525867 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c3daeefa-7842-464c-a6c9-01b44ebea477-ovn-node-metrics-cert\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.536670 master-0 kubenswrapper[4038]: I0312 20:49:22.536648 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrk7w\" (UniqueName: \"kubernetes.io/projected/c3daeefa-7842-464c-a6c9-01b44ebea477-kube-api-access-jrk7w\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:22.541647 master-0 kubenswrapper[4038]: I0312 20:49:22.541630 4038 scope.go:117] "RemoveContainer" containerID="623dee5affa936815e3efce0b24d982c00c136f7a3789510c1cd235b6282edcf"
Mar 12 20:49:22.542345 master-0 kubenswrapper[4038]: E0312 20:49:22.542305 4038 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container
\"623dee5affa936815e3efce0b24d982c00c136f7a3789510c1cd235b6282edcf\": container with ID starting with 623dee5affa936815e3efce0b24d982c00c136f7a3789510c1cd235b6282edcf not found: ID does not exist" containerID="623dee5affa936815e3efce0b24d982c00c136f7a3789510c1cd235b6282edcf" Mar 12 20:49:22.542446 master-0 kubenswrapper[4038]: I0312 20:49:22.542421 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"623dee5affa936815e3efce0b24d982c00c136f7a3789510c1cd235b6282edcf"} err="failed to get container status \"623dee5affa936815e3efce0b24d982c00c136f7a3789510c1cd235b6282edcf\": rpc error: code = NotFound desc = could not find container \"623dee5affa936815e3efce0b24d982c00c136f7a3789510c1cd235b6282edcf\": container with ID starting with 623dee5affa936815e3efce0b24d982c00c136f7a3789510c1cd235b6282edcf not found: ID does not exist" Mar 12 20:49:22.542503 master-0 kubenswrapper[4038]: I0312 20:49:22.542494 4038 scope.go:117] "RemoveContainer" containerID="ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a" Mar 12 20:49:22.543079 master-0 kubenswrapper[4038]: E0312 20:49:22.543005 4038 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a\": container with ID starting with ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a not found: ID does not exist" containerID="ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a" Mar 12 20:49:22.543145 master-0 kubenswrapper[4038]: I0312 20:49:22.543099 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a"} err="failed to get container status \"ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a\": rpc error: code = NotFound desc = could not find container 
\"ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a\": container with ID starting with ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a not found: ID does not exist" Mar 12 20:49:22.543189 master-0 kubenswrapper[4038]: I0312 20:49:22.543157 4038 scope.go:117] "RemoveContainer" containerID="2aefa02d6f1d888fb0e1aa1a8bfccfcf114c5beabfae6fe7e2ee4c4ec67bf356" Mar 12 20:49:22.543687 master-0 kubenswrapper[4038]: E0312 20:49:22.543633 4038 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2aefa02d6f1d888fb0e1aa1a8bfccfcf114c5beabfae6fe7e2ee4c4ec67bf356\": container with ID starting with 2aefa02d6f1d888fb0e1aa1a8bfccfcf114c5beabfae6fe7e2ee4c4ec67bf356 not found: ID does not exist" containerID="2aefa02d6f1d888fb0e1aa1a8bfccfcf114c5beabfae6fe7e2ee4c4ec67bf356" Mar 12 20:49:22.543745 master-0 kubenswrapper[4038]: I0312 20:49:22.543705 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aefa02d6f1d888fb0e1aa1a8bfccfcf114c5beabfae6fe7e2ee4c4ec67bf356"} err="failed to get container status \"2aefa02d6f1d888fb0e1aa1a8bfccfcf114c5beabfae6fe7e2ee4c4ec67bf356\": rpc error: code = NotFound desc = could not find container \"2aefa02d6f1d888fb0e1aa1a8bfccfcf114c5beabfae6fe7e2ee4c4ec67bf356\": container with ID starting with 2aefa02d6f1d888fb0e1aa1a8bfccfcf114c5beabfae6fe7e2ee4c4ec67bf356 not found: ID does not exist" Mar 12 20:49:22.543778 master-0 kubenswrapper[4038]: I0312 20:49:22.543753 4038 scope.go:117] "RemoveContainer" containerID="acb979e2256afe555848fc2ee4dd4e69847262838cf2b1c08de4815055c15354" Mar 12 20:49:22.544232 master-0 kubenswrapper[4038]: E0312 20:49:22.544210 4038 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acb979e2256afe555848fc2ee4dd4e69847262838cf2b1c08de4815055c15354\": container with ID starting with 
acb979e2256afe555848fc2ee4dd4e69847262838cf2b1c08de4815055c15354 not found: ID does not exist" containerID="acb979e2256afe555848fc2ee4dd4e69847262838cf2b1c08de4815055c15354" Mar 12 20:49:22.544327 master-0 kubenswrapper[4038]: I0312 20:49:22.544309 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acb979e2256afe555848fc2ee4dd4e69847262838cf2b1c08de4815055c15354"} err="failed to get container status \"acb979e2256afe555848fc2ee4dd4e69847262838cf2b1c08de4815055c15354\": rpc error: code = NotFound desc = could not find container \"acb979e2256afe555848fc2ee4dd4e69847262838cf2b1c08de4815055c15354\": container with ID starting with acb979e2256afe555848fc2ee4dd4e69847262838cf2b1c08de4815055c15354 not found: ID does not exist" Mar 12 20:49:22.544383 master-0 kubenswrapper[4038]: I0312 20:49:22.544373 4038 scope.go:117] "RemoveContainer" containerID="9bf4c1be6e8f5f27e1d4531600a9c494162c56f1b0672c2da1b5f56a6a6cf6ad" Mar 12 20:49:22.544756 master-0 kubenswrapper[4038]: E0312 20:49:22.544741 4038 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bf4c1be6e8f5f27e1d4531600a9c494162c56f1b0672c2da1b5f56a6a6cf6ad\": container with ID starting with 9bf4c1be6e8f5f27e1d4531600a9c494162c56f1b0672c2da1b5f56a6a6cf6ad not found: ID does not exist" containerID="9bf4c1be6e8f5f27e1d4531600a9c494162c56f1b0672c2da1b5f56a6a6cf6ad" Mar 12 20:49:22.544879 master-0 kubenswrapper[4038]: I0312 20:49:22.544860 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bf4c1be6e8f5f27e1d4531600a9c494162c56f1b0672c2da1b5f56a6a6cf6ad"} err="failed to get container status \"9bf4c1be6e8f5f27e1d4531600a9c494162c56f1b0672c2da1b5f56a6a6cf6ad\": rpc error: code = NotFound desc = could not find container \"9bf4c1be6e8f5f27e1d4531600a9c494162c56f1b0672c2da1b5f56a6a6cf6ad\": container with ID starting with 
9bf4c1be6e8f5f27e1d4531600a9c494162c56f1b0672c2da1b5f56a6a6cf6ad not found: ID does not exist" Mar 12 20:49:22.544941 master-0 kubenswrapper[4038]: I0312 20:49:22.544931 4038 scope.go:117] "RemoveContainer" containerID="a536e8daf88d265ac0ec5bbcc1a6ff6b976024a604c309d4be054db9174908ac" Mar 12 20:49:22.545368 master-0 kubenswrapper[4038]: E0312 20:49:22.545351 4038 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a536e8daf88d265ac0ec5bbcc1a6ff6b976024a604c309d4be054db9174908ac\": container with ID starting with a536e8daf88d265ac0ec5bbcc1a6ff6b976024a604c309d4be054db9174908ac not found: ID does not exist" containerID="a536e8daf88d265ac0ec5bbcc1a6ff6b976024a604c309d4be054db9174908ac" Mar 12 20:49:22.545443 master-0 kubenswrapper[4038]: I0312 20:49:22.545425 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a536e8daf88d265ac0ec5bbcc1a6ff6b976024a604c309d4be054db9174908ac"} err="failed to get container status \"a536e8daf88d265ac0ec5bbcc1a6ff6b976024a604c309d4be054db9174908ac\": rpc error: code = NotFound desc = could not find container \"a536e8daf88d265ac0ec5bbcc1a6ff6b976024a604c309d4be054db9174908ac\": container with ID starting with a536e8daf88d265ac0ec5bbcc1a6ff6b976024a604c309d4be054db9174908ac not found: ID does not exist" Mar 12 20:49:22.545497 master-0 kubenswrapper[4038]: I0312 20:49:22.545486 4038 scope.go:117] "RemoveContainer" containerID="1562de155ef5a2e995aecf9a9209bd714641f5a0f3a09d3f2800777f8017879b" Mar 12 20:49:22.545992 master-0 kubenswrapper[4038]: E0312 20:49:22.545945 4038 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1562de155ef5a2e995aecf9a9209bd714641f5a0f3a09d3f2800777f8017879b\": container with ID starting with 1562de155ef5a2e995aecf9a9209bd714641f5a0f3a09d3f2800777f8017879b not found: ID does not exist" 
containerID="1562de155ef5a2e995aecf9a9209bd714641f5a0f3a09d3f2800777f8017879b" Mar 12 20:49:22.546045 master-0 kubenswrapper[4038]: I0312 20:49:22.545994 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1562de155ef5a2e995aecf9a9209bd714641f5a0f3a09d3f2800777f8017879b"} err="failed to get container status \"1562de155ef5a2e995aecf9a9209bd714641f5a0f3a09d3f2800777f8017879b\": rpc error: code = NotFound desc = could not find container \"1562de155ef5a2e995aecf9a9209bd714641f5a0f3a09d3f2800777f8017879b\": container with ID starting with 1562de155ef5a2e995aecf9a9209bd714641f5a0f3a09d3f2800777f8017879b not found: ID does not exist" Mar 12 20:49:22.546045 master-0 kubenswrapper[4038]: I0312 20:49:22.546026 4038 scope.go:117] "RemoveContainer" containerID="86adcd43c30b4435a91475c9b147e965e140a3e9f4924dda26a745f67682a165" Mar 12 20:49:22.546415 master-0 kubenswrapper[4038]: E0312 20:49:22.546397 4038 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86adcd43c30b4435a91475c9b147e965e140a3e9f4924dda26a745f67682a165\": container with ID starting with 86adcd43c30b4435a91475c9b147e965e140a3e9f4924dda26a745f67682a165 not found: ID does not exist" containerID="86adcd43c30b4435a91475c9b147e965e140a3e9f4924dda26a745f67682a165" Mar 12 20:49:22.546511 master-0 kubenswrapper[4038]: I0312 20:49:22.546469 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86adcd43c30b4435a91475c9b147e965e140a3e9f4924dda26a745f67682a165"} err="failed to get container status \"86adcd43c30b4435a91475c9b147e965e140a3e9f4924dda26a745f67682a165\": rpc error: code = NotFound desc = could not find container \"86adcd43c30b4435a91475c9b147e965e140a3e9f4924dda26a745f67682a165\": container with ID starting with 86adcd43c30b4435a91475c9b147e965e140a3e9f4924dda26a745f67682a165 not found: ID does not exist" Mar 12 20:49:22.546571 master-0 
kubenswrapper[4038]: I0312 20:49:22.546559 4038 scope.go:117] "RemoveContainer" containerID="dac8b17cbe97b35d7a29d4ccb6f92047733531100a194d1da38a8f4ef85e413b" Mar 12 20:49:22.546964 master-0 kubenswrapper[4038]: E0312 20:49:22.546949 4038 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dac8b17cbe97b35d7a29d4ccb6f92047733531100a194d1da38a8f4ef85e413b\": container with ID starting with dac8b17cbe97b35d7a29d4ccb6f92047733531100a194d1da38a8f4ef85e413b not found: ID does not exist" containerID="dac8b17cbe97b35d7a29d4ccb6f92047733531100a194d1da38a8f4ef85e413b" Mar 12 20:49:22.547047 master-0 kubenswrapper[4038]: I0312 20:49:22.547032 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dac8b17cbe97b35d7a29d4ccb6f92047733531100a194d1da38a8f4ef85e413b"} err="failed to get container status \"dac8b17cbe97b35d7a29d4ccb6f92047733531100a194d1da38a8f4ef85e413b\": rpc error: code = NotFound desc = could not find container \"dac8b17cbe97b35d7a29d4ccb6f92047733531100a194d1da38a8f4ef85e413b\": container with ID starting with dac8b17cbe97b35d7a29d4ccb6f92047733531100a194d1da38a8f4ef85e413b not found: ID does not exist" Mar 12 20:49:22.547101 master-0 kubenswrapper[4038]: I0312 20:49:22.547092 4038 scope.go:117] "RemoveContainer" containerID="623dee5affa936815e3efce0b24d982c00c136f7a3789510c1cd235b6282edcf" Mar 12 20:49:22.547435 master-0 kubenswrapper[4038]: I0312 20:49:22.547420 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"623dee5affa936815e3efce0b24d982c00c136f7a3789510c1cd235b6282edcf"} err="failed to get container status \"623dee5affa936815e3efce0b24d982c00c136f7a3789510c1cd235b6282edcf\": rpc error: code = NotFound desc = could not find container \"623dee5affa936815e3efce0b24d982c00c136f7a3789510c1cd235b6282edcf\": container with ID starting with 
623dee5affa936815e3efce0b24d982c00c136f7a3789510c1cd235b6282edcf not found: ID does not exist" Mar 12 20:49:22.547498 master-0 kubenswrapper[4038]: I0312 20:49:22.547488 4038 scope.go:117] "RemoveContainer" containerID="ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a" Mar 12 20:49:22.547899 master-0 kubenswrapper[4038]: I0312 20:49:22.547871 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a"} err="failed to get container status \"ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a\": rpc error: code = NotFound desc = could not find container \"ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a\": container with ID starting with ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a not found: ID does not exist" Mar 12 20:49:22.547899 master-0 kubenswrapper[4038]: I0312 20:49:22.547901 4038 scope.go:117] "RemoveContainer" containerID="2aefa02d6f1d888fb0e1aa1a8bfccfcf114c5beabfae6fe7e2ee4c4ec67bf356" Mar 12 20:49:22.548361 master-0 kubenswrapper[4038]: I0312 20:49:22.548346 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aefa02d6f1d888fb0e1aa1a8bfccfcf114c5beabfae6fe7e2ee4c4ec67bf356"} err="failed to get container status \"2aefa02d6f1d888fb0e1aa1a8bfccfcf114c5beabfae6fe7e2ee4c4ec67bf356\": rpc error: code = NotFound desc = could not find container \"2aefa02d6f1d888fb0e1aa1a8bfccfcf114c5beabfae6fe7e2ee4c4ec67bf356\": container with ID starting with 2aefa02d6f1d888fb0e1aa1a8bfccfcf114c5beabfae6fe7e2ee4c4ec67bf356 not found: ID does not exist" Mar 12 20:49:22.548421 master-0 kubenswrapper[4038]: I0312 20:49:22.548411 4038 scope.go:117] "RemoveContainer" containerID="acb979e2256afe555848fc2ee4dd4e69847262838cf2b1c08de4815055c15354" Mar 12 20:49:22.548765 master-0 kubenswrapper[4038]: I0312 20:49:22.548725 4038 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acb979e2256afe555848fc2ee4dd4e69847262838cf2b1c08de4815055c15354"} err="failed to get container status \"acb979e2256afe555848fc2ee4dd4e69847262838cf2b1c08de4815055c15354\": rpc error: code = NotFound desc = could not find container \"acb979e2256afe555848fc2ee4dd4e69847262838cf2b1c08de4815055c15354\": container with ID starting with acb979e2256afe555848fc2ee4dd4e69847262838cf2b1c08de4815055c15354 not found: ID does not exist" Mar 12 20:49:22.548832 master-0 kubenswrapper[4038]: I0312 20:49:22.548769 4038 scope.go:117] "RemoveContainer" containerID="9bf4c1be6e8f5f27e1d4531600a9c494162c56f1b0672c2da1b5f56a6a6cf6ad" Mar 12 20:49:22.549215 master-0 kubenswrapper[4038]: I0312 20:49:22.549196 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bf4c1be6e8f5f27e1d4531600a9c494162c56f1b0672c2da1b5f56a6a6cf6ad"} err="failed to get container status \"9bf4c1be6e8f5f27e1d4531600a9c494162c56f1b0672c2da1b5f56a6a6cf6ad\": rpc error: code = NotFound desc = could not find container \"9bf4c1be6e8f5f27e1d4531600a9c494162c56f1b0672c2da1b5f56a6a6cf6ad\": container with ID starting with 9bf4c1be6e8f5f27e1d4531600a9c494162c56f1b0672c2da1b5f56a6a6cf6ad not found: ID does not exist" Mar 12 20:49:22.549289 master-0 kubenswrapper[4038]: I0312 20:49:22.549278 4038 scope.go:117] "RemoveContainer" containerID="a536e8daf88d265ac0ec5bbcc1a6ff6b976024a604c309d4be054db9174908ac" Mar 12 20:49:22.549664 master-0 kubenswrapper[4038]: I0312 20:49:22.549649 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a536e8daf88d265ac0ec5bbcc1a6ff6b976024a604c309d4be054db9174908ac"} err="failed to get container status \"a536e8daf88d265ac0ec5bbcc1a6ff6b976024a604c309d4be054db9174908ac\": rpc error: code = NotFound desc = could not find container \"a536e8daf88d265ac0ec5bbcc1a6ff6b976024a604c309d4be054db9174908ac\": container with ID starting with 
a536e8daf88d265ac0ec5bbcc1a6ff6b976024a604c309d4be054db9174908ac not found: ID does not exist" Mar 12 20:49:22.549725 master-0 kubenswrapper[4038]: I0312 20:49:22.549715 4038 scope.go:117] "RemoveContainer" containerID="1562de155ef5a2e995aecf9a9209bd714641f5a0f3a09d3f2800777f8017879b" Mar 12 20:49:22.550091 master-0 kubenswrapper[4038]: I0312 20:49:22.550060 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1562de155ef5a2e995aecf9a9209bd714641f5a0f3a09d3f2800777f8017879b"} err="failed to get container status \"1562de155ef5a2e995aecf9a9209bd714641f5a0f3a09d3f2800777f8017879b\": rpc error: code = NotFound desc = could not find container \"1562de155ef5a2e995aecf9a9209bd714641f5a0f3a09d3f2800777f8017879b\": container with ID starting with 1562de155ef5a2e995aecf9a9209bd714641f5a0f3a09d3f2800777f8017879b not found: ID does not exist" Mar 12 20:49:22.550153 master-0 kubenswrapper[4038]: I0312 20:49:22.550089 4038 scope.go:117] "RemoveContainer" containerID="86adcd43c30b4435a91475c9b147e965e140a3e9f4924dda26a745f67682a165" Mar 12 20:49:22.550510 master-0 kubenswrapper[4038]: I0312 20:49:22.550486 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86adcd43c30b4435a91475c9b147e965e140a3e9f4924dda26a745f67682a165"} err="failed to get container status \"86adcd43c30b4435a91475c9b147e965e140a3e9f4924dda26a745f67682a165\": rpc error: code = NotFound desc = could not find container \"86adcd43c30b4435a91475c9b147e965e140a3e9f4924dda26a745f67682a165\": container with ID starting with 86adcd43c30b4435a91475c9b147e965e140a3e9f4924dda26a745f67682a165 not found: ID does not exist" Mar 12 20:49:22.550644 master-0 kubenswrapper[4038]: I0312 20:49:22.550632 4038 scope.go:117] "RemoveContainer" containerID="dac8b17cbe97b35d7a29d4ccb6f92047733531100a194d1da38a8f4ef85e413b" Mar 12 20:49:22.550914 master-0 kubenswrapper[4038]: I0312 20:49:22.550894 4038 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:22.551069 master-0 kubenswrapper[4038]: I0312 20:49:22.551031 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dac8b17cbe97b35d7a29d4ccb6f92047733531100a194d1da38a8f4ef85e413b"} err="failed to get container status \"dac8b17cbe97b35d7a29d4ccb6f92047733531100a194d1da38a8f4ef85e413b\": rpc error: code = NotFound desc = could not find container \"dac8b17cbe97b35d7a29d4ccb6f92047733531100a194d1da38a8f4ef85e413b\": container with ID starting with dac8b17cbe97b35d7a29d4ccb6f92047733531100a194d1da38a8f4ef85e413b not found: ID does not exist" Mar 12 20:49:22.551118 master-0 kubenswrapper[4038]: I0312 20:49:22.551071 4038 scope.go:117] "RemoveContainer" containerID="623dee5affa936815e3efce0b24d982c00c136f7a3789510c1cd235b6282edcf" Mar 12 20:49:22.551611 master-0 kubenswrapper[4038]: I0312 20:49:22.551590 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"623dee5affa936815e3efce0b24d982c00c136f7a3789510c1cd235b6282edcf"} err="failed to get container status \"623dee5affa936815e3efce0b24d982c00c136f7a3789510c1cd235b6282edcf\": rpc error: code = NotFound desc = could not find container \"623dee5affa936815e3efce0b24d982c00c136f7a3789510c1cd235b6282edcf\": container with ID starting with 623dee5affa936815e3efce0b24d982c00c136f7a3789510c1cd235b6282edcf not found: ID does not exist" Mar 12 20:49:22.551669 master-0 kubenswrapper[4038]: I0312 20:49:22.551659 4038 scope.go:117] "RemoveContainer" containerID="ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a" Mar 12 20:49:22.552055 master-0 kubenswrapper[4038]: I0312 20:49:22.552039 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a"} err="failed to get container status 
\"ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a\": rpc error: code = NotFound desc = could not find container \"ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a\": container with ID starting with ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a not found: ID does not exist" Mar 12 20:49:22.552121 master-0 kubenswrapper[4038]: I0312 20:49:22.552111 4038 scope.go:117] "RemoveContainer" containerID="2aefa02d6f1d888fb0e1aa1a8bfccfcf114c5beabfae6fe7e2ee4c4ec67bf356" Mar 12 20:49:22.552507 master-0 kubenswrapper[4038]: I0312 20:49:22.552474 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aefa02d6f1d888fb0e1aa1a8bfccfcf114c5beabfae6fe7e2ee4c4ec67bf356"} err="failed to get container status \"2aefa02d6f1d888fb0e1aa1a8bfccfcf114c5beabfae6fe7e2ee4c4ec67bf356\": rpc error: code = NotFound desc = could not find container \"2aefa02d6f1d888fb0e1aa1a8bfccfcf114c5beabfae6fe7e2ee4c4ec67bf356\": container with ID starting with 2aefa02d6f1d888fb0e1aa1a8bfccfcf114c5beabfae6fe7e2ee4c4ec67bf356 not found: ID does not exist" Mar 12 20:49:22.552558 master-0 kubenswrapper[4038]: I0312 20:49:22.552513 4038 scope.go:117] "RemoveContainer" containerID="acb979e2256afe555848fc2ee4dd4e69847262838cf2b1c08de4815055c15354" Mar 12 20:49:22.552881 master-0 kubenswrapper[4038]: I0312 20:49:22.552864 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acb979e2256afe555848fc2ee4dd4e69847262838cf2b1c08de4815055c15354"} err="failed to get container status \"acb979e2256afe555848fc2ee4dd4e69847262838cf2b1c08de4815055c15354\": rpc error: code = NotFound desc = could not find container \"acb979e2256afe555848fc2ee4dd4e69847262838cf2b1c08de4815055c15354\": container with ID starting with acb979e2256afe555848fc2ee4dd4e69847262838cf2b1c08de4815055c15354 not found: ID does not exist" Mar 12 20:49:22.552945 master-0 kubenswrapper[4038]: I0312 20:49:22.552934 4038 
scope.go:117] "RemoveContainer" containerID="9bf4c1be6e8f5f27e1d4531600a9c494162c56f1b0672c2da1b5f56a6a6cf6ad" Mar 12 20:49:22.553320 master-0 kubenswrapper[4038]: I0312 20:49:22.553302 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bf4c1be6e8f5f27e1d4531600a9c494162c56f1b0672c2da1b5f56a6a6cf6ad"} err="failed to get container status \"9bf4c1be6e8f5f27e1d4531600a9c494162c56f1b0672c2da1b5f56a6a6cf6ad\": rpc error: code = NotFound desc = could not find container \"9bf4c1be6e8f5f27e1d4531600a9c494162c56f1b0672c2da1b5f56a6a6cf6ad\": container with ID starting with 9bf4c1be6e8f5f27e1d4531600a9c494162c56f1b0672c2da1b5f56a6a6cf6ad not found: ID does not exist" Mar 12 20:49:22.553384 master-0 kubenswrapper[4038]: I0312 20:49:22.553374 4038 scope.go:117] "RemoveContainer" containerID="a536e8daf88d265ac0ec5bbcc1a6ff6b976024a604c309d4be054db9174908ac" Mar 12 20:49:22.553695 master-0 kubenswrapper[4038]: I0312 20:49:22.553680 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a536e8daf88d265ac0ec5bbcc1a6ff6b976024a604c309d4be054db9174908ac"} err="failed to get container status \"a536e8daf88d265ac0ec5bbcc1a6ff6b976024a604c309d4be054db9174908ac\": rpc error: code = NotFound desc = could not find container \"a536e8daf88d265ac0ec5bbcc1a6ff6b976024a604c309d4be054db9174908ac\": container with ID starting with a536e8daf88d265ac0ec5bbcc1a6ff6b976024a604c309d4be054db9174908ac not found: ID does not exist" Mar 12 20:49:22.553861 master-0 kubenswrapper[4038]: I0312 20:49:22.553848 4038 scope.go:117] "RemoveContainer" containerID="1562de155ef5a2e995aecf9a9209bd714641f5a0f3a09d3f2800777f8017879b" Mar 12 20:49:22.554207 master-0 kubenswrapper[4038]: I0312 20:49:22.554191 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1562de155ef5a2e995aecf9a9209bd714641f5a0f3a09d3f2800777f8017879b"} err="failed to get container status 
\"1562de155ef5a2e995aecf9a9209bd714641f5a0f3a09d3f2800777f8017879b\": rpc error: code = NotFound desc = could not find container \"1562de155ef5a2e995aecf9a9209bd714641f5a0f3a09d3f2800777f8017879b\": container with ID starting with 1562de155ef5a2e995aecf9a9209bd714641f5a0f3a09d3f2800777f8017879b not found: ID does not exist" Mar 12 20:49:22.554274 master-0 kubenswrapper[4038]: I0312 20:49:22.554264 4038 scope.go:117] "RemoveContainer" containerID="86adcd43c30b4435a91475c9b147e965e140a3e9f4924dda26a745f67682a165" Mar 12 20:49:22.554572 master-0 kubenswrapper[4038]: I0312 20:49:22.554557 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86adcd43c30b4435a91475c9b147e965e140a3e9f4924dda26a745f67682a165"} err="failed to get container status \"86adcd43c30b4435a91475c9b147e965e140a3e9f4924dda26a745f67682a165\": rpc error: code = NotFound desc = could not find container \"86adcd43c30b4435a91475c9b147e965e140a3e9f4924dda26a745f67682a165\": container with ID starting with 86adcd43c30b4435a91475c9b147e965e140a3e9f4924dda26a745f67682a165 not found: ID does not exist" Mar 12 20:49:22.554633 master-0 kubenswrapper[4038]: I0312 20:49:22.554623 4038 scope.go:117] "RemoveContainer" containerID="dac8b17cbe97b35d7a29d4ccb6f92047733531100a194d1da38a8f4ef85e413b" Mar 12 20:49:22.555058 master-0 kubenswrapper[4038]: I0312 20:49:22.555024 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dac8b17cbe97b35d7a29d4ccb6f92047733531100a194d1da38a8f4ef85e413b"} err="failed to get container status \"dac8b17cbe97b35d7a29d4ccb6f92047733531100a194d1da38a8f4ef85e413b\": rpc error: code = NotFound desc = could not find container \"dac8b17cbe97b35d7a29d4ccb6f92047733531100a194d1da38a8f4ef85e413b\": container with ID starting with dac8b17cbe97b35d7a29d4ccb6f92047733531100a194d1da38a8f4ef85e413b not found: ID does not exist" Mar 12 20:49:22.555113 master-0 kubenswrapper[4038]: I0312 20:49:22.555061 4038 
scope.go:117] "RemoveContainer" containerID="623dee5affa936815e3efce0b24d982c00c136f7a3789510c1cd235b6282edcf"
Mar 12 20:49:22.555467 master-0 kubenswrapper[4038]: I0312 20:49:22.555439 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"623dee5affa936815e3efce0b24d982c00c136f7a3789510c1cd235b6282edcf"} err="failed to get container status \"623dee5affa936815e3efce0b24d982c00c136f7a3789510c1cd235b6282edcf\": rpc error: code = NotFound desc = could not find container \"623dee5affa936815e3efce0b24d982c00c136f7a3789510c1cd235b6282edcf\": container with ID starting with 623dee5affa936815e3efce0b24d982c00c136f7a3789510c1cd235b6282edcf not found: ID does not exist"
Mar 12 20:49:22.555515 master-0 kubenswrapper[4038]: I0312 20:49:22.555465 4038 scope.go:117] "RemoveContainer" containerID="ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a"
Mar 12 20:49:22.555753 master-0 kubenswrapper[4038]: I0312 20:49:22.555736 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a"} err="failed to get container status \"ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a\": rpc error: code = NotFound desc = could not find container \"ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a\": container with ID starting with ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a not found: ID does not exist"
Mar 12 20:49:22.555858 master-0 kubenswrapper[4038]: I0312 20:49:22.555844 4038 scope.go:117] "RemoveContainer" containerID="2aefa02d6f1d888fb0e1aa1a8bfccfcf114c5beabfae6fe7e2ee4c4ec67bf356"
Mar 12 20:49:22.556350 master-0 kubenswrapper[4038]: I0312 20:49:22.556334 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aefa02d6f1d888fb0e1aa1a8bfccfcf114c5beabfae6fe7e2ee4c4ec67bf356"} err="failed to get container status \"2aefa02d6f1d888fb0e1aa1a8bfccfcf114c5beabfae6fe7e2ee4c4ec67bf356\": rpc error: code = NotFound desc = could not find container \"2aefa02d6f1d888fb0e1aa1a8bfccfcf114c5beabfae6fe7e2ee4c4ec67bf356\": container with ID starting with 2aefa02d6f1d888fb0e1aa1a8bfccfcf114c5beabfae6fe7e2ee4c4ec67bf356 not found: ID does not exist"
Mar 12 20:49:22.556414 master-0 kubenswrapper[4038]: I0312 20:49:22.556404 4038 scope.go:117] "RemoveContainer" containerID="acb979e2256afe555848fc2ee4dd4e69847262838cf2b1c08de4815055c15354"
Mar 12 20:49:22.556749 master-0 kubenswrapper[4038]: I0312 20:49:22.556732 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acb979e2256afe555848fc2ee4dd4e69847262838cf2b1c08de4815055c15354"} err="failed to get container status \"acb979e2256afe555848fc2ee4dd4e69847262838cf2b1c08de4815055c15354\": rpc error: code = NotFound desc = could not find container \"acb979e2256afe555848fc2ee4dd4e69847262838cf2b1c08de4815055c15354\": container with ID starting with acb979e2256afe555848fc2ee4dd4e69847262838cf2b1c08de4815055c15354 not found: ID does not exist"
Mar 12 20:49:22.556829 master-0 kubenswrapper[4038]: I0312 20:49:22.556802 4038 scope.go:117] "RemoveContainer" containerID="9bf4c1be6e8f5f27e1d4531600a9c494162c56f1b0672c2da1b5f56a6a6cf6ad"
Mar 12 20:49:22.557341 master-0 kubenswrapper[4038]: I0312 20:49:22.557301 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bf4c1be6e8f5f27e1d4531600a9c494162c56f1b0672c2da1b5f56a6a6cf6ad"} err="failed to get container status \"9bf4c1be6e8f5f27e1d4531600a9c494162c56f1b0672c2da1b5f56a6a6cf6ad\": rpc error: code = NotFound desc = could not find container \"9bf4c1be6e8f5f27e1d4531600a9c494162c56f1b0672c2da1b5f56a6a6cf6ad\": container with ID starting with 9bf4c1be6e8f5f27e1d4531600a9c494162c56f1b0672c2da1b5f56a6a6cf6ad not found: ID does not exist"
Mar 12 20:49:22.557341 master-0 kubenswrapper[4038]: I0312 20:49:22.557330 4038 scope.go:117] "RemoveContainer" containerID="a536e8daf88d265ac0ec5bbcc1a6ff6b976024a604c309d4be054db9174908ac"
Mar 12 20:49:22.558374 master-0 kubenswrapper[4038]: I0312 20:49:22.558269 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a536e8daf88d265ac0ec5bbcc1a6ff6b976024a604c309d4be054db9174908ac"} err="failed to get container status \"a536e8daf88d265ac0ec5bbcc1a6ff6b976024a604c309d4be054db9174908ac\": rpc error: code = NotFound desc = could not find container \"a536e8daf88d265ac0ec5bbcc1a6ff6b976024a604c309d4be054db9174908ac\": container with ID starting with a536e8daf88d265ac0ec5bbcc1a6ff6b976024a604c309d4be054db9174908ac not found: ID does not exist"
Mar 12 20:49:22.558374 master-0 kubenswrapper[4038]: I0312 20:49:22.558339 4038 scope.go:117] "RemoveContainer" containerID="1562de155ef5a2e995aecf9a9209bd714641f5a0f3a09d3f2800777f8017879b"
Mar 12 20:49:22.558726 master-0 kubenswrapper[4038]: I0312 20:49:22.558697 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1562de155ef5a2e995aecf9a9209bd714641f5a0f3a09d3f2800777f8017879b"} err="failed to get container status \"1562de155ef5a2e995aecf9a9209bd714641f5a0f3a09d3f2800777f8017879b\": rpc error: code = NotFound desc = could not find container \"1562de155ef5a2e995aecf9a9209bd714641f5a0f3a09d3f2800777f8017879b\": container with ID starting with 1562de155ef5a2e995aecf9a9209bd714641f5a0f3a09d3f2800777f8017879b not found: ID does not exist"
Mar 12 20:49:22.558726 master-0 kubenswrapper[4038]: I0312 20:49:22.558723 4038 scope.go:117] "RemoveContainer" containerID="86adcd43c30b4435a91475c9b147e965e140a3e9f4924dda26a745f67682a165"
Mar 12 20:49:22.559087 master-0 kubenswrapper[4038]: I0312 20:49:22.559064 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86adcd43c30b4435a91475c9b147e965e140a3e9f4924dda26a745f67682a165"} err="failed to get container status \"86adcd43c30b4435a91475c9b147e965e140a3e9f4924dda26a745f67682a165\": rpc error: code = NotFound desc = could not find container \"86adcd43c30b4435a91475c9b147e965e140a3e9f4924dda26a745f67682a165\": container with ID starting with 86adcd43c30b4435a91475c9b147e965e140a3e9f4924dda26a745f67682a165 not found: ID does not exist"
Mar 12 20:49:22.559146 master-0 kubenswrapper[4038]: I0312 20:49:22.559087 4038 scope.go:117] "RemoveContainer" containerID="dac8b17cbe97b35d7a29d4ccb6f92047733531100a194d1da38a8f4ef85e413b"
Mar 12 20:49:22.559397 master-0 kubenswrapper[4038]: I0312 20:49:22.559381 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dac8b17cbe97b35d7a29d4ccb6f92047733531100a194d1da38a8f4ef85e413b"} err="failed to get container status \"dac8b17cbe97b35d7a29d4ccb6f92047733531100a194d1da38a8f4ef85e413b\": rpc error: code = NotFound desc = could not find container \"dac8b17cbe97b35d7a29d4ccb6f92047733531100a194d1da38a8f4ef85e413b\": container with ID starting with dac8b17cbe97b35d7a29d4ccb6f92047733531100a194d1da38a8f4ef85e413b not found: ID does not exist"
Mar 12 20:49:22.559471 master-0 kubenswrapper[4038]: I0312 20:49:22.559459 4038 scope.go:117] "RemoveContainer" containerID="623dee5affa936815e3efce0b24d982c00c136f7a3789510c1cd235b6282edcf"
Mar 12 20:49:22.560253 master-0 kubenswrapper[4038]: I0312 20:49:22.560217 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"623dee5affa936815e3efce0b24d982c00c136f7a3789510c1cd235b6282edcf"} err="failed to get container status \"623dee5affa936815e3efce0b24d982c00c136f7a3789510c1cd235b6282edcf\": rpc error: code = NotFound desc = could not find container \"623dee5affa936815e3efce0b24d982c00c136f7a3789510c1cd235b6282edcf\": container with ID starting with 623dee5affa936815e3efce0b24d982c00c136f7a3789510c1cd235b6282edcf not found: ID does not exist"
Mar 12 20:49:22.560253 master-0 kubenswrapper[4038]: I0312 20:49:22.560250 4038 scope.go:117] "RemoveContainer" containerID="ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a"
Mar 12 20:49:22.560598 master-0 kubenswrapper[4038]: I0312 20:49:22.560528 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a"} err="failed to get container status \"ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a\": rpc error: code = NotFound desc = could not find container \"ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a\": container with ID starting with ba20ca44acd5bd715c4ff17bedc30813ae21cf77ca19610580ebc399af06484a not found: ID does not exist"
Mar 12 20:49:22.560598 master-0 kubenswrapper[4038]: I0312 20:49:22.560557 4038 scope.go:117] "RemoveContainer" containerID="2aefa02d6f1d888fb0e1aa1a8bfccfcf114c5beabfae6fe7e2ee4c4ec67bf356"
Mar 12 20:49:22.560981 master-0 kubenswrapper[4038]: I0312 20:49:22.560945 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aefa02d6f1d888fb0e1aa1a8bfccfcf114c5beabfae6fe7e2ee4c4ec67bf356"} err="failed to get container status \"2aefa02d6f1d888fb0e1aa1a8bfccfcf114c5beabfae6fe7e2ee4c4ec67bf356\": rpc error: code = NotFound desc = could not find container \"2aefa02d6f1d888fb0e1aa1a8bfccfcf114c5beabfae6fe7e2ee4c4ec67bf356\": container with ID starting with 2aefa02d6f1d888fb0e1aa1a8bfccfcf114c5beabfae6fe7e2ee4c4ec67bf356 not found: ID does not exist"
Mar 12 20:49:22.561031 master-0 kubenswrapper[4038]: I0312 20:49:22.560984 4038 scope.go:117] "RemoveContainer" containerID="acb979e2256afe555848fc2ee4dd4e69847262838cf2b1c08de4815055c15354"
Mar 12 20:49:22.562254 master-0 kubenswrapper[4038]: I0312 20:49:22.562228 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acb979e2256afe555848fc2ee4dd4e69847262838cf2b1c08de4815055c15354"} err="failed to get container status \"acb979e2256afe555848fc2ee4dd4e69847262838cf2b1c08de4815055c15354\": rpc error: code = NotFound desc = could not find container \"acb979e2256afe555848fc2ee4dd4e69847262838cf2b1c08de4815055c15354\": container with ID starting with acb979e2256afe555848fc2ee4dd4e69847262838cf2b1c08de4815055c15354 not found: ID does not exist"
Mar 12 20:49:22.562254 master-0 kubenswrapper[4038]: I0312 20:49:22.562251 4038 scope.go:117] "RemoveContainer" containerID="9bf4c1be6e8f5f27e1d4531600a9c494162c56f1b0672c2da1b5f56a6a6cf6ad"
Mar 12 20:49:22.562777 master-0 kubenswrapper[4038]: I0312 20:49:22.562754 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bf4c1be6e8f5f27e1d4531600a9c494162c56f1b0672c2da1b5f56a6a6cf6ad"} err="failed to get container status \"9bf4c1be6e8f5f27e1d4531600a9c494162c56f1b0672c2da1b5f56a6a6cf6ad\": rpc error: code = NotFound desc = could not find container \"9bf4c1be6e8f5f27e1d4531600a9c494162c56f1b0672c2da1b5f56a6a6cf6ad\": container with ID starting with 9bf4c1be6e8f5f27e1d4531600a9c494162c56f1b0672c2da1b5f56a6a6cf6ad not found: ID does not exist"
Mar 12 20:49:22.562929 master-0 kubenswrapper[4038]: I0312 20:49:22.562917 4038 scope.go:117] "RemoveContainer" containerID="a536e8daf88d265ac0ec5bbcc1a6ff6b976024a604c309d4be054db9174908ac"
Mar 12 20:49:22.563847 master-0 kubenswrapper[4038]: I0312 20:49:22.563626 4038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a536e8daf88d265ac0ec5bbcc1a6ff6b976024a604c309d4be054db9174908ac"} err="failed to get container status \"a536e8daf88d265ac0ec5bbcc1a6ff6b976024a604c309d4be054db9174908ac\": rpc error: code = NotFound desc = could not find container \"a536e8daf88d265ac0ec5bbcc1a6ff6b976024a604c309d4be054db9174908ac\": container with ID starting with a536e8daf88d265ac0ec5bbcc1a6ff6b976024a604c309d4be054db9174908ac not found: ID does not exist"
Mar 12 20:49:22.571118 master-0 kubenswrapper[4038]: W0312 20:49:22.571073 4038 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3daeefa_7842_464c_a6c9_01b44ebea477.slice/crio-bc2a01a11374dd8c2befdb90180bc8b98e8fb814dfdade15e6058739f337ecd2 WatchSource:0}: Error finding container bc2a01a11374dd8c2befdb90180bc8b98e8fb814dfdade15e6058739f337ecd2: Status 404 returned error can't find the container with id bc2a01a11374dd8c2befdb90180bc8b98e8fb814dfdade15e6058739f337ecd2
Mar 12 20:49:22.879706 master-0 kubenswrapper[4038]: I0312 20:49:22.879638 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-brdcd"
Mar 12 20:49:22.880158 master-0 kubenswrapper[4038]: I0312 20:49:22.879906 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h26wj"
Mar 12 20:49:22.881026 master-0 kubenswrapper[4038]: E0312 20:49:22.880966 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-brdcd" podUID="c8660437-633f-4132-8a61-fe998abb493e"
Mar 12 20:49:22.881265 master-0 kubenswrapper[4038]: E0312 20:49:22.881222 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h26wj" podUID="5ad63582-bd60-41a1-9622-ee73ccf8a5e8"
Mar 12 20:49:22.886921 master-0 kubenswrapper[4038]: I0312 20:49:22.886853 4038 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e737121-cc77-4d22-a628-c4b4406b4698" path="/var/lib/kubelet/pods/6e737121-cc77-4d22-a628-c4b4406b4698/volumes"
Mar 12 20:49:23.393456 master-0 kubenswrapper[4038]: I0312 20:49:23.393396 4038 generic.go:334] "Generic (PLEG): container finished" podID="c3daeefa-7842-464c-a6c9-01b44ebea477" containerID="29a66354284f4876d7830823c349cadde817f41becb6c2b46ab19ae09fa84f0c" exitCode=0
Mar 12 20:49:23.393456 master-0 kubenswrapper[4038]: I0312 20:49:23.393458 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" event={"ID":"c3daeefa-7842-464c-a6c9-01b44ebea477","Type":"ContainerDied","Data":"29a66354284f4876d7830823c349cadde817f41becb6c2b46ab19ae09fa84f0c"}
Mar 12 20:49:23.395087 master-0 kubenswrapper[4038]: I0312 20:49:23.393505 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" event={"ID":"c3daeefa-7842-464c-a6c9-01b44ebea477","Type":"ContainerStarted","Data":"bc2a01a11374dd8c2befdb90180bc8b98e8fb814dfdade15e6058739f337ecd2"}
Mar 12 20:49:24.401636 master-0 kubenswrapper[4038]: I0312 20:49:24.401546 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" event={"ID":"c3daeefa-7842-464c-a6c9-01b44ebea477","Type":"ContainerStarted","Data":"cd013d178d984a5708be3d4912bf2acde406fd53bffe7def90881613d48b2efc"}
Mar 12 20:49:24.401636 master-0 kubenswrapper[4038]: I0312 20:49:24.401607 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" event={"ID":"c3daeefa-7842-464c-a6c9-01b44ebea477","Type":"ContainerStarted","Data":"75c69b856cbd2d569ae2cb8a4f4791caa3eb629fb1002e7271232a55d04c0d80"}
Mar 12 20:49:24.401636 master-0 kubenswrapper[4038]: I0312 20:49:24.401625 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" event={"ID":"c3daeefa-7842-464c-a6c9-01b44ebea477","Type":"ContainerStarted","Data":"deb8c80238cda8c53814ddd8e0785bce67fd467d10ea1798f6a3a6f1240daf73"}
Mar 12 20:49:24.401636 master-0 kubenswrapper[4038]: I0312 20:49:24.401643 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" event={"ID":"c3daeefa-7842-464c-a6c9-01b44ebea477","Type":"ContainerStarted","Data":"4ffea706d3065d728edc074a409037511dfef0073a71ccb403f4c4fa3d82686c"}
Mar 12 20:49:24.401636 master-0 kubenswrapper[4038]: I0312 20:49:24.401657 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" event={"ID":"c3daeefa-7842-464c-a6c9-01b44ebea477","Type":"ContainerStarted","Data":"d8e171f66fb3153f8ac3fb8eef1ecff5a152ec0b1d1f9f4375e4095fb3cd62a8"}
Mar 12 20:49:24.402570 master-0 kubenswrapper[4038]: I0312 20:49:24.401672 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" event={"ID":"c3daeefa-7842-464c-a6c9-01b44ebea477","Type":"ContainerStarted","Data":"6896933482ed9d6c1302b4d8eb7b428131b8ae684fbaaf4e12fd0274b2693fbc"}
Mar 12 20:49:24.880211 master-0 kubenswrapper[4038]: I0312 20:49:24.880097 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-brdcd"
Mar 12 20:49:24.880597 master-0 kubenswrapper[4038]: E0312 20:49:24.880350 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-brdcd" podUID="c8660437-633f-4132-8a61-fe998abb493e"
Mar 12 20:49:24.880995 master-0 kubenswrapper[4038]: I0312 20:49:24.880930 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h26wj"
Mar 12 20:49:24.881154 master-0 kubenswrapper[4038]: E0312 20:49:24.881071 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h26wj" podUID="5ad63582-bd60-41a1-9622-ee73ccf8a5e8"
Mar 12 20:49:26.420319 master-0 kubenswrapper[4038]: I0312 20:49:26.419882 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" event={"ID":"c3daeefa-7842-464c-a6c9-01b44ebea477","Type":"ContainerStarted","Data":"a1eca13266225fa0adcc321841fa40c352573b878539966ce8b26f4401d84de4"}
Mar 12 20:49:26.879792 master-0 kubenswrapper[4038]: I0312 20:49:26.879658 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-brdcd"
Mar 12 20:49:26.880241 master-0 kubenswrapper[4038]: E0312 20:49:26.879965 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-brdcd" podUID="c8660437-633f-4132-8a61-fe998abb493e"
Mar 12 20:49:26.880241 master-0 kubenswrapper[4038]: I0312 20:49:26.880018 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h26wj"
Mar 12 20:49:26.880460 master-0 kubenswrapper[4038]: E0312 20:49:26.880256 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h26wj" podUID="5ad63582-bd60-41a1-9622-ee73ccf8a5e8"
Mar 12 20:49:27.693172 master-0 kubenswrapper[4038]: I0312 20:49:27.693067 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl"
Mar 12 20:49:27.694222 master-0 kubenswrapper[4038]: E0312 20:49:27.693287 4038 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 12 20:49:27.694222 master-0 kubenswrapper[4038]: E0312 20:49:27.693396 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert podName:1a307172-f010-4bad-a3fc-31607574b069 nodeName:}" failed. No retries permitted until 2026-03-12 20:50:31.693367793 +0000 UTC m=+169.729049656 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert") pod "cluster-version-operator-745944c6b7-wddgl" (UID: "1a307172-f010-4bad-a3fc-31607574b069") : secret "cluster-version-operator-serving-cert" not found
Mar 12 20:49:28.880000 master-0 kubenswrapper[4038]: I0312 20:49:28.879915 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-brdcd"
Mar 12 20:49:28.881336 master-0 kubenswrapper[4038]: I0312 20:49:28.880040 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h26wj"
Mar 12 20:49:28.881336 master-0 kubenswrapper[4038]: E0312 20:49:28.880155 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-brdcd" podUID="c8660437-633f-4132-8a61-fe998abb493e"
Mar 12 20:49:28.881336 master-0 kubenswrapper[4038]: E0312 20:49:28.880386 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h26wj" podUID="5ad63582-bd60-41a1-9622-ee73ccf8a5e8"
Mar 12 20:49:29.444856 master-0 kubenswrapper[4038]: I0312 20:49:29.443658 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" event={"ID":"c3daeefa-7842-464c-a6c9-01b44ebea477","Type":"ContainerStarted","Data":"09feac2d9336772bf1cb83d55a804c578ca63c9dfe2f0be99cddeb1a04f94a9c"}
Mar 12 20:49:29.444856 master-0 kubenswrapper[4038]: I0312 20:49:29.444034 4038 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:29.527465 master-0 kubenswrapper[4038]: I0312 20:49:29.527395 4038 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:29.538079 master-0 kubenswrapper[4038]: I0312 20:49:29.538007 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csxwl\" (UniqueName: \"kubernetes.io/projected/5ad63582-bd60-41a1-9622-ee73ccf8a5e8-kube-api-access-csxwl\") pod \"network-check-target-h26wj\" (UID: \"5ad63582-bd60-41a1-9622-ee73ccf8a5e8\") " pod="openshift-network-diagnostics/network-check-target-h26wj"
Mar 12 20:49:29.538356 master-0 kubenswrapper[4038]: E0312 20:49:29.538191 4038 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 12 20:49:29.538356 master-0 kubenswrapper[4038]: E0312 20:49:29.538215 4038 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 12 20:49:29.538356 master-0 kubenswrapper[4038]: E0312 20:49:29.538228 4038 projected.go:194] Error preparing data for projected volume kube-api-access-csxwl for pod openshift-network-diagnostics/network-check-target-h26wj: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 20:49:29.538356 master-0 kubenswrapper[4038]: E0312 20:49:29.538282 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5ad63582-bd60-41a1-9622-ee73ccf8a5e8-kube-api-access-csxwl podName:5ad63582-bd60-41a1-9622-ee73ccf8a5e8 nodeName:}" failed. No retries permitted until 2026-03-12 20:50:01.538262604 +0000 UTC m=+139.573944467 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-csxwl" (UniqueName: "kubernetes.io/projected/5ad63582-bd60-41a1-9622-ee73ccf8a5e8-kube-api-access-csxwl") pod "network-check-target-h26wj" (UID: "5ad63582-bd60-41a1-9622-ee73ccf8a5e8") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 20:49:29.567682 master-0 kubenswrapper[4038]: I0312 20:49:29.567566 4038 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" podStartSLOduration=7.5675448119999995 podStartE2EDuration="7.567544812s" podCreationTimestamp="2026-03-12 20:49:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 20:49:29.485476151 +0000 UTC m=+107.521158044" watchObservedRunningTime="2026-03-12 20:49:29.567544812 +0000 UTC m=+107.603226675"
Mar 12 20:49:30.008674 master-0 kubenswrapper[4038]: I0312 20:49:30.008272 4038 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-h26wj"]
Mar 12 20:49:30.009495 master-0 kubenswrapper[4038]: I0312 20:49:30.008767 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h26wj"
Mar 12 20:49:30.009495 master-0 kubenswrapper[4038]: E0312 20:49:30.008906 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h26wj" podUID="5ad63582-bd60-41a1-9622-ee73ccf8a5e8"
Mar 12 20:49:30.020036 master-0 kubenswrapper[4038]: I0312 20:49:30.019966 4038 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-brdcd"]
Mar 12 20:49:30.020186 master-0 kubenswrapper[4038]: I0312 20:49:30.020148 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-brdcd"
Mar 12 20:49:30.020377 master-0 kubenswrapper[4038]: E0312 20:49:30.020311 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-brdcd" podUID="c8660437-633f-4132-8a61-fe998abb493e"
Mar 12 20:49:30.449541 master-0 kubenswrapper[4038]: I0312 20:49:30.449473 4038 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:30.449541 master-0 kubenswrapper[4038]: I0312 20:49:30.449539 4038 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:30.483189 master-0 kubenswrapper[4038]: I0312 20:49:30.483075 4038 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:31.879177 master-0 kubenswrapper[4038]: I0312 20:49:31.879079 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-brdcd"
Mar 12 20:49:31.879177 master-0 kubenswrapper[4038]: I0312 20:49:31.879173 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h26wj"
Mar 12 20:49:31.880352 master-0 kubenswrapper[4038]: E0312 20:49:31.879481 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-brdcd" podUID="c8660437-633f-4132-8a61-fe998abb493e"
Mar 12 20:49:31.880352 master-0 kubenswrapper[4038]: E0312 20:49:31.879633 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h26wj" podUID="5ad63582-bd60-41a1-9622-ee73ccf8a5e8"
Mar 12 20:49:31.892186 master-0 kubenswrapper[4038]: I0312 20:49:31.892131 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Mar 12 20:49:32.901434 master-0 kubenswrapper[4038]: I0312 20:49:32.901317 4038 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-controller-manager-master-0" podStartSLOduration=1.9012782499999998 podStartE2EDuration="1.90127825s" podCreationTimestamp="2026-03-12 20:49:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 20:49:32.90126342 +0000 UTC m=+110.936945323" watchObservedRunningTime="2026-03-12 20:49:32.90127825 +0000 UTC m=+110.936960163"
Mar 12 20:49:33.880135 master-0 kubenswrapper[4038]: I0312 20:49:33.880025 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h26wj"
Mar 12 20:49:33.880135 master-0 kubenswrapper[4038]: I0312 20:49:33.880089 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-brdcd"
Mar 12 20:49:33.881191 master-0 kubenswrapper[4038]: E0312 20:49:33.880203 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-h26wj" podUID="5ad63582-bd60-41a1-9622-ee73ccf8a5e8"
Mar 12 20:49:33.881191 master-0 kubenswrapper[4038]: E0312 20:49:33.880307 4038 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-brdcd" podUID="c8660437-633f-4132-8a61-fe998abb493e"
Mar 12 20:49:34.770650 master-0 kubenswrapper[4038]: I0312 20:49:34.770508 4038 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady"
Mar 12 20:49:34.771744 master-0 kubenswrapper[4038]: I0312 20:49:34.770771 4038 kubelet_node_status.go:538] "Fast updating node status as it just became ready"
Mar 12 20:49:34.819090 master-0 kubenswrapper[4038]: I0312 20:49:34.818987 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4"]
Mar 12 20:49:34.819608 master-0 kubenswrapper[4038]: I0312 20:49:34.819557 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4"
Mar 12 20:49:34.824894 master-0 kubenswrapper[4038]: I0312 20:49:34.824732 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 12 20:49:34.825327 master-0 kubenswrapper[4038]: I0312 20:49:34.825267 4038 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 12 20:49:34.825651 master-0 kubenswrapper[4038]: I0312 20:49:34.825613 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 12 20:49:34.837268 master-0 kubenswrapper[4038]: I0312 20:49:34.837158 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt"]
Mar 12 20:49:34.837906 master-0 kubenswrapper[4038]: I0312 20:49:34.837862 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-jwthf"]
Mar 12 20:49:34.838493 master-0 kubenswrapper[4038]: I0312 20:49:34.838446 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-jwthf"
Mar 12 20:49:34.839076 master-0 kubenswrapper[4038]: I0312 20:49:34.838956 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk"]
Mar 12 20:49:34.839206 master-0 kubenswrapper[4038]: I0312 20:49:34.839091 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt"
Mar 12 20:49:34.844056 master-0 kubenswrapper[4038]: I0312 20:49:34.843994 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 12 20:49:34.844419 master-0 kubenswrapper[4038]: I0312 20:49:34.844379 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Mar 12 20:49:34.844668 master-0 kubenswrapper[4038]: I0312 20:49:34.844580 4038 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Mar 12 20:49:34.844770 master-0 kubenswrapper[4038]: I0312 20:49:34.844746 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Mar 12 20:49:34.847912 master-0 kubenswrapper[4038]: I0312 20:49:34.847800 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-qfbrj"]
Mar 12 20:49:34.848347 master-0 kubenswrapper[4038]: I0312 20:49:34.848296 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw"]
Mar 12 20:49:34.848735 master-0 kubenswrapper[4038]: I0312 20:49:34.848685 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8"]
Mar 12 20:49:34.849295 master-0 kubenswrapper[4038]: I0312 20:49:34.849241 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-589895fbb7-tvrxp"]
Mar 12 20:49:34.862880 master-0 kubenswrapper[4038]: I0312 20:49:34.862511 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 12 20:49:34.863190 master-0 kubenswrapper[4038]: I0312 20:49:34.862911 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 12 20:49:34.863190 master-0 kubenswrapper[4038]: I0312 20:49:34.863108 4038 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 12 20:49:34.863411 master-0 kubenswrapper[4038]: I0312 20:49:34.863371 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Mar 12 20:49:34.864192 master-0 kubenswrapper[4038]: I0312 20:49:34.864013 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-kf949"]
Mar 12 20:49:34.864638 master-0 kubenswrapper[4038]: I0312 20:49:34.864469 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9"]
Mar 12 20:49:34.864638 master-0 kubenswrapper[4038]: I0312 20:49:34.864543 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8"
Mar 12 20:49:34.867000 master-0 kubenswrapper[4038]: I0312 20:49:34.866504 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw"
Mar 12 20:49:34.868993 master-0 kubenswrapper[4038]: I0312 20:49:34.868257 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-qfbrj"
Mar 12 20:49:34.868993 master-0 kubenswrapper[4038]: I0312 20:49:34.868784 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9"
Mar 12 20:49:34.876947 master-0 kubenswrapper[4038]: I0312 20:49:34.871407 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-589895fbb7-tvrxp"
Mar 12 20:49:34.876947 master-0 kubenswrapper[4038]: I0312 20:49:34.871571 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx"]
Mar 12 20:49:34.876947 master-0 kubenswrapper[4038]: I0312 20:49:34.872665 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-kf949"
Mar 12 20:49:34.876947 master-0 kubenswrapper[4038]: I0312 20:49:34.873198 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx"
Mar 12 20:49:34.876947 master-0 kubenswrapper[4038]: I0312 20:49:34.873345 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk"
Mar 12 20:49:34.876947 master-0 kubenswrapper[4038]: I0312 20:49:34.873441 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh"]
Mar 12 20:49:34.898606 master-0 kubenswrapper[4038]: I0312 20:49:34.895872 4038 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh" Mar 12 20:49:34.898606 master-0 kubenswrapper[4038]: I0312 20:49:34.897663 4038 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 12 20:49:34.898606 master-0 kubenswrapper[4038]: I0312 20:49:34.897838 4038 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 12 20:49:34.898606 master-0 kubenswrapper[4038]: I0312 20:49:34.897975 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 12 20:49:34.898606 master-0 kubenswrapper[4038]: I0312 20:49:34.898085 4038 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 12 20:49:34.898606 master-0 kubenswrapper[4038]: I0312 20:49:34.898111 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 12 20:49:34.898606 master-0 kubenswrapper[4038]: I0312 20:49:34.898187 4038 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 12 20:49:34.898606 master-0 kubenswrapper[4038]: I0312 20:49:34.898281 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 12 20:49:34.898606 master-0 kubenswrapper[4038]: I0312 20:49:34.898588 4038 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 12 20:49:34.898606 master-0 kubenswrapper[4038]: I0312 20:49:34.898614 4038 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 12 20:49:34.899457 master-0 kubenswrapper[4038]: I0312 20:49:34.899213 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-vp2hs"] Mar 12 20:49:34.902000 master-0 kubenswrapper[4038]: I0312 20:49:34.899601 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-69b6fc6b88-f62j6"] Mar 12 20:49:34.902000 master-0 kubenswrapper[4038]: I0312 20:49:34.899779 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 20:49:34.902000 master-0 kubenswrapper[4038]: I0312 20:49:34.899786 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 12 20:49:34.902000 master-0 kubenswrapper[4038]: I0312 20:49:34.900006 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-vp2hs" Mar 12 20:49:34.902000 master-0 kubenswrapper[4038]: I0312 20:49:34.900043 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-269gt"] Mar 12 20:49:34.902000 master-0 kubenswrapper[4038]: I0312 20:49:34.900197 4038 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-f62j6" Mar 12 20:49:34.920367 master-0 kubenswrapper[4038]: I0312 20:49:34.902939 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-64488f9d78-zsd76"] Mar 12 20:49:34.920367 master-0 kubenswrapper[4038]: I0312 20:49:34.903390 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-56nzk"] Mar 12 20:49:34.920367 master-0 kubenswrapper[4038]: I0312 20:49:34.903632 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 12 20:49:34.920367 master-0 kubenswrapper[4038]: I0312 20:49:34.904512 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 12 20:49:34.920367 master-0 kubenswrapper[4038]: I0312 20:49:34.904728 4038 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 12 20:49:34.920367 master-0 kubenswrapper[4038]: I0312 20:49:34.904948 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-269gt" Mar 12 20:49:34.920367 master-0 kubenswrapper[4038]: I0312 20:49:34.904949 4038 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 12 20:49:34.920367 master-0 kubenswrapper[4038]: I0312 20:49:34.904997 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 12 20:49:34.920367 master-0 kubenswrapper[4038]: I0312 20:49:34.906091 4038 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" Mar 12 20:49:34.920367 master-0 kubenswrapper[4038]: I0312 20:49:34.906236 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9"] Mar 12 20:49:34.920367 master-0 kubenswrapper[4038]: I0312 20:49:34.906471 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-f2kg4"] Mar 12 20:49:34.920367 master-0 kubenswrapper[4038]: I0312 20:49:34.906571 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 12 20:49:34.920367 master-0 kubenswrapper[4038]: I0312 20:49:34.906790 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-677db989d6-qpf68"] Mar 12 20:49:34.920367 master-0 kubenswrapper[4038]: I0312 20:49:34.907051 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-56nzk" Mar 12 20:49:34.920367 master-0 kubenswrapper[4038]: I0312 20:49:34.907200 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5"] Mar 12 20:49:34.920367 master-0 kubenswrapper[4038]: I0312 20:49:34.907347 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" Mar 12 20:49:34.920367 master-0 kubenswrapper[4038]: I0312 20:49:34.907441 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-98j9w"] Mar 12 20:49:34.920367 master-0 kubenswrapper[4038]: I0312 20:49:34.907529 4038 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" Mar 12 20:49:34.920367 master-0 kubenswrapper[4038]: I0312 20:49:34.907560 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5" Mar 12 20:49:34.920367 master-0 kubenswrapper[4038]: I0312 20:49:34.907574 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-f2kg4" Mar 12 20:49:34.920367 master-0 kubenswrapper[4038]: I0312 20:49:34.908371 4038 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4"] Mar 12 20:49:34.920367 master-0 kubenswrapper[4038]: I0312 20:49:34.908451 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-98j9w" Mar 12 20:49:34.920367 master-0 kubenswrapper[4038]: I0312 20:49:34.909171 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 12 20:49:34.920367 master-0 kubenswrapper[4038]: I0312 20:49:34.914878 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 12 20:49:34.920367 master-0 kubenswrapper[4038]: I0312 20:49:34.915032 4038 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 12 20:49:34.920367 master-0 kubenswrapper[4038]: I0312 20:49:34.918211 4038 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt"] Mar 12 20:49:34.920367 master-0 kubenswrapper[4038]: I0312 20:49:34.918273 4038 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk"] Mar 12 20:49:34.920367 master-0 kubenswrapper[4038]: I0312 20:49:34.920183 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 12 20:49:34.920367 master-0 kubenswrapper[4038]: I0312 20:49:34.920197 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 12 20:49:34.924998 master-0 kubenswrapper[4038]: I0312 20:49:34.924784 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 12 20:49:34.925108 master-0 kubenswrapper[4038]: I0312 20:49:34.925084 4038 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 12 20:49:34.925626 master-0 kubenswrapper[4038]: I0312 20:49:34.925224 4038 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 12 20:49:34.925626 master-0 kubenswrapper[4038]: I0312 20:49:34.925370 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 12 20:49:34.925626 master-0 kubenswrapper[4038]: I0312 20:49:34.925517 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 12 20:49:34.925742 master-0 kubenswrapper[4038]: I0312 20:49:34.925635 4038 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 12 20:49:34.925824 master-0 kubenswrapper[4038]: I0312 20:49:34.925780 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 12 20:49:34.926065 master-0 kubenswrapper[4038]: I0312 20:49:34.926024 4038 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 12 20:49:34.926218 master-0 kubenswrapper[4038]: I0312 20:49:34.926191 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 12 20:49:34.926353 master-0 kubenswrapper[4038]: I0312 20:49:34.926334 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 12 20:49:34.926493 master-0 kubenswrapper[4038]: I0312 20:49:34.926469 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 12 20:49:34.944142 master-0 kubenswrapper[4038]: I0312 20:49:34.930116 4038 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-jwthf"] Mar 12 20:49:34.944142 master-0 kubenswrapper[4038]: I0312 20:49:34.930286 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 12 20:49:34.944142 master-0 kubenswrapper[4038]: I0312 20:49:34.930436 4038 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 12 20:49:34.944142 master-0 kubenswrapper[4038]: I0312 20:49:34.930860 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 12 20:49:34.944142 master-0 kubenswrapper[4038]: I0312 20:49:34.930994 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 12 20:49:34.944142 master-0 kubenswrapper[4038]: I0312 20:49:34.931088 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" 
Mar 12 20:49:34.944142 master-0 kubenswrapper[4038]: I0312 20:49:34.931194 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 12 20:49:34.944142 master-0 kubenswrapper[4038]: I0312 20:49:34.931230 4038 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 12 20:49:34.944142 master-0 kubenswrapper[4038]: I0312 20:49:34.931282 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 12 20:49:34.944142 master-0 kubenswrapper[4038]: I0312 20:49:34.931366 4038 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 12 20:49:34.944142 master-0 kubenswrapper[4038]: I0312 20:49:34.920176 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 12 20:49:34.944142 master-0 kubenswrapper[4038]: I0312 20:49:34.931434 4038 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 12 20:49:34.944142 master-0 kubenswrapper[4038]: I0312 20:49:34.931576 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 12 20:49:34.944142 master-0 kubenswrapper[4038]: I0312 20:49:34.931590 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 12 20:49:34.944142 master-0 kubenswrapper[4038]: I0312 20:49:34.931707 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 12 20:49:34.944142 master-0 kubenswrapper[4038]: I0312 20:49:34.931727 4038 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 12 20:49:34.944142 master-0 kubenswrapper[4038]: I0312 20:49:34.931867 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 12 20:49:34.944142 master-0 kubenswrapper[4038]: I0312 20:49:34.931900 4038 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 12 20:49:34.944142 master-0 kubenswrapper[4038]: I0312 20:49:34.931969 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 12 20:49:34.944142 master-0 kubenswrapper[4038]: I0312 20:49:34.932076 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 12 20:49:34.944142 master-0 kubenswrapper[4038]: I0312 20:49:34.932171 4038 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 12 20:49:34.944142 master-0 kubenswrapper[4038]: I0312 20:49:34.932768 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 12 20:49:34.944142 master-0 kubenswrapper[4038]: I0312 20:49:34.932903 4038 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 12 20:49:34.944142 master-0 kubenswrapper[4038]: I0312 20:49:34.932998 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 12 20:49:34.944142 master-0 kubenswrapper[4038]: I0312 20:49:34.934142 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 12 20:49:34.944142 master-0 kubenswrapper[4038]: I0312 20:49:34.934374 4038 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 12 20:49:34.944142 master-0 kubenswrapper[4038]: I0312 20:49:34.934969 4038 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 12 20:49:34.944142 master-0 kubenswrapper[4038]: I0312 20:49:34.935966 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 12 20:49:34.944142 master-0 kubenswrapper[4038]: I0312 20:49:34.936482 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 12 20:49:34.944142 master-0 kubenswrapper[4038]: I0312 20:49:34.938339 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 12 20:49:34.944142 master-0 kubenswrapper[4038]: I0312 20:49:34.939244 4038 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 12 20:49:34.946865 master-0 kubenswrapper[4038]: I0312 20:49:34.946773 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 12 20:49:34.947154 master-0 kubenswrapper[4038]: I0312 20:49:34.947123 4038 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw"] Mar 12 20:49:34.947794 master-0 kubenswrapper[4038]: I0312 20:49:34.947752 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 12 20:49:34.949941 master-0 kubenswrapper[4038]: I0312 20:49:34.949655 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 12 20:49:34.955195 master-0 kubenswrapper[4038]: I0312 20:49:34.955150 4038 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-service-ca-operator/service-ca-operator-69b6fc6b88-f62j6"] Mar 12 20:49:34.955265 master-0 kubenswrapper[4038]: I0312 20:49:34.955210 4038 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx"] Mar 12 20:49:34.955265 master-0 kubenswrapper[4038]: I0312 20:49:34.955227 4038 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-kf949"] Mar 12 20:49:34.955265 master-0 kubenswrapper[4038]: I0312 20:49:34.955242 4038 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9"] Mar 12 20:49:34.955265 master-0 kubenswrapper[4038]: I0312 20:49:34.955258 4038 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-krpjj"] Mar 12 20:49:34.955918 master-0 kubenswrapper[4038]: I0312 20:49:34.955886 4038 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-krpjj" Mar 12 20:49:34.956527 master-0 kubenswrapper[4038]: I0312 20:49:34.956496 4038 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-56nzk"] Mar 12 20:49:34.956977 master-0 kubenswrapper[4038]: I0312 20:49:34.956927 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 12 20:49:34.957131 master-0 kubenswrapper[4038]: I0312 20:49:34.957095 4038 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8"] Mar 12 20:49:34.957528 master-0 kubenswrapper[4038]: I0312 20:49:34.957492 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 12 20:49:34.957822 master-0 kubenswrapper[4038]: I0312 20:49:34.957776 4038 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9"] Mar 12 20:49:34.958500 master-0 kubenswrapper[4038]: I0312 20:49:34.958466 4038 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-269gt"] Mar 12 20:49:34.959208 master-0 kubenswrapper[4038]: I0312 20:49:34.959168 4038 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-64488f9d78-zsd76"] Mar 12 20:49:34.959944 master-0 kubenswrapper[4038]: I0312 20:49:34.959910 4038 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5"] Mar 12 20:49:34.960710 master-0 kubenswrapper[4038]: I0312 20:49:34.960671 4038 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-589895fbb7-tvrxp"] Mar 12 20:49:34.961427 master-0 kubenswrapper[4038]: I0312 
20:49:34.961354 4038 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-qfbrj"] Mar 12 20:49:34.962108 master-0 kubenswrapper[4038]: I0312 20:49:34.962070 4038 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-677db989d6-qpf68"] Mar 12 20:49:34.962800 master-0 kubenswrapper[4038]: I0312 20:49:34.962767 4038 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh"] Mar 12 20:49:34.963482 master-0 kubenswrapper[4038]: I0312 20:49:34.963447 4038 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-vp2hs"] Mar 12 20:49:34.965858 master-0 kubenswrapper[4038]: I0312 20:49:34.965798 4038 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-f2kg4"] Mar 12 20:49:34.968012 master-0 kubenswrapper[4038]: I0312 20:49:34.967981 4038 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-98j9w"] Mar 12 20:49:34.994510 master-0 kubenswrapper[4038]: I0312 20:49:34.994458 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhcsd\" (UniqueName: \"kubernetes.io/projected/07330030-487d-4fa6-b5c3-67607355bbba-kube-api-access-bhcsd\") pod \"olm-operator-d64cfc9db-q9hnk\" (UID: \"07330030-487d-4fa6-b5c3-67607355bbba\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk" Mar 12 20:49:34.994624 master-0 kubenswrapper[4038]: I0312 20:49:34.994554 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5c6t\" (UniqueName: 
\"kubernetes.io/projected/e624e623-6d59-444d-b548-165fa5fd2581-kube-api-access-c5c6t\") pod \"marketplace-operator-64bf9778cb-hxqgw\" (UID: \"e624e623-6d59-444d-b548-165fa5fd2581\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" Mar 12 20:49:34.994624 master-0 kubenswrapper[4038]: I0312 20:49:34.994602 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/226cb3a1-984f-4410-96e6-c007131dc074-operand-assets\") pod \"cluster-olm-operator-77899cf6d-kbwlh\" (UID: \"226cb3a1-984f-4410-96e6-c007131dc074\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh" Mar 12 20:49:34.994686 master-0 kubenswrapper[4038]: I0312 20:49:34.994644 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07542516-49c8-4e20-9b97-798fbff850a5-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-qfbrj\" (UID: \"07542516-49c8-4e20-9b97-798fbff850a5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-qfbrj" Mar 12 20:49:34.994717 master-0 kubenswrapper[4038]: I0312 20:49:34.994698 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbbc5\" (UniqueName: \"kubernetes.io/projected/15ebfbd8-0782-431a-88a3-83af328498d2-kube-api-access-mbbc5\") pod \"openshift-apiserver-operator-799b6db4d7-jwthf\" (UID: \"15ebfbd8-0782-431a-88a3-83af328498d2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-jwthf" Mar 12 20:49:34.994769 master-0 kubenswrapper[4038]: I0312 20:49:34.994737 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert\") pod 
\"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" Mar 12 20:49:34.994834 master-0 kubenswrapper[4038]: I0312 20:49:34.994788 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3bebf49-1d92-4353-b84c-91ed86b7bb94-config\") pod \"authentication-operator-7c6989d6c4-9j7rx\" (UID: \"a3bebf49-1d92-4353-b84c-91ed86b7bb94\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" Mar 12 20:49:34.994890 master-0 kubenswrapper[4038]: I0312 20:49:34.994861 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" Mar 12 20:49:34.994937 master-0 kubenswrapper[4038]: I0312 20:49:34.994911 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert\") pod \"catalog-operator-7d9c49f57b-tpvl4\" (UID: \"98d99166-c42a-4169-87e8-4209570aec50\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4" Mar 12 20:49:34.994983 master-0 kubenswrapper[4038]: I0312 20:49:34.994957 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lltk\" (UniqueName: \"kubernetes.io/projected/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-kube-api-access-2lltk\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" Mar 12 20:49:34.995031 master-0 kubenswrapper[4038]: I0312 20:49:34.995003 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9xld\" (UniqueName: \"kubernetes.io/projected/07542516-49c8-4e20-9b97-798fbff850a5-kube-api-access-z9xld\") pod \"kube-storage-version-migrator-operator-7f65c457f5-qfbrj\" (UID: \"07542516-49c8-4e20-9b97-798fbff850a5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-qfbrj" Mar 12 20:49:34.995063 master-0 kubenswrapper[4038]: I0312 20:49:34.995048 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15ebfbd8-0782-431a-88a3-83af328498d2-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-jwthf\" (UID: \"15ebfbd8-0782-431a-88a3-83af328498d2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-jwthf" Mar 12 20:49:34.995112 master-0 kubenswrapper[4038]: I0312 20:49:34.995086 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert\") pod \"olm-operator-d64cfc9db-q9hnk\" (UID: \"07330030-487d-4fa6-b5c3-67607355bbba\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk" Mar 12 20:49:34.995165 master-0 kubenswrapper[4038]: I0312 20:49:34.995133 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-cdcc8\" (UID: \"54184647-6e9a-43f7-90b1-5d8815f8b1ab\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8" Mar 12 20:49:34.995209 master-0 kubenswrapper[4038]: I0312 20:49:34.995184 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3bebf49-1d92-4353-b84c-91ed86b7bb94-serving-cert\") pod \"authentication-operator-7c6989d6c4-9j7rx\" (UID: \"a3bebf49-1d92-4353-b84c-91ed86b7bb94\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" Mar 12 20:49:34.995242 master-0 kubenswrapper[4038]: I0312 20:49:34.995225 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-hxqgw\" (UID: \"e624e623-6d59-444d-b548-165fa5fd2581\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" Mar 12 20:49:34.995277 master-0 kubenswrapper[4038]: I0312 20:49:34.995263 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3bebf49-1d92-4353-b84c-91ed86b7bb94-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-9j7rx\" (UID: \"a3bebf49-1d92-4353-b84c-91ed86b7bb94\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" Mar 12 20:49:34.995330 master-0 kubenswrapper[4038]: I0312 20:49:34.995300 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w68c\" (UniqueName: \"kubernetes.io/projected/a3bebf49-1d92-4353-b84c-91ed86b7bb94-kube-api-access-2w68c\") pod \"authentication-operator-7c6989d6c4-9j7rx\" (UID: \"a3bebf49-1d92-4353-b84c-91ed86b7bb94\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" Mar 12 20:49:34.995411 master-0 
kubenswrapper[4038]: I0312 20:49:34.995379 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls\") pod \"dns-operator-589895fbb7-tvrxp\" (UID: \"855747e5-d9b4-4eef-8bc4-425d6a8e95c7\") " pod="openshift-dns-operator/dns-operator-589895fbb7-tvrxp" Mar 12 20:49:34.995459 master-0 kubenswrapper[4038]: I0312 20:49:34.995433 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07542516-49c8-4e20-9b97-798fbff850a5-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-qfbrj\" (UID: \"07542516-49c8-4e20-9b97-798fbff850a5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-qfbrj" Mar 12 20:49:34.995491 master-0 kubenswrapper[4038]: I0312 20:49:34.995468 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15ebfbd8-0782-431a-88a3-83af328498d2-config\") pod \"openshift-apiserver-operator-799b6db4d7-jwthf\" (UID: \"15ebfbd8-0782-431a-88a3-83af328498d2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-jwthf" Mar 12 20:49:34.995529 master-0 kubenswrapper[4038]: I0312 20:49:34.995506 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-258hz\" (UniqueName: \"kubernetes.io/projected/98d99166-c42a-4169-87e8-4209570aec50-kube-api-access-258hz\") pod \"catalog-operator-7d9c49f57b-tpvl4\" (UID: \"98d99166-c42a-4169-87e8-4209570aec50\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4" Mar 12 20:49:34.996222 master-0 kubenswrapper[4038]: I0312 20:49:34.995531 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/226cb3a1-984f-4410-96e6-c007131dc074-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-kbwlh\" (UID: \"226cb3a1-984f-4410-96e6-c007131dc074\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh" Mar 12 20:49:34.996222 master-0 kubenswrapper[4038]: I0312 20:49:34.995571 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9z6l\" (UniqueName: \"kubernetes.io/projected/226cb3a1-984f-4410-96e6-c007131dc074-kube-api-access-b9z6l\") pod \"cluster-olm-operator-77899cf6d-kbwlh\" (UID: \"226cb3a1-984f-4410-96e6-c007131dc074\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh" Mar 12 20:49:34.996222 master-0 kubenswrapper[4038]: I0312 20:49:34.995600 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j7lq\" (UniqueName: \"kubernetes.io/projected/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-kube-api-access-6j7lq\") pod \"dns-operator-589895fbb7-tvrxp\" (UID: \"855747e5-d9b4-4eef-8bc4-425d6a8e95c7\") " pod="openshift-dns-operator/dns-operator-589895fbb7-tvrxp" Mar 12 20:49:34.996222 master-0 kubenswrapper[4038]: I0312 20:49:34.995653 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzwrw\" (UniqueName: \"kubernetes.io/projected/54184647-6e9a-43f7-90b1-5d8815f8b1ab-kube-api-access-kzwrw\") pod \"package-server-manager-854648ff6d-cdcc8\" (UID: \"54184647-6e9a-43f7-90b1-5d8815f8b1ab\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8" Mar 12 20:49:34.996222 master-0 kubenswrapper[4038]: I0312 20:49:34.995729 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" Mar 12 20:49:34.996222 master-0 kubenswrapper[4038]: I0312 20:49:34.995791 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5v9f\" (UniqueName: \"kubernetes.io/projected/02649264-040a-41a6-9a41-8bf6416c68ff-kube-api-access-k5v9f\") pod \"cluster-monitoring-operator-674cbfbd9d-j9tpt\" (UID: \"02649264-040a-41a6-9a41-8bf6416c68ff\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt" Mar 12 20:49:34.996222 master-0 kubenswrapper[4038]: I0312 20:49:34.995845 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-hxqgw\" (UID: \"e624e623-6d59-444d-b548-165fa5fd2581\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" Mar 12 20:49:34.996222 master-0 kubenswrapper[4038]: I0312 20:49:34.995938 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/02649264-040a-41a6-9a41-8bf6416c68ff-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-j9tpt\" (UID: \"02649264-040a-41a6-9a41-8bf6416c68ff\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt" Mar 12 20:49:34.996222 master-0 kubenswrapper[4038]: I0312 20:49:34.995986 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3bebf49-1d92-4353-b84c-91ed86b7bb94-trusted-ca-bundle\") pod 
\"authentication-operator-7c6989d6c4-9j7rx\" (UID: \"a3bebf49-1d92-4353-b84c-91ed86b7bb94\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" Mar 12 20:49:34.996222 master-0 kubenswrapper[4038]: I0312 20:49:34.996033 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clp9l\" (UniqueName: \"kubernetes.io/projected/2604b035-853c-42b7-a562-07d46178868a-kube-api-access-clp9l\") pod \"csi-snapshot-controller-operator-5685fbc7d-kf949\" (UID: \"2604b035-853c-42b7-a562-07d46178868a\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-kf949" Mar 12 20:49:34.996222 master-0 kubenswrapper[4038]: I0312 20:49:34.996057 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-j9tpt\" (UID: \"02649264-040a-41a6-9a41-8bf6416c68ff\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt" Mar 12 20:49:35.096681 master-0 kubenswrapper[4038]: I0312 20:49:35.096626 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert\") pod \"catalog-operator-7d9c49f57b-tpvl4\" (UID: \"98d99166-c42a-4169-87e8-4209570aec50\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4" Mar 12 20:49:35.096759 master-0 kubenswrapper[4038]: I0312 20:49:35.096683 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lltk\" (UniqueName: \"kubernetes.io/projected/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-kube-api-access-2lltk\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" Mar 12 20:49:35.096898 master-0 kubenswrapper[4038]: E0312 20:49:35.096862 4038 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 12 20:49:35.096933 master-0 kubenswrapper[4038]: I0312 20:49:35.096902 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2r2r\" (UniqueName: \"kubernetes.io/projected/617f0f9c-50d5-4214-b30f-5110fd4399ec-kube-api-access-f2r2r\") pod \"iptables-alerter-krpjj\" (UID: \"617f0f9c-50d5-4214-b30f-5110fd4399ec\") " pod="openshift-network-operator/iptables-alerter-krpjj" Mar 12 20:49:35.096973 master-0 kubenswrapper[4038]: E0312 20:49:35.096945 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert podName:98d99166-c42a-4169-87e8-4209570aec50 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:35.59692302 +0000 UTC m=+113.632604883 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert") pod "catalog-operator-7d9c49f57b-tpvl4" (UID: "98d99166-c42a-4169-87e8-4209570aec50") : secret "catalog-operator-serving-cert" not found Mar 12 20:49:35.097009 master-0 kubenswrapper[4038]: I0312 20:49:35.096979 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert\") pod \"olm-operator-d64cfc9db-q9hnk\" (UID: \"07330030-487d-4fa6-b5c3-67607355bbba\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk" Mar 12 20:49:35.097036 master-0 kubenswrapper[4038]: I0312 20:49:35.097018 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9xld\" (UniqueName: \"kubernetes.io/projected/07542516-49c8-4e20-9b97-798fbff850a5-kube-api-access-z9xld\") pod \"kube-storage-version-migrator-operator-7f65c457f5-qfbrj\" (UID: \"07542516-49c8-4e20-9b97-798fbff850a5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-qfbrj" Mar 12 20:49:35.097064 master-0 kubenswrapper[4038]: I0312 20:49:35.097042 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15ebfbd8-0782-431a-88a3-83af328498d2-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-jwthf\" (UID: \"15ebfbd8-0782-431a-88a3-83af328498d2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-jwthf" Mar 12 20:49:35.097098 master-0 kubenswrapper[4038]: E0312 20:49:35.097073 4038 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 12 20:49:35.097215 master-0 kubenswrapper[4038]: E0312 20:49:35.097190 4038 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert podName:07330030-487d-4fa6-b5c3-67607355bbba nodeName:}" failed. No retries permitted until 2026-03-12 20:49:35.597170446 +0000 UTC m=+113.632852309 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert") pod "olm-operator-d64cfc9db-q9hnk" (UID: "07330030-487d-4fa6-b5c3-67607355bbba") : secret "olm-operator-serving-cert" not found Mar 12 20:49:35.097249 master-0 kubenswrapper[4038]: I0312 20:49:35.097228 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5471994f-769e-4124-b7d0-01f5358fc18f-etcd-client\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" Mar 12 20:49:35.097277 master-0 kubenswrapper[4038]: I0312 20:49:35.097253 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5471994f-769e-4124-b7d0-01f5358fc18f-config\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" Mar 12 20:49:35.097325 master-0 kubenswrapper[4038]: I0312 20:49:35.097303 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-cdcc8\" (UID: \"54184647-6e9a-43f7-90b1-5d8815f8b1ab\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8" Mar 12 20:49:35.097773 master-0 kubenswrapper[4038]: I0312 20:49:35.097740 4038 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3bebf49-1d92-4353-b84c-91ed86b7bb94-serving-cert\") pod \"authentication-operator-7c6989d6c4-9j7rx\" (UID: \"a3bebf49-1d92-4353-b84c-91ed86b7bb94\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" Mar 12 20:49:35.099076 master-0 kubenswrapper[4038]: E0312 20:49:35.098503 4038 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 12 20:49:35.099076 master-0 kubenswrapper[4038]: E0312 20:49:35.098590 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert podName:54184647-6e9a-43f7-90b1-5d8815f8b1ab nodeName:}" failed. No retries permitted until 2026-03-12 20:49:35.59857309 +0000 UTC m=+113.634254963 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-cdcc8" (UID: "54184647-6e9a-43f7-90b1-5d8815f8b1ab") : secret "package-server-manager-serving-cert" not found Mar 12 20:49:35.099152 master-0 kubenswrapper[4038]: I0312 20:49:35.098163 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-hxqgw\" (UID: \"e624e623-6d59-444d-b548-165fa5fd2581\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" Mar 12 20:49:35.099152 master-0 kubenswrapper[4038]: I0312 20:49:35.099142 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/784599a3-a2ac-46ac-a4b7-9439704646cc-serving-cert\") pod \"kube-apiserver-operator-68bd585b-56nzk\" (UID: \"784599a3-a2ac-46ac-a4b7-9439704646cc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-56nzk" Mar 12 20:49:35.099350 master-0 kubenswrapper[4038]: I0312 20:49:35.099316 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-hxqgw\" (UID: \"e624e623-6d59-444d-b548-165fa5fd2581\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" Mar 12 20:49:35.099662 master-0 kubenswrapper[4038]: I0312 20:49:35.099604 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/617f0f9c-50d5-4214-b30f-5110fd4399ec-host-slash\") pod \"iptables-alerter-krpjj\" (UID: \"617f0f9c-50d5-4214-b30f-5110fd4399ec\") " pod="openshift-network-operator/iptables-alerter-krpjj" Mar 12 20:49:35.099773 master-0 kubenswrapper[4038]: I0312 20:49:35.099750 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-269gt\" (UID: \"4a67ecf3-823d-4948-a5cb-8bd1eb9f259c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-269gt" Mar 12 20:49:35.099844 master-0 kubenswrapper[4038]: I0312 20:49:35.099781 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjh5f\" (UniqueName: \"kubernetes.io/projected/f8f4400c-474c-480f-b46c-cf7c80555004-kube-api-access-vjh5f\") pod \"multus-admission-controller-8d675b596-98j9w\" (UID: \"f8f4400c-474c-480f-b46c-cf7c80555004\") " 
pod="openshift-multus/multus-admission-controller-8d675b596-98j9w" Mar 12 20:49:35.099844 master-0 kubenswrapper[4038]: I0312 20:49:35.099822 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3bebf49-1d92-4353-b84c-91ed86b7bb94-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-9j7rx\" (UID: \"a3bebf49-1d92-4353-b84c-91ed86b7bb94\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" Mar 12 20:49:35.099904 master-0 kubenswrapper[4038]: I0312 20:49:35.099843 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5471994f-769e-4124-b7d0-01f5358fc18f-serving-cert\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" Mar 12 20:49:35.099904 master-0 kubenswrapper[4038]: I0312 20:49:35.099865 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w68c\" (UniqueName: \"kubernetes.io/projected/a3bebf49-1d92-4353-b84c-91ed86b7bb94-kube-api-access-2w68c\") pod \"authentication-operator-7c6989d6c4-9j7rx\" (UID: \"a3bebf49-1d92-4353-b84c-91ed86b7bb94\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" Mar 12 20:49:35.099904 master-0 kubenswrapper[4038]: I0312 20:49:35.099887 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/784599a3-a2ac-46ac-a4b7-9439704646cc-config\") pod \"kube-apiserver-operator-68bd585b-56nzk\" (UID: \"784599a3-a2ac-46ac-a4b7-9439704646cc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-56nzk" Mar 12 20:49:35.099904 master-0 kubenswrapper[4038]: I0312 20:49:35.099904 4038 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5471994f-769e-4124-b7d0-01f5358fc18f-etcd-ca\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" Mar 12 20:49:35.100049 master-0 kubenswrapper[4038]: I0312 20:49:35.099940 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls\") pod \"dns-operator-589895fbb7-tvrxp\" (UID: \"855747e5-d9b4-4eef-8bc4-425d6a8e95c7\") " pod="openshift-dns-operator/dns-operator-589895fbb7-tvrxp" Mar 12 20:49:35.100049 master-0 kubenswrapper[4038]: I0312 20:49:35.099962 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07542516-49c8-4e20-9b97-798fbff850a5-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-qfbrj\" (UID: \"07542516-49c8-4e20-9b97-798fbff850a5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-qfbrj" Mar 12 20:49:35.100049 master-0 kubenswrapper[4038]: I0312 20:49:35.099982 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15ebfbd8-0782-431a-88a3-83af328498d2-config\") pod \"openshift-apiserver-operator-799b6db4d7-jwthf\" (UID: \"15ebfbd8-0782-431a-88a3-83af328498d2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-jwthf" Mar 12 20:49:35.100049 master-0 kubenswrapper[4038]: I0312 20:49:35.100001 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7rrv\" (UniqueName: \"kubernetes.io/projected/5471994f-769e-4124-b7d0-01f5358fc18f-kube-api-access-f7rrv\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: 
\"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" Mar 12 20:49:35.100049 master-0 kubenswrapper[4038]: I0312 20:49:35.100020 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7623a5c6-47a9-4b75-bda8-c0a2d7c67272-config\") pod \"openshift-controller-manager-operator-8565d84698-vp2hs\" (UID: \"7623a5c6-47a9-4b75-bda8-c0a2d7c67272\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-vp2hs" Mar 12 20:49:35.100049 master-0 kubenswrapper[4038]: I0312 20:49:35.100049 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-269gt\" (UID: \"4a67ecf3-823d-4948-a5cb-8bd1eb9f259c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-269gt" Mar 12 20:49:35.100199 master-0 kubenswrapper[4038]: I0312 20:49:35.100068 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vvf6\" (UniqueName: \"kubernetes.io/projected/2b71f537-1cc2-4645-8e50-23941635457c-kube-api-access-8vvf6\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" Mar 12 20:49:35.100199 master-0 kubenswrapper[4038]: I0312 20:49:35.100102 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-258hz\" (UniqueName: \"kubernetes.io/projected/98d99166-c42a-4169-87e8-4209570aec50-kube-api-access-258hz\") pod \"catalog-operator-7d9c49f57b-tpvl4\" (UID: \"98d99166-c42a-4169-87e8-4209570aec50\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4" Mar 12 
20:49:35.100199 master-0 kubenswrapper[4038]: I0312 20:49:35.100173 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/226cb3a1-984f-4410-96e6-c007131dc074-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-kbwlh\" (UID: \"226cb3a1-984f-4410-96e6-c007131dc074\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh" Mar 12 20:49:35.100366 master-0 kubenswrapper[4038]: I0312 20:49:35.100290 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/784599a3-a2ac-46ac-a4b7-9439704646cc-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-56nzk\" (UID: \"784599a3-a2ac-46ac-a4b7-9439704646cc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-56nzk" Mar 12 20:49:35.100366 master-0 kubenswrapper[4038]: I0312 20:49:35.100320 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d-serving-cert\") pod \"service-ca-operator-69b6fc6b88-f62j6\" (UID: \"a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-f62j6" Mar 12 20:49:35.100430 master-0 kubenswrapper[4038]: I0312 20:49:35.100373 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wt5q\" (UniqueName: \"kubernetes.io/projected/980191fe-c62c-4b9e-879c-38fa8ce0a58b-kube-api-access-2wt5q\") pod \"openshift-config-operator-64488f9d78-zsd76\" (UID: \"980191fe-c62c-4b9e-879c-38fa8ce0a58b\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" Mar 12 20:49:35.100430 master-0 kubenswrapper[4038]: I0312 20:49:35.100400 4038 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b71f537-1cc2-4645-8e50-23941635457c-trusted-ca\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" Mar 12 20:49:35.100535 master-0 kubenswrapper[4038]: E0312 20:49:35.100476 4038 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 12 20:49:35.100569 master-0 kubenswrapper[4038]: E0312 20:49:35.100548 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls podName:855747e5-d9b4-4eef-8bc4-425d6a8e95c7 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:35.600516278 +0000 UTC m=+113.636198141 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls") pod "dns-operator-589895fbb7-tvrxp" (UID: "855747e5-d9b4-4eef-8bc4-425d6a8e95c7") : secret "metrics-tls" not found Mar 12 20:49:35.100616 master-0 kubenswrapper[4038]: I0312 20:49:35.100596 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9z6l\" (UniqueName: \"kubernetes.io/projected/226cb3a1-984f-4410-96e6-c007131dc074-kube-api-access-b9z6l\") pod \"cluster-olm-operator-77899cf6d-kbwlh\" (UID: \"226cb3a1-984f-4410-96e6-c007131dc074\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh" Mar 12 20:49:35.100648 master-0 kubenswrapper[4038]: I0312 20:49:35.100624 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d-config\") pod \"service-ca-operator-69b6fc6b88-f62j6\" (UID: \"a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d\") " 
pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-f62j6"
Mar 12 20:49:35.100648 master-0 kubenswrapper[4038]: I0312 20:49:35.100644 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs\") pod \"multus-admission-controller-8d675b596-98j9w\" (UID: \"f8f4400c-474c-480f-b46c-cf7c80555004\") " pod="openshift-multus/multus-admission-controller-8d675b596-98j9w"
Mar 12 20:49:35.100707 master-0 kubenswrapper[4038]: I0312 20:49:35.100687 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j7lq\" (UniqueName: \"kubernetes.io/projected/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-kube-api-access-6j7lq\") pod \"dns-operator-589895fbb7-tvrxp\" (UID: \"855747e5-d9b4-4eef-8bc4-425d6a8e95c7\") " pod="openshift-dns-operator/dns-operator-589895fbb7-tvrxp"
Mar 12 20:49:35.100735 master-0 kubenswrapper[4038]: I0312 20:49:35.100709 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/900228dd-2d21-4759-87da-b027b0134ad8-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5"
Mar 12 20:49:35.100763 master-0 kubenswrapper[4038]: I0312 20:49:35.100731 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzwrw\" (UniqueName: \"kubernetes.io/projected/54184647-6e9a-43f7-90b1-5d8815f8b1ab-kube-api-access-kzwrw\") pod \"package-server-manager-854648ff6d-cdcc8\" (UID: \"54184647-6e9a-43f7-90b1-5d8815f8b1ab\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8"
Mar 12 20:49:35.100790 master-0 kubenswrapper[4038]: I0312 20:49:35.100772 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7623a5c6-47a9-4b75-bda8-c0a2d7c67272-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-vp2hs\" (UID: \"7623a5c6-47a9-4b75-bda8-c0a2d7c67272\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-vp2hs"
Mar 12 20:49:35.100865 master-0 kubenswrapper[4038]: I0312 20:49:35.100795 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9"
Mar 12 20:49:35.100865 master-0 kubenswrapper[4038]: I0312 20:49:35.100848 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/02649264-040a-41a6-9a41-8bf6416c68ff-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-j9tpt\" (UID: \"02649264-040a-41a6-9a41-8bf6416c68ff\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt"
Mar 12 20:49:35.100920 master-0 kubenswrapper[4038]: I0312 20:49:35.100867 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5v9f\" (UniqueName: \"kubernetes.io/projected/02649264-040a-41a6-9a41-8bf6416c68ff-kube-api-access-k5v9f\") pod \"cluster-monitoring-operator-674cbfbd9d-j9tpt\" (UID: \"02649264-040a-41a6-9a41-8bf6416c68ff\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt"
Mar 12 20:49:35.100920 master-0 kubenswrapper[4038]: I0312 20:49:35.100906 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-hxqgw\" (UID: \"e624e623-6d59-444d-b548-165fa5fd2581\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw"
Mar 12 20:49:35.100972 master-0 kubenswrapper[4038]: I0312 20:49:35.100926 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96bd86df-2101-47f5-844b-1332261c66f1-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-f2kg4\" (UID: \"96bd86df-2101-47f5-844b-1332261c66f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-f2kg4"
Mar 12 20:49:35.100972 master-0 kubenswrapper[4038]: I0312 20:49:35.100948 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2b71f537-1cc2-4645-8e50-23941635457c-bound-sa-token\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68"
Mar 12 20:49:35.101026 master-0 kubenswrapper[4038]: I0312 20:49:35.101000 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3bebf49-1d92-4353-b84c-91ed86b7bb94-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-9j7rx\" (UID: \"a3bebf49-1d92-4353-b84c-91ed86b7bb94\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx"
Mar 12 20:49:35.101054 master-0 kubenswrapper[4038]: I0312 20:49:35.101035 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5471994f-769e-4124-b7d0-01f5358fc18f-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9"
Mar 12 20:49:35.101146 master-0 kubenswrapper[4038]: I0312 20:49:35.101085 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clp9l\" (UniqueName: \"kubernetes.io/projected/2604b035-853c-42b7-a562-07d46178868a-kube-api-access-clp9l\") pod \"csi-snapshot-controller-operator-5685fbc7d-kf949\" (UID: \"2604b035-853c-42b7-a562-07d46178868a\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-kf949"
Mar 12 20:49:35.101146 master-0 kubenswrapper[4038]: I0312 20:49:35.101110 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/980191fe-c62c-4b9e-879c-38fa8ce0a58b-available-featuregates\") pod \"openshift-config-operator-64488f9d78-zsd76\" (UID: \"980191fe-c62c-4b9e-879c-38fa8ce0a58b\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76"
Mar 12 20:49:35.101219 master-0 kubenswrapper[4038]: I0312 20:49:35.101151 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-j9tpt\" (UID: \"02649264-040a-41a6-9a41-8bf6416c68ff\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt"
Mar 12 20:49:35.101219 master-0 kubenswrapper[4038]: I0312 20:49:35.101175 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhcsd\" (UniqueName: \"kubernetes.io/projected/07330030-487d-4fa6-b5c3-67607355bbba-kube-api-access-bhcsd\") pod \"olm-operator-d64cfc9db-q9hnk\" (UID: \"07330030-487d-4fa6-b5c3-67607355bbba\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk"
Mar 12 20:49:35.101269 master-0 kubenswrapper[4038]: I0312 20:49:35.101200 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5"
Mar 12 20:49:35.103536 master-0 kubenswrapper[4038]: I0312 20:49:35.101266 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-577p4\" (UniqueName: \"kubernetes.io/projected/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d-kube-api-access-577p4\") pod \"service-ca-operator-69b6fc6b88-f62j6\" (UID: \"a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-f62j6"
Mar 12 20:49:35.103536 master-0 kubenswrapper[4038]: I0312 20:49:35.101585 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/617f0f9c-50d5-4214-b30f-5110fd4399ec-iptables-alerter-script\") pod \"iptables-alerter-krpjj\" (UID: \"617f0f9c-50d5-4214-b30f-5110fd4399ec\") " pod="openshift-network-operator/iptables-alerter-krpjj"
Mar 12 20:49:35.103536 master-0 kubenswrapper[4038]: I0312 20:49:35.101612 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-269gt\" (UID: \"4a67ecf3-823d-4948-a5cb-8bd1eb9f259c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-269gt"
Mar 12 20:49:35.103536 master-0 kubenswrapper[4038]: I0312 20:49:35.101645 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5c6t\" (UniqueName: \"kubernetes.io/projected/e624e623-6d59-444d-b548-165fa5fd2581-kube-api-access-c5c6t\") pod \"marketplace-operator-64bf9778cb-hxqgw\" (UID: \"e624e623-6d59-444d-b548-165fa5fd2581\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw"
Mar 12 20:49:35.103536 master-0 kubenswrapper[4038]: I0312 20:49:35.101666 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/900228dd-2d21-4759-87da-b027b0134ad8-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5"
Mar 12 20:49:35.103536 master-0 kubenswrapper[4038]: I0312 20:49:35.101689 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/226cb3a1-984f-4410-96e6-c007131dc074-operand-assets\") pod \"cluster-olm-operator-77899cf6d-kbwlh\" (UID: \"226cb3a1-984f-4410-96e6-c007131dc074\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh"
Mar 12 20:49:35.103536 master-0 kubenswrapper[4038]: I0312 20:49:35.101710 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07542516-49c8-4e20-9b97-798fbff850a5-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-qfbrj\" (UID: \"07542516-49c8-4e20-9b97-798fbff850a5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-qfbrj"
Mar 12 20:49:35.103536 master-0 kubenswrapper[4038]: I0312 20:49:35.101730 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/980191fe-c62c-4b9e-879c-38fa8ce0a58b-serving-cert\") pod \"openshift-config-operator-64488f9d78-zsd76\" (UID: \"980191fe-c62c-4b9e-879c-38fa8ce0a58b\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76"
Mar 12 20:49:35.103536 master-0 kubenswrapper[4038]: I0312 20:49:35.102062 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15ebfbd8-0782-431a-88a3-83af328498d2-config\") pod \"openshift-apiserver-operator-799b6db4d7-jwthf\" (UID: \"15ebfbd8-0782-431a-88a3-83af328498d2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-jwthf"
Mar 12 20:49:35.103536 master-0 kubenswrapper[4038]: I0312 20:49:35.102382 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3bebf49-1d92-4353-b84c-91ed86b7bb94-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-9j7rx\" (UID: \"a3bebf49-1d92-4353-b84c-91ed86b7bb94\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx"
Mar 12 20:49:35.103536 master-0 kubenswrapper[4038]: I0312 20:49:35.102751 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/02649264-040a-41a6-9a41-8bf6416c68ff-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-j9tpt\" (UID: \"02649264-040a-41a6-9a41-8bf6416c68ff\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt"
Mar 12 20:49:35.103536 master-0 kubenswrapper[4038]: E0312 20:49:35.103272 4038 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 12 20:49:35.103536 master-0 kubenswrapper[4038]: I0312 20:49:35.103474 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9"
Mar 12 20:49:35.104254 master-0 kubenswrapper[4038]: I0312 20:49:35.104223 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3bebf49-1d92-4353-b84c-91ed86b7bb94-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-9j7rx\" (UID: \"a3bebf49-1d92-4353-b84c-91ed86b7bb94\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx"
Mar 12 20:49:35.104882 master-0 kubenswrapper[4038]: E0312 20:49:35.104860 4038 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 12 20:49:35.104935 master-0 kubenswrapper[4038]: E0312 20:49:35.104926 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics podName:e624e623-6d59-444d-b548-165fa5fd2581 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:35.604912488 +0000 UTC m=+113.640594351 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-hxqgw" (UID: "e624e623-6d59-444d-b548-165fa5fd2581") : secret "marketplace-operator-metrics" not found
Mar 12 20:49:35.105023 master-0 kubenswrapper[4038]: I0312 20:49:35.104996 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07542516-49c8-4e20-9b97-798fbff850a5-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-qfbrj\" (UID: \"07542516-49c8-4e20-9b97-798fbff850a5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-qfbrj"
Mar 12 20:49:35.105095 master-0 kubenswrapper[4038]: I0312 20:49:35.105072 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbbc5\" (UniqueName: \"kubernetes.io/projected/15ebfbd8-0782-431a-88a3-83af328498d2-kube-api-access-mbbc5\") pod \"openshift-apiserver-operator-799b6db4d7-jwthf\" (UID: \"15ebfbd8-0782-431a-88a3-83af328498d2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-jwthf"
Mar 12 20:49:35.105134 master-0 kubenswrapper[4038]: I0312 20:49:35.105099 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9"
Mar 12 20:49:35.105228 master-0 kubenswrapper[4038]: I0312 20:49:35.105137 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9"
Mar 12 20:49:35.105228 master-0 kubenswrapper[4038]: E0312 20:49:35.105139 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls podName:02649264-040a-41a6-9a41-8bf6416c68ff nodeName:}" failed. No retries permitted until 2026-03-12 20:49:35.605124974 +0000 UTC m=+113.640806837 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-j9tpt" (UID: "02649264-040a-41a6-9a41-8bf6416c68ff") : secret "cluster-monitoring-operator-tls" not found
Mar 12 20:49:35.105228 master-0 kubenswrapper[4038]: I0312 20:49:35.105173 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96bd86df-2101-47f5-844b-1332261c66f1-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-f2kg4\" (UID: \"96bd86df-2101-47f5-844b-1332261c66f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-f2kg4"
Mar 12 20:49:35.105228 master-0 kubenswrapper[4038]: E0312 20:49:35.105220 4038 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 12 20:49:35.105333 master-0 kubenswrapper[4038]: E0312 20:49:35.105252 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls podName:981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:35.605244677 +0000 UTC m=+113.640926540 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-69rp9" (UID: "981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9") : secret "node-tuning-operator-tls" not found
Mar 12 20:49:35.105333 master-0 kubenswrapper[4038]: I0312 20:49:35.105266 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q78vj\" (UniqueName: \"kubernetes.io/projected/7623a5c6-47a9-4b75-bda8-c0a2d7c67272-kube-api-access-q78vj\") pod \"openshift-controller-manager-operator-8565d84698-vp2hs\" (UID: \"7623a5c6-47a9-4b75-bda8-c0a2d7c67272\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-vp2hs"
Mar 12 20:49:35.105398 master-0 kubenswrapper[4038]: I0312 20:49:35.105372 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3bebf49-1d92-4353-b84c-91ed86b7bb94-config\") pod \"authentication-operator-7c6989d6c4-9j7rx\" (UID: \"a3bebf49-1d92-4353-b84c-91ed86b7bb94\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx"
Mar 12 20:49:35.105428 master-0 kubenswrapper[4038]: I0312 20:49:35.105400 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/96bd86df-2101-47f5-844b-1332261c66f1-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-f2kg4\" (UID: \"96bd86df-2101-47f5-844b-1332261c66f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-f2kg4"
Mar 12 20:49:35.105428 master-0 kubenswrapper[4038]: I0312 20:49:35.105420 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68"
Mar 12 20:49:35.105504 master-0 kubenswrapper[4038]: I0312 20:49:35.105479 4038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvkp7\" (UniqueName: \"kubernetes.io/projected/900228dd-2d21-4759-87da-b027b0134ad8-kube-api-access-rvkp7\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5"
Mar 12 20:49:35.106054 master-0 kubenswrapper[4038]: I0312 20:49:35.106030 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3bebf49-1d92-4353-b84c-91ed86b7bb94-config\") pod \"authentication-operator-7c6989d6c4-9j7rx\" (UID: \"a3bebf49-1d92-4353-b84c-91ed86b7bb94\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx"
Mar 12 20:49:35.107047 master-0 kubenswrapper[4038]: I0312 20:49:35.107018 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3bebf49-1d92-4353-b84c-91ed86b7bb94-serving-cert\") pod \"authentication-operator-7c6989d6c4-9j7rx\" (UID: \"a3bebf49-1d92-4353-b84c-91ed86b7bb94\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx"
Mar 12 20:49:35.112228 master-0 kubenswrapper[4038]: I0312 20:49:35.112182 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/226cb3a1-984f-4410-96e6-c007131dc074-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-kbwlh\" (UID: \"226cb3a1-984f-4410-96e6-c007131dc074\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh"
Mar 12 20:49:35.114456 master-0 kubenswrapper[4038]: I0312 20:49:35.114416 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/226cb3a1-984f-4410-96e6-c007131dc074-operand-assets\") pod \"cluster-olm-operator-77899cf6d-kbwlh\" (UID: \"226cb3a1-984f-4410-96e6-c007131dc074\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh"
Mar 12 20:49:35.114619 master-0 kubenswrapper[4038]: I0312 20:49:35.114584 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07542516-49c8-4e20-9b97-798fbff850a5-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-qfbrj\" (UID: \"07542516-49c8-4e20-9b97-798fbff850a5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-qfbrj"
Mar 12 20:49:35.114700 master-0 kubenswrapper[4038]: E0312 20:49:35.114670 4038 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 12 20:49:35.115917 master-0 kubenswrapper[4038]: I0312 20:49:35.115871 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15ebfbd8-0782-431a-88a3-83af328498d2-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-jwthf\" (UID: \"15ebfbd8-0782-431a-88a3-83af328498d2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-jwthf"
Mar 12 20:49:35.117988 master-0 kubenswrapper[4038]: E0312 20:49:35.117956 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert podName:981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:35.617930002 +0000 UTC m=+113.653611865 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-69rp9" (UID: "981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9") : secret "performance-addon-operator-webhook-cert" not found
Mar 12 20:49:35.122561 master-0 kubenswrapper[4038]: I0312 20:49:35.122533 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9z6l\" (UniqueName: \"kubernetes.io/projected/226cb3a1-984f-4410-96e6-c007131dc074-kube-api-access-b9z6l\") pod \"cluster-olm-operator-77899cf6d-kbwlh\" (UID: \"226cb3a1-984f-4410-96e6-c007131dc074\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh"
Mar 12 20:49:35.124180 master-0 kubenswrapper[4038]: I0312 20:49:35.124153 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w68c\" (UniqueName: \"kubernetes.io/projected/a3bebf49-1d92-4353-b84c-91ed86b7bb94-kube-api-access-2w68c\") pod \"authentication-operator-7c6989d6c4-9j7rx\" (UID: \"a3bebf49-1d92-4353-b84c-91ed86b7bb94\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx"
Mar 12 20:49:35.125986 master-0 kubenswrapper[4038]: I0312 20:49:35.125949 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-258hz\" (UniqueName: \"kubernetes.io/projected/98d99166-c42a-4169-87e8-4209570aec50-kube-api-access-258hz\") pod \"catalog-operator-7d9c49f57b-tpvl4\" (UID: \"98d99166-c42a-4169-87e8-4209570aec50\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4"
Mar 12 20:49:35.126404 master-0 kubenswrapper[4038]: I0312 20:49:35.126367 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5v9f\" (UniqueName: \"kubernetes.io/projected/02649264-040a-41a6-9a41-8bf6416c68ff-kube-api-access-k5v9f\") pod \"cluster-monitoring-operator-674cbfbd9d-j9tpt\" (UID: \"02649264-040a-41a6-9a41-8bf6416c68ff\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt"
Mar 12 20:49:35.127724 master-0 kubenswrapper[4038]: I0312 20:49:35.127664 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lltk\" (UniqueName: \"kubernetes.io/projected/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-kube-api-access-2lltk\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9"
Mar 12 20:49:35.128093 master-0 kubenswrapper[4038]: I0312 20:49:35.128019 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbbc5\" (UniqueName: \"kubernetes.io/projected/15ebfbd8-0782-431a-88a3-83af328498d2-kube-api-access-mbbc5\") pod \"openshift-apiserver-operator-799b6db4d7-jwthf\" (UID: \"15ebfbd8-0782-431a-88a3-83af328498d2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-jwthf"
Mar 12 20:49:35.129220 master-0 kubenswrapper[4038]: I0312 20:49:35.129191 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzwrw\" (UniqueName: \"kubernetes.io/projected/54184647-6e9a-43f7-90b1-5d8815f8b1ab-kube-api-access-kzwrw\") pod \"package-server-manager-854648ff6d-cdcc8\" (UID: \"54184647-6e9a-43f7-90b1-5d8815f8b1ab\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8"
Mar 12 20:49:35.130795 master-0 kubenswrapper[4038]: I0312 20:49:35.130736 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j7lq\" (UniqueName: \"kubernetes.io/projected/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-kube-api-access-6j7lq\") pod \"dns-operator-589895fbb7-tvrxp\" (UID: \"855747e5-d9b4-4eef-8bc4-425d6a8e95c7\") " pod="openshift-dns-operator/dns-operator-589895fbb7-tvrxp"
Mar 12 20:49:35.130795 master-0 kubenswrapper[4038]: I0312 20:49:35.130730 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5c6t\" (UniqueName: \"kubernetes.io/projected/e624e623-6d59-444d-b548-165fa5fd2581-kube-api-access-c5c6t\") pod \"marketplace-operator-64bf9778cb-hxqgw\" (UID: \"e624e623-6d59-444d-b548-165fa5fd2581\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw"
Mar 12 20:49:35.130947 master-0 kubenswrapper[4038]: I0312 20:49:35.130881 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9xld\" (UniqueName: \"kubernetes.io/projected/07542516-49c8-4e20-9b97-798fbff850a5-kube-api-access-z9xld\") pod \"kube-storage-version-migrator-operator-7f65c457f5-qfbrj\" (UID: \"07542516-49c8-4e20-9b97-798fbff850a5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-qfbrj"
Mar 12 20:49:35.131045 master-0 kubenswrapper[4038]: I0312 20:49:35.131013 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhcsd\" (UniqueName: \"kubernetes.io/projected/07330030-487d-4fa6-b5c3-67607355bbba-kube-api-access-bhcsd\") pod \"olm-operator-d64cfc9db-q9hnk\" (UID: \"07330030-487d-4fa6-b5c3-67607355bbba\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk"
Mar 12 20:49:35.131994 master-0 kubenswrapper[4038]: I0312 20:49:35.131959 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clp9l\" (UniqueName: \"kubernetes.io/projected/2604b035-853c-42b7-a562-07d46178868a-kube-api-access-clp9l\") pod \"csi-snapshot-controller-operator-5685fbc7d-kf949\" (UID: \"2604b035-853c-42b7-a562-07d46178868a\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-kf949"
Mar 12 20:49:35.203738 master-0 kubenswrapper[4038]: I0312 20:49:35.203645 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-jwthf"
Mar 12 20:49:35.206262 master-0 kubenswrapper[4038]: I0312 20:49:35.206216 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/980191fe-c62c-4b9e-879c-38fa8ce0a58b-serving-cert\") pod \"openshift-config-operator-64488f9d78-zsd76\" (UID: \"980191fe-c62c-4b9e-879c-38fa8ce0a58b\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76"
Mar 12 20:49:35.206339 master-0 kubenswrapper[4038]: I0312 20:49:35.206280 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96bd86df-2101-47f5-844b-1332261c66f1-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-f2kg4\" (UID: \"96bd86df-2101-47f5-844b-1332261c66f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-f2kg4"
Mar 12 20:49:35.206572 master-0 kubenswrapper[4038]: I0312 20:49:35.206521 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q78vj\" (UniqueName: \"kubernetes.io/projected/7623a5c6-47a9-4b75-bda8-c0a2d7c67272-kube-api-access-q78vj\") pod \"openshift-controller-manager-operator-8565d84698-vp2hs\" (UID: \"7623a5c6-47a9-4b75-bda8-c0a2d7c67272\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-vp2hs"
Mar 12 20:49:35.206625 master-0 kubenswrapper[4038]: I0312 20:49:35.206602 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68"
Mar 12 20:49:35.206664 master-0 kubenswrapper[4038]: I0312 20:49:35.206624 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/96bd86df-2101-47f5-844b-1332261c66f1-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-f2kg4\" (UID: \"96bd86df-2101-47f5-844b-1332261c66f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-f2kg4"
Mar 12 20:49:35.206664 master-0 kubenswrapper[4038]: I0312 20:49:35.206647 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvkp7\" (UniqueName: \"kubernetes.io/projected/900228dd-2d21-4759-87da-b027b0134ad8-kube-api-access-rvkp7\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5"
Mar 12 20:49:35.206743 master-0 kubenswrapper[4038]: I0312 20:49:35.206687 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2r2r\" (UniqueName: \"kubernetes.io/projected/617f0f9c-50d5-4214-b30f-5110fd4399ec-kube-api-access-f2r2r\") pod \"iptables-alerter-krpjj\" (UID: \"617f0f9c-50d5-4214-b30f-5110fd4399ec\") " pod="openshift-network-operator/iptables-alerter-krpjj"
Mar 12 20:49:35.206743 master-0 kubenswrapper[4038]: I0312 20:49:35.206721 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5471994f-769e-4124-b7d0-01f5358fc18f-etcd-client\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9"
Mar 12 20:49:35.206842 master-0 kubenswrapper[4038]: I0312 20:49:35.206747 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5471994f-769e-4124-b7d0-01f5358fc18f-config\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9"
Mar 12 20:49:35.206842 master-0 kubenswrapper[4038]: I0312 20:49:35.206780 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/784599a3-a2ac-46ac-a4b7-9439704646cc-serving-cert\") pod \"kube-apiserver-operator-68bd585b-56nzk\" (UID: \"784599a3-a2ac-46ac-a4b7-9439704646cc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-56nzk"
Mar 12 20:49:35.206842 master-0 kubenswrapper[4038]: I0312 20:49:35.206819 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-269gt\" (UID: \"4a67ecf3-823d-4948-a5cb-8bd1eb9f259c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-269gt"
Mar 12 20:49:35.206842 master-0 kubenswrapper[4038]: I0312 20:49:35.206839 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjh5f\" (UniqueName: \"kubernetes.io/projected/f8f4400c-474c-480f-b46c-cf7c80555004-kube-api-access-vjh5f\") pod \"multus-admission-controller-8d675b596-98j9w\" (UID: \"f8f4400c-474c-480f-b46c-cf7c80555004\") " pod="openshift-multus/multus-admission-controller-8d675b596-98j9w"
Mar 12 20:49:35.207022 master-0 kubenswrapper[4038]: I0312 20:49:35.206858 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/617f0f9c-50d5-4214-b30f-5110fd4399ec-host-slash\") pod \"iptables-alerter-krpjj\" (UID: \"617f0f9c-50d5-4214-b30f-5110fd4399ec\") " pod="openshift-network-operator/iptables-alerter-krpjj"
Mar 12 20:49:35.207022 master-0 kubenswrapper[4038]: I0312 20:49:35.206886 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5471994f-769e-4124-b7d0-01f5358fc18f-serving-cert\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9"
Mar 12 20:49:35.207022 master-0 kubenswrapper[4038]: I0312 20:49:35.206910 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/784599a3-a2ac-46ac-a4b7-9439704646cc-config\") pod \"kube-apiserver-operator-68bd585b-56nzk\" (UID: \"784599a3-a2ac-46ac-a4b7-9439704646cc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-56nzk"
Mar 12 20:49:35.207022 master-0 kubenswrapper[4038]: I0312 20:49:35.206926 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5471994f-769e-4124-b7d0-01f5358fc18f-etcd-ca\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9"
Mar 12 20:49:35.207022 master-0 kubenswrapper[4038]: I0312 20:49:35.206951 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7rrv\" (UniqueName: \"kubernetes.io/projected/5471994f-769e-4124-b7d0-01f5358fc18f-kube-api-access-f7rrv\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9"
Mar 12 20:49:35.207022 master-0 kubenswrapper[4038]: I0312 20:49:35.206969 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7623a5c6-47a9-4b75-bda8-c0a2d7c67272-config\") pod \"openshift-controller-manager-operator-8565d84698-vp2hs\" (UID: \"7623a5c6-47a9-4b75-bda8-c0a2d7c67272\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-vp2hs"
Mar 12 20:49:35.207022 master-0 kubenswrapper[4038]: I0312 20:49:35.207004 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-269gt\" (UID: \"4a67ecf3-823d-4948-a5cb-8bd1eb9f259c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-269gt"
Mar 12 20:49:35.207022 master-0 kubenswrapper[4038]: I0312 20:49:35.207024 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vvf6\" (UniqueName: \"kubernetes.io/projected/2b71f537-1cc2-4645-8e50-23941635457c-kube-api-access-8vvf6\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68"
Mar 12 20:49:35.207531 master-0 kubenswrapper[4038]: I0312 20:49:35.207054 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/784599a3-a2ac-46ac-a4b7-9439704646cc-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-56nzk\" (UID: \"784599a3-a2ac-46ac-a4b7-9439704646cc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-56nzk"
Mar 12 20:49:35.207785 master-0 kubenswrapper[4038]: I0312 20:49:35.207752 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5471994f-769e-4124-b7d0-01f5358fc18f-etcd-ca\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") "
pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" Mar 12 20:49:35.209224 master-0 kubenswrapper[4038]: I0312 20:49:35.209181 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7623a5c6-47a9-4b75-bda8-c0a2d7c67272-config\") pod \"openshift-controller-manager-operator-8565d84698-vp2hs\" (UID: \"7623a5c6-47a9-4b75-bda8-c0a2d7c67272\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-vp2hs" Mar 12 20:49:35.209353 master-0 kubenswrapper[4038]: I0312 20:49:35.209310 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d-serving-cert\") pod \"service-ca-operator-69b6fc6b88-f62j6\" (UID: \"a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-f62j6" Mar 12 20:49:35.209434 master-0 kubenswrapper[4038]: I0312 20:49:35.209408 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/617f0f9c-50d5-4214-b30f-5110fd4399ec-host-slash\") pod \"iptables-alerter-krpjj\" (UID: \"617f0f9c-50d5-4214-b30f-5110fd4399ec\") " pod="openshift-network-operator/iptables-alerter-krpjj" Mar 12 20:49:35.209967 master-0 kubenswrapper[4038]: E0312 20:49:35.209927 4038 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 12 20:49:35.210029 master-0 kubenswrapper[4038]: I0312 20:49:35.209961 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/980191fe-c62c-4b9e-879c-38fa8ce0a58b-serving-cert\") pod \"openshift-config-operator-64488f9d78-zsd76\" (UID: \"980191fe-c62c-4b9e-879c-38fa8ce0a58b\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" Mar 12 20:49:35.210029 master-0 
kubenswrapper[4038]: E0312 20:49:35.209998 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls podName:2b71f537-1cc2-4645-8e50-23941635457c nodeName:}" failed. No retries permitted until 2026-03-12 20:49:35.709977431 +0000 UTC m=+113.745659284 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls") pod "ingress-operator-677db989d6-qpf68" (UID: "2b71f537-1cc2-4645-8e50-23941635457c") : secret "metrics-tls" not found Mar 12 20:49:35.210201 master-0 kubenswrapper[4038]: I0312 20:49:35.210163 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/784599a3-a2ac-46ac-a4b7-9439704646cc-config\") pod \"kube-apiserver-operator-68bd585b-56nzk\" (UID: \"784599a3-a2ac-46ac-a4b7-9439704646cc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-56nzk" Mar 12 20:49:35.211248 master-0 kubenswrapper[4038]: I0312 20:49:35.211059 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-269gt\" (UID: \"4a67ecf3-823d-4948-a5cb-8bd1eb9f259c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-269gt" Mar 12 20:49:35.211248 master-0 kubenswrapper[4038]: I0312 20:49:35.211114 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wt5q\" (UniqueName: \"kubernetes.io/projected/980191fe-c62c-4b9e-879c-38fa8ce0a58b-kube-api-access-2wt5q\") pod \"openshift-config-operator-64488f9d78-zsd76\" (UID: \"980191fe-c62c-4b9e-879c-38fa8ce0a58b\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" Mar 12 20:49:35.211248 master-0 kubenswrapper[4038]: 
I0312 20:49:35.211141 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b71f537-1cc2-4645-8e50-23941635457c-trusted-ca\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" Mar 12 20:49:35.211248 master-0 kubenswrapper[4038]: I0312 20:49:35.211183 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d-config\") pod \"service-ca-operator-69b6fc6b88-f62j6\" (UID: \"a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-f62j6" Mar 12 20:49:35.211248 master-0 kubenswrapper[4038]: I0312 20:49:35.211201 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs\") pod \"multus-admission-controller-8d675b596-98j9w\" (UID: \"f8f4400c-474c-480f-b46c-cf7c80555004\") " pod="openshift-multus/multus-admission-controller-8d675b596-98j9w" Mar 12 20:49:35.211248 master-0 kubenswrapper[4038]: I0312 20:49:35.211224 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/900228dd-2d21-4759-87da-b027b0134ad8-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5" Mar 12 20:49:35.211248 master-0 kubenswrapper[4038]: I0312 20:49:35.211247 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7623a5c6-47a9-4b75-bda8-c0a2d7c67272-serving-cert\") pod 
\"openshift-controller-manager-operator-8565d84698-vp2hs\" (UID: \"7623a5c6-47a9-4b75-bda8-c0a2d7c67272\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-vp2hs" Mar 12 20:49:35.211757 master-0 kubenswrapper[4038]: I0312 20:49:35.211301 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96bd86df-2101-47f5-844b-1332261c66f1-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-f2kg4\" (UID: \"96bd86df-2101-47f5-844b-1332261c66f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-f2kg4" Mar 12 20:49:35.211757 master-0 kubenswrapper[4038]: I0312 20:49:35.211320 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2b71f537-1cc2-4645-8e50-23941635457c-bound-sa-token\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" Mar 12 20:49:35.211757 master-0 kubenswrapper[4038]: I0312 20:49:35.211344 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5471994f-769e-4124-b7d0-01f5358fc18f-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" Mar 12 20:49:35.211757 master-0 kubenswrapper[4038]: I0312 20:49:35.211374 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/980191fe-c62c-4b9e-879c-38fa8ce0a58b-available-featuregates\") pod \"openshift-config-operator-64488f9d78-zsd76\" (UID: \"980191fe-c62c-4b9e-879c-38fa8ce0a58b\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" Mar 12 
20:49:35.211757 master-0 kubenswrapper[4038]: I0312 20:49:35.211395 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5" Mar 12 20:49:35.211757 master-0 kubenswrapper[4038]: I0312 20:49:35.211417 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-577p4\" (UniqueName: \"kubernetes.io/projected/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d-kube-api-access-577p4\") pod \"service-ca-operator-69b6fc6b88-f62j6\" (UID: \"a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-f62j6" Mar 12 20:49:35.211757 master-0 kubenswrapper[4038]: I0312 20:49:35.211439 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/617f0f9c-50d5-4214-b30f-5110fd4399ec-iptables-alerter-script\") pod \"iptables-alerter-krpjj\" (UID: \"617f0f9c-50d5-4214-b30f-5110fd4399ec\") " pod="openshift-network-operator/iptables-alerter-krpjj" Mar 12 20:49:35.211757 master-0 kubenswrapper[4038]: I0312 20:49:35.211465 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-269gt\" (UID: \"4a67ecf3-823d-4948-a5cb-8bd1eb9f259c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-269gt" Mar 12 20:49:35.211757 master-0 kubenswrapper[4038]: I0312 20:49:35.211487 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/900228dd-2d21-4759-87da-b027b0134ad8-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5" Mar 12 20:49:35.212308 master-0 kubenswrapper[4038]: I0312 20:49:35.211993 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5471994f-769e-4124-b7d0-01f5358fc18f-config\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" Mar 12 20:49:35.212682 master-0 kubenswrapper[4038]: I0312 20:49:35.212625 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/784599a3-a2ac-46ac-a4b7-9439704646cc-serving-cert\") pod \"kube-apiserver-operator-68bd585b-56nzk\" (UID: \"784599a3-a2ac-46ac-a4b7-9439704646cc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-56nzk" Mar 12 20:49:35.212835 master-0 kubenswrapper[4038]: I0312 20:49:35.212788 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/900228dd-2d21-4759-87da-b027b0134ad8-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5" Mar 12 20:49:35.212929 master-0 kubenswrapper[4038]: E0312 20:49:35.212905 4038 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 12 20:49:35.212929 master-0 kubenswrapper[4038]: E0312 20:49:35.212909 4038 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 12 
20:49:35.213024 master-0 kubenswrapper[4038]: E0312 20:49:35.212993 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls podName:900228dd-2d21-4759-87da-b027b0134ad8 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:35.712938574 +0000 UTC m=+113.748620437 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-hmtz5" (UID: "900228dd-2d21-4759-87da-b027b0134ad8") : secret "image-registry-operator-tls" not found Mar 12 20:49:35.213024 master-0 kubenswrapper[4038]: E0312 20:49:35.213013 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs podName:f8f4400c-474c-480f-b46c-cf7c80555004 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:35.713002276 +0000 UTC m=+113.748684139 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs") pod "multus-admission-controller-8d675b596-98j9w" (UID: "f8f4400c-474c-480f-b46c-cf7c80555004") : secret "multus-admission-controller-secret" not found Mar 12 20:49:35.213126 master-0 kubenswrapper[4038]: I0312 20:49:35.213070 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d-config\") pod \"service-ca-operator-69b6fc6b88-f62j6\" (UID: \"a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-f62j6" Mar 12 20:49:35.213391 master-0 kubenswrapper[4038]: I0312 20:49:35.213352 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/980191fe-c62c-4b9e-879c-38fa8ce0a58b-available-featuregates\") pod \"openshift-config-operator-64488f9d78-zsd76\" (UID: \"980191fe-c62c-4b9e-879c-38fa8ce0a58b\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" Mar 12 20:49:35.213698 master-0 kubenswrapper[4038]: I0312 20:49:35.213653 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5471994f-769e-4124-b7d0-01f5358fc18f-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" Mar 12 20:49:35.213846 master-0 kubenswrapper[4038]: I0312 20:49:35.213796 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/617f0f9c-50d5-4214-b30f-5110fd4399ec-iptables-alerter-script\") pod \"iptables-alerter-krpjj\" (UID: \"617f0f9c-50d5-4214-b30f-5110fd4399ec\") " pod="openshift-network-operator/iptables-alerter-krpjj" Mar 
12 20:49:35.214242 master-0 kubenswrapper[4038]: I0312 20:49:35.214204 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5471994f-769e-4124-b7d0-01f5358fc18f-etcd-client\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" Mar 12 20:49:35.214424 master-0 kubenswrapper[4038]: I0312 20:49:35.214371 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96bd86df-2101-47f5-844b-1332261c66f1-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-f2kg4\" (UID: \"96bd86df-2101-47f5-844b-1332261c66f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-f2kg4" Mar 12 20:49:35.214508 master-0 kubenswrapper[4038]: I0312 20:49:35.214476 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b71f537-1cc2-4645-8e50-23941635457c-trusted-ca\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" Mar 12 20:49:35.215036 master-0 kubenswrapper[4038]: I0312 20:49:35.214993 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96bd86df-2101-47f5-844b-1332261c66f1-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-f2kg4\" (UID: \"96bd86df-2101-47f5-844b-1332261c66f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-f2kg4" Mar 12 20:49:35.220130 master-0 kubenswrapper[4038]: I0312 20:49:35.217332 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d-serving-cert\") pod 
\"service-ca-operator-69b6fc6b88-f62j6\" (UID: \"a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-f62j6" Mar 12 20:49:35.220130 master-0 kubenswrapper[4038]: I0312 20:49:35.218268 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-269gt\" (UID: \"4a67ecf3-823d-4948-a5cb-8bd1eb9f259c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-269gt" Mar 12 20:49:35.220130 master-0 kubenswrapper[4038]: I0312 20:49:35.218742 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7623a5c6-47a9-4b75-bda8-c0a2d7c67272-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-vp2hs\" (UID: \"7623a5c6-47a9-4b75-bda8-c0a2d7c67272\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-vp2hs" Mar 12 20:49:35.220130 master-0 kubenswrapper[4038]: I0312 20:49:35.219272 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5471994f-769e-4124-b7d0-01f5358fc18f-serving-cert\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" Mar 12 20:49:35.266561 master-0 kubenswrapper[4038]: I0312 20:49:35.266486 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7rrv\" (UniqueName: \"kubernetes.io/projected/5471994f-769e-4124-b7d0-01f5358fc18f-kube-api-access-f7rrv\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" Mar 12 20:49:35.270223 master-0 kubenswrapper[4038]: I0312 20:49:35.270098 4038 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-qfbrj" Mar 12 20:49:35.277466 master-0 kubenswrapper[4038]: I0312 20:49:35.277376 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjh5f\" (UniqueName: \"kubernetes.io/projected/f8f4400c-474c-480f-b46c-cf7c80555004-kube-api-access-vjh5f\") pod \"multus-admission-controller-8d675b596-98j9w\" (UID: \"f8f4400c-474c-480f-b46c-cf7c80555004\") " pod="openshift-multus/multus-admission-controller-8d675b596-98j9w" Mar 12 20:49:35.289405 master-0 kubenswrapper[4038]: I0312 20:49:35.289314 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" Mar 12 20:49:35.295743 master-0 kubenswrapper[4038]: I0312 20:49:35.295701 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-kf949" Mar 12 20:49:35.299520 master-0 kubenswrapper[4038]: I0312 20:49:35.299458 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vvf6\" (UniqueName: \"kubernetes.io/projected/2b71f537-1cc2-4645-8e50-23941635457c-kube-api-access-8vvf6\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" Mar 12 20:49:35.320627 master-0 kubenswrapper[4038]: I0312 20:49:35.315313 4038 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh" Mar 12 20:49:35.333895 master-0 kubenswrapper[4038]: I0312 20:49:35.332157 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvkp7\" (UniqueName: \"kubernetes.io/projected/900228dd-2d21-4759-87da-b027b0134ad8-kube-api-access-rvkp7\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5" Mar 12 20:49:35.368094 master-0 kubenswrapper[4038]: I0312 20:49:35.362548 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/96bd86df-2101-47f5-844b-1332261c66f1-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-f2kg4\" (UID: \"96bd86df-2101-47f5-844b-1332261c66f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-f2kg4" Mar 12 20:49:35.386974 master-0 kubenswrapper[4038]: I0312 20:49:35.375948 4038 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" Mar 12 20:49:35.386974 master-0 kubenswrapper[4038]: I0312 20:49:35.383264 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q78vj\" (UniqueName: \"kubernetes.io/projected/7623a5c6-47a9-4b75-bda8-c0a2d7c67272-kube-api-access-q78vj\") pod \"openshift-controller-manager-operator-8565d84698-vp2hs\" (UID: \"7623a5c6-47a9-4b75-bda8-c0a2d7c67272\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-vp2hs" Mar 12 20:49:35.387585 master-0 kubenswrapper[4038]: I0312 20:49:35.387041 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2r2r\" (UniqueName: \"kubernetes.io/projected/617f0f9c-50d5-4214-b30f-5110fd4399ec-kube-api-access-f2r2r\") pod \"iptables-alerter-krpjj\" (UID: \"617f0f9c-50d5-4214-b30f-5110fd4399ec\") " pod="openshift-network-operator/iptables-alerter-krpjj" Mar 12 20:49:35.407299 master-0 kubenswrapper[4038]: I0312 20:49:35.407241 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/784599a3-a2ac-46ac-a4b7-9439704646cc-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-56nzk\" (UID: \"784599a3-a2ac-46ac-a4b7-9439704646cc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-56nzk" Mar 12 20:49:35.434796 master-0 kubenswrapper[4038]: I0312 20:49:35.431199 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/900228dd-2d21-4759-87da-b027b0134ad8-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5" Mar 12 20:49:35.443556 master-0 kubenswrapper[4038]: I0312 20:49:35.443472 4038 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-577p4\" (UniqueName: \"kubernetes.io/projected/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d-kube-api-access-577p4\") pod \"service-ca-operator-69b6fc6b88-f62j6\" (UID: \"a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-f62j6" Mar 12 20:49:35.451957 master-0 kubenswrapper[4038]: I0312 20:49:35.451660 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-f2kg4" Mar 12 20:49:35.474480 master-0 kubenswrapper[4038]: I0312 20:49:35.472289 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-krpjj" Mar 12 20:49:35.616504 master-0 kubenswrapper[4038]: I0312 20:49:35.616449 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-j9tpt\" (UID: \"02649264-040a-41a6-9a41-8bf6416c68ff\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt" Mar 12 20:49:35.616669 master-0 kubenswrapper[4038]: I0312 20:49:35.616532 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" Mar 12 20:49:35.616669 master-0 kubenswrapper[4038]: I0312 20:49:35.616564 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert\") pod 
\"catalog-operator-7d9c49f57b-tpvl4\" (UID: \"98d99166-c42a-4169-87e8-4209570aec50\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4" Mar 12 20:49:35.616669 master-0 kubenswrapper[4038]: I0312 20:49:35.616587 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert\") pod \"olm-operator-d64cfc9db-q9hnk\" (UID: \"07330030-487d-4fa6-b5c3-67607355bbba\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk" Mar 12 20:49:35.616669 master-0 kubenswrapper[4038]: I0312 20:49:35.616606 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-cdcc8\" (UID: \"54184647-6e9a-43f7-90b1-5d8815f8b1ab\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8" Mar 12 20:49:35.616669 master-0 kubenswrapper[4038]: I0312 20:49:35.616639 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls\") pod \"dns-operator-589895fbb7-tvrxp\" (UID: \"855747e5-d9b4-4eef-8bc4-425d6a8e95c7\") " pod="openshift-dns-operator/dns-operator-589895fbb7-tvrxp" Mar 12 20:49:35.616885 master-0 kubenswrapper[4038]: I0312 20:49:35.616693 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-hxqgw\" (UID: \"e624e623-6d59-444d-b548-165fa5fd2581\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" Mar 12 20:49:35.616885 master-0 kubenswrapper[4038]: E0312 
20:49:35.616828 4038 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 12 20:49:35.616885 master-0 kubenswrapper[4038]: E0312 20:49:35.616880 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics podName:e624e623-6d59-444d-b548-165fa5fd2581 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:36.616865107 +0000 UTC m=+114.652546970 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-hxqgw" (UID: "e624e623-6d59-444d-b548-165fa5fd2581") : secret "marketplace-operator-metrics" not found Mar 12 20:49:35.616972 master-0 kubenswrapper[4038]: E0312 20:49:35.616923 4038 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 12 20:49:35.616972 master-0 kubenswrapper[4038]: E0312 20:49:35.616944 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert podName:07330030-487d-4fa6-b5c3-67607355bbba nodeName:}" failed. No retries permitted until 2026-03-12 20:49:36.616937149 +0000 UTC m=+114.652619012 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert") pod "olm-operator-d64cfc9db-q9hnk" (UID: "07330030-487d-4fa6-b5c3-67607355bbba") : secret "olm-operator-serving-cert" not found Mar 12 20:49:35.617028 master-0 kubenswrapper[4038]: E0312 20:49:35.616977 4038 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 12 20:49:35.617028 master-0 kubenswrapper[4038]: E0312 20:49:35.616996 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert podName:54184647-6e9a-43f7-90b1-5d8815f8b1ab nodeName:}" failed. No retries permitted until 2026-03-12 20:49:36.61699 +0000 UTC m=+114.652671863 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-cdcc8" (UID: "54184647-6e9a-43f7-90b1-5d8815f8b1ab") : secret "package-server-manager-serving-cert" not found Mar 12 20:49:35.617086 master-0 kubenswrapper[4038]: E0312 20:49:35.617030 4038 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 12 20:49:35.617086 master-0 kubenswrapper[4038]: E0312 20:49:35.617046 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls podName:855747e5-d9b4-4eef-8bc4-425d6a8e95c7 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:36.617041541 +0000 UTC m=+114.652723404 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls") pod "dns-operator-589895fbb7-tvrxp" (UID: "855747e5-d9b4-4eef-8bc4-425d6a8e95c7") : secret "metrics-tls" not found Mar 12 20:49:35.617086 master-0 kubenswrapper[4038]: E0312 20:49:35.617082 4038 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 12 20:49:35.617253 master-0 kubenswrapper[4038]: E0312 20:49:35.617097 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls podName:02649264-040a-41a6-9a41-8bf6416c68ff nodeName:}" failed. No retries permitted until 2026-03-12 20:49:36.617092442 +0000 UTC m=+114.652774305 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-j9tpt" (UID: "02649264-040a-41a6-9a41-8bf6416c68ff") : secret "cluster-monitoring-operator-tls" not found Mar 12 20:49:35.617253 master-0 kubenswrapper[4038]: E0312 20:49:35.617096 4038 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 12 20:49:35.617253 master-0 kubenswrapper[4038]: E0312 20:49:35.617129 4038 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 12 20:49:35.617253 master-0 kubenswrapper[4038]: E0312 20:49:35.617147 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls podName:981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9 nodeName:}" failed. 
No retries permitted until 2026-03-12 20:49:36.617142184 +0000 UTC m=+114.652824047 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-69rp9" (UID: "981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9") : secret "node-tuning-operator-tls" not found Mar 12 20:49:35.617253 master-0 kubenswrapper[4038]: E0312 20:49:35.617209 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert podName:98d99166-c42a-4169-87e8-4209570aec50 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:36.617179285 +0000 UTC m=+114.652861148 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert") pod "catalog-operator-7d9c49f57b-tpvl4" (UID: "98d99166-c42a-4169-87e8-4209570aec50") : secret "catalog-operator-serving-cert" not found Mar 12 20:49:35.642318 master-0 kubenswrapper[4038]: I0312 20:49:35.642264 4038 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-jwthf"] Mar 12 20:49:35.648827 master-0 kubenswrapper[4038]: I0312 20:49:35.644682 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-f62j6" Mar 12 20:49:35.652389 master-0 kubenswrapper[4038]: I0312 20:49:35.651645 4038 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-qfbrj"] Mar 12 20:49:35.654003 master-0 kubenswrapper[4038]: I0312 20:49:35.653961 4038 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-vp2hs" Mar 12 20:49:35.654928 master-0 kubenswrapper[4038]: I0312 20:49:35.654906 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wt5q\" (UniqueName: \"kubernetes.io/projected/980191fe-c62c-4b9e-879c-38fa8ce0a58b-kube-api-access-2wt5q\") pod \"openshift-config-operator-64488f9d78-zsd76\" (UID: \"980191fe-c62c-4b9e-879c-38fa8ce0a58b\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" Mar 12 20:49:35.656383 master-0 kubenswrapper[4038]: I0312 20:49:35.656358 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-269gt\" (UID: \"4a67ecf3-823d-4948-a5cb-8bd1eb9f259c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-269gt" Mar 12 20:49:35.662485 master-0 kubenswrapper[4038]: I0312 20:49:35.661251 4038 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-kf949"] Mar 12 20:49:35.662485 master-0 kubenswrapper[4038]: I0312 20:49:35.661775 4038 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2b71f537-1cc2-4645-8e50-23941635457c-bound-sa-token\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" Mar 12 20:49:35.664542 master-0 kubenswrapper[4038]: I0312 20:49:35.663662 4038 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh"] Mar 12 20:49:35.664542 master-0 kubenswrapper[4038]: I0312 20:49:35.664315 4038 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-56nzk" Mar 12 20:49:35.666416 master-0 kubenswrapper[4038]: I0312 20:49:35.665363 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-269gt" Mar 12 20:49:35.666777 master-0 kubenswrapper[4038]: W0312 20:49:35.666747 4038 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07542516_49c8_4e20_9b97_798fbff850a5.slice/crio-82c567fab92f73cc652671757659cec0bf4fd8aeb8e6762d7ba85dd0fa1eb67e WatchSource:0}: Error finding container 82c567fab92f73cc652671757659cec0bf4fd8aeb8e6762d7ba85dd0fa1eb67e: Status 404 returned error can't find the container with id 82c567fab92f73cc652671757659cec0bf4fd8aeb8e6762d7ba85dd0fa1eb67e Mar 12 20:49:35.674285 master-0 kubenswrapper[4038]: I0312 20:49:35.673413 4038 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx"] Mar 12 20:49:35.675736 master-0 kubenswrapper[4038]: I0312 20:49:35.675688 4038 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9"] Mar 12 20:49:35.678039 master-0 kubenswrapper[4038]: W0312 20:49:35.677799 4038 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3bebf49_1d92_4353_b84c_91ed86b7bb94.slice/crio-480ecceaa13fbfede6f31bb888fba0e4599aa0266514be4fa32d258ea85189de WatchSource:0}: Error finding container 480ecceaa13fbfede6f31bb888fba0e4599aa0266514be4fa32d258ea85189de: Status 404 returned error can't find the container with id 480ecceaa13fbfede6f31bb888fba0e4599aa0266514be4fa32d258ea85189de Mar 12 20:49:35.680847 master-0 kubenswrapper[4038]: W0312 20:49:35.680652 4038 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2604b035_853c_42b7_a562_07d46178868a.slice/crio-58853bb7c55e4f38a99ccf6eb1718fea0482d914d13a64cd68997b04600a597d WatchSource:0}: Error finding container 58853bb7c55e4f38a99ccf6eb1718fea0482d914d13a64cd68997b04600a597d: Status 404 returned error can't find the container with id 58853bb7c55e4f38a99ccf6eb1718fea0482d914d13a64cd68997b04600a597d Mar 12 20:49:35.682795 master-0 kubenswrapper[4038]: I0312 20:49:35.681250 4038 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" Mar 12 20:49:35.685894 master-0 kubenswrapper[4038]: I0312 20:49:35.684782 4038 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-f2kg4"] Mar 12 20:49:35.685894 master-0 kubenswrapper[4038]: W0312 20:49:35.685231 4038 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5471994f_769e_4124_b7d0_01f5358fc18f.slice/crio-2ab45bc6351d4ec7baa95f91503a2501083a98d20ff063951989a4f266486d70 WatchSource:0}: Error finding container 2ab45bc6351d4ec7baa95f91503a2501083a98d20ff063951989a4f266486d70: Status 404 returned error can't find the container with id 2ab45bc6351d4ec7baa95f91503a2501083a98d20ff063951989a4f266486d70 Mar 12 20:49:35.719156 master-0 kubenswrapper[4038]: I0312 20:49:35.719112 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" Mar 12 20:49:35.719351 master-0 kubenswrapper[4038]: E0312 20:49:35.719247 4038 secret.go:189] Couldn't 
get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 12 20:49:35.719351 master-0 kubenswrapper[4038]: I0312 20:49:35.719301 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" Mar 12 20:49:35.719435 master-0 kubenswrapper[4038]: E0312 20:49:35.719359 4038 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 12 20:49:35.719435 master-0 kubenswrapper[4038]: E0312 20:49:35.719412 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls podName:2b71f537-1cc2-4645-8e50-23941635457c nodeName:}" failed. No retries permitted until 2026-03-12 20:49:36.719376245 +0000 UTC m=+114.755058108 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls") pod "ingress-operator-677db989d6-qpf68" (UID: "2b71f537-1cc2-4645-8e50-23941635457c") : secret "metrics-tls" not found Mar 12 20:49:35.719623 master-0 kubenswrapper[4038]: I0312 20:49:35.719510 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs\") pod \"multus-admission-controller-8d675b596-98j9w\" (UID: \"f8f4400c-474c-480f-b46c-cf7c80555004\") " pod="openshift-multus/multus-admission-controller-8d675b596-98j9w" Mar 12 20:49:35.719623 master-0 kubenswrapper[4038]: E0312 20:49:35.719530 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert podName:981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:36.719498958 +0000 UTC m=+114.755180901 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-69rp9" (UID: "981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9") : secret "performance-addon-operator-webhook-cert" not found Mar 12 20:49:35.719623 master-0 kubenswrapper[4038]: I0312 20:49:35.719593 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5" Mar 12 20:49:35.719706 master-0 kubenswrapper[4038]: E0312 20:49:35.719677 4038 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 12 20:49:35.719706 master-0 kubenswrapper[4038]: E0312 20:49:35.719701 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls podName:900228dd-2d21-4759-87da-b027b0134ad8 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:36.719693724 +0000 UTC m=+114.755375587 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-hmtz5" (UID: "900228dd-2d21-4759-87da-b027b0134ad8") : secret "image-registry-operator-tls" not found Mar 12 20:49:35.719773 master-0 kubenswrapper[4038]: E0312 20:49:35.719741 4038 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 12 20:49:35.719773 master-0 kubenswrapper[4038]: E0312 20:49:35.719759 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs podName:f8f4400c-474c-480f-b46c-cf7c80555004 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:36.719754026 +0000 UTC m=+114.755435889 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs") pod "multus-admission-controller-8d675b596-98j9w" (UID: "f8f4400c-474c-480f-b46c-cf7c80555004") : secret "multus-admission-controller-secret" not found Mar 12 20:49:35.840670 master-0 kubenswrapper[4038]: I0312 20:49:35.838006 4038 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-69b6fc6b88-f62j6"] Mar 12 20:49:35.880278 master-0 kubenswrapper[4038]: I0312 20:49:35.879651 4038 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h26wj" Mar 12 20:49:35.882935 master-0 kubenswrapper[4038]: I0312 20:49:35.882488 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 12 20:49:35.882935 master-0 kubenswrapper[4038]: I0312 20:49:35.882862 4038 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 12 20:49:35.912391 master-0 kubenswrapper[4038]: I0312 20:49:35.909613 4038 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-vp2hs"] Mar 12 20:49:35.920120 master-0 kubenswrapper[4038]: W0312 20:49:35.920036 4038 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7623a5c6_47a9_4b75_bda8_c0a2d7c67272.slice/crio-1390b30c39ad63783734786156383bb52543e66dbc0baed3a61e8662ecc9eb73 WatchSource:0}: Error finding container 1390b30c39ad63783734786156383bb52543e66dbc0baed3a61e8662ecc9eb73: Status 404 returned error can't find the container with id 1390b30c39ad63783734786156383bb52543e66dbc0baed3a61e8662ecc9eb73 Mar 12 20:49:35.927214 master-0 kubenswrapper[4038]: I0312 20:49:35.923940 4038 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-56nzk"] Mar 12 20:49:35.927214 master-0 kubenswrapper[4038]: I0312 20:49:35.925957 4038 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-64488f9d78-zsd76"] Mar 12 20:49:35.932298 master-0 kubenswrapper[4038]: W0312 20:49:35.932213 4038 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod784599a3_a2ac_46ac_a4b7_9439704646cc.slice/crio-97b35cbaeb5726da86bcc4b7893b21ef73fbc6ccdec24f0c3f1962ec85e18df4 WatchSource:0}: Error finding container 97b35cbaeb5726da86bcc4b7893b21ef73fbc6ccdec24f0c3f1962ec85e18df4: Status 404 returned error can't find the container with id 97b35cbaeb5726da86bcc4b7893b21ef73fbc6ccdec24f0c3f1962ec85e18df4 Mar 12 20:49:35.937969 master-0 kubenswrapper[4038]: W0312 20:49:35.937933 4038 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod980191fe_c62c_4b9e_879c_38fa8ce0a58b.slice/crio-c4103685c4d0722261aeabd4bc116d1842263bbc5e10dfb2b17ca8f9a32f7e85 WatchSource:0}: Error finding container c4103685c4d0722261aeabd4bc116d1842263bbc5e10dfb2b17ca8f9a32f7e85: Status 404 returned error can't find the container with id c4103685c4d0722261aeabd4bc116d1842263bbc5e10dfb2b17ca8f9a32f7e85 Mar 12 20:49:35.953321 master-0 kubenswrapper[4038]: I0312 20:49:35.953281 4038 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-269gt"] Mar 12 20:49:35.960017 master-0 kubenswrapper[4038]: W0312 20:49:35.959959 4038 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a67ecf3_823d_4948_a5cb_8bd1eb9f259c.slice/crio-f3a6366fc7a8173b37b93da658f97b0f0f73d75e238205a99ed16b96913fe11f WatchSource:0}: Error finding container f3a6366fc7a8173b37b93da658f97b0f0f73d75e238205a99ed16b96913fe11f: Status 404 returned error can't find the container with id f3a6366fc7a8173b37b93da658f97b0f0f73d75e238205a99ed16b96913fe11f Mar 12 20:49:36.479512 master-0 kubenswrapper[4038]: I0312 20:49:36.479351 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-vp2hs" 
event={"ID":"7623a5c6-47a9-4b75-bda8-c0a2d7c67272","Type":"ContainerStarted","Data":"1390b30c39ad63783734786156383bb52543e66dbc0baed3a61e8662ecc9eb73"} Mar 12 20:49:36.480900 master-0 kubenswrapper[4038]: I0312 20:49:36.480780 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" event={"ID":"980191fe-c62c-4b9e-879c-38fa8ce0a58b","Type":"ContainerStarted","Data":"c4103685c4d0722261aeabd4bc116d1842263bbc5e10dfb2b17ca8f9a32f7e85"} Mar 12 20:49:36.483550 master-0 kubenswrapper[4038]: I0312 20:49:36.483401 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-f2kg4" event={"ID":"96bd86df-2101-47f5-844b-1332261c66f1","Type":"ContainerStarted","Data":"823ddb02eb52a72270afe5bcbabb63c3bf31ccf8ea0e97a1b51cf8b0885ea699"} Mar 12 20:49:36.485033 master-0 kubenswrapper[4038]: I0312 20:49:36.484569 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" event={"ID":"a3bebf49-1d92-4353-b84c-91ed86b7bb94","Type":"ContainerStarted","Data":"480ecceaa13fbfede6f31bb888fba0e4599aa0266514be4fa32d258ea85189de"} Mar 12 20:49:36.487094 master-0 kubenswrapper[4038]: I0312 20:49:36.487035 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-269gt" event={"ID":"4a67ecf3-823d-4948-a5cb-8bd1eb9f259c","Type":"ContainerStarted","Data":"f3a6366fc7a8173b37b93da658f97b0f0f73d75e238205a99ed16b96913fe11f"} Mar 12 20:49:36.489439 master-0 kubenswrapper[4038]: I0312 20:49:36.489091 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" event={"ID":"5471994f-769e-4124-b7d0-01f5358fc18f","Type":"ContainerStarted","Data":"2ab45bc6351d4ec7baa95f91503a2501083a98d20ff063951989a4f266486d70"} Mar 12 20:49:36.490266 
master-0 kubenswrapper[4038]: I0312 20:49:36.490213 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-krpjj" event={"ID":"617f0f9c-50d5-4214-b30f-5110fd4399ec","Type":"ContainerStarted","Data":"dbdf068459da915aaa15b95a36d6ccf7790078f4c1daee68e40bbaf77ad0787e"} Mar 12 20:49:36.491152 master-0 kubenswrapper[4038]: I0312 20:49:36.491090 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-jwthf" event={"ID":"15ebfbd8-0782-431a-88a3-83af328498d2","Type":"ContainerStarted","Data":"a5615eeaf32fd2c079e657b23ae7216d539735aa3d68b4892382d2e003032d83"} Mar 12 20:49:36.492164 master-0 kubenswrapper[4038]: I0312 20:49:36.492139 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-kf949" event={"ID":"2604b035-853c-42b7-a562-07d46178868a","Type":"ContainerStarted","Data":"58853bb7c55e4f38a99ccf6eb1718fea0482d914d13a64cd68997b04600a597d"} Mar 12 20:49:36.493257 master-0 kubenswrapper[4038]: I0312 20:49:36.493100 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh" event={"ID":"226cb3a1-984f-4410-96e6-c007131dc074","Type":"ContainerStarted","Data":"ab3264a789b92ca41d23ea4b05704ed36eafff91e5d534902cad1c3bfa2f9b9e"} Mar 12 20:49:36.494109 master-0 kubenswrapper[4038]: I0312 20:49:36.494040 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-qfbrj" event={"ID":"07542516-49c8-4e20-9b97-798fbff850a5","Type":"ContainerStarted","Data":"82c567fab92f73cc652671757659cec0bf4fd8aeb8e6762d7ba85dd0fa1eb67e"} Mar 12 20:49:36.495908 master-0 kubenswrapper[4038]: I0312 20:49:36.495792 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-56nzk" event={"ID":"784599a3-a2ac-46ac-a4b7-9439704646cc","Type":"ContainerStarted","Data":"ab706de1955bf19700e84d8f799385030b60c4a92c4860f12c06db2b3816fd99"} Mar 12 20:49:36.495908 master-0 kubenswrapper[4038]: I0312 20:49:36.495867 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-56nzk" event={"ID":"784599a3-a2ac-46ac-a4b7-9439704646cc","Type":"ContainerStarted","Data":"97b35cbaeb5726da86bcc4b7893b21ef73fbc6ccdec24f0c3f1962ec85e18df4"} Mar 12 20:49:36.498917 master-0 kubenswrapper[4038]: I0312 20:49:36.498829 4038 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-f62j6" event={"ID":"a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d","Type":"ContainerStarted","Data":"b6f3e501ba06ed994745a6acdc066748befa97da97704898903460cb6ea2f103"} Mar 12 20:49:36.513901 master-0 kubenswrapper[4038]: I0312 20:49:36.513718 4038 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-56nzk" podStartSLOduration=78.513703746 podStartE2EDuration="1m18.513703746s" podCreationTimestamp="2026-03-12 20:48:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 20:49:36.512844084 +0000 UTC m=+114.548525957" watchObservedRunningTime="2026-03-12 20:49:36.513703746 +0000 UTC m=+114.549385609" Mar 12 20:49:36.655848 master-0 kubenswrapper[4038]: I0312 20:49:36.647248 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" Mar 12 20:49:36.655848 master-0 kubenswrapper[4038]: I0312 20:49:36.647317 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert\") pod \"catalog-operator-7d9c49f57b-tpvl4\" (UID: \"98d99166-c42a-4169-87e8-4209570aec50\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4" Mar 12 20:49:36.655848 master-0 kubenswrapper[4038]: I0312 20:49:36.647336 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert\") pod \"olm-operator-d64cfc9db-q9hnk\" (UID: \"07330030-487d-4fa6-b5c3-67607355bbba\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk" Mar 12 20:49:36.655848 master-0 kubenswrapper[4038]: I0312 20:49:36.647360 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-cdcc8\" (UID: \"54184647-6e9a-43f7-90b1-5d8815f8b1ab\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8" Mar 12 20:49:36.655848 master-0 kubenswrapper[4038]: I0312 20:49:36.647407 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls\") pod \"dns-operator-589895fbb7-tvrxp\" (UID: \"855747e5-d9b4-4eef-8bc4-425d6a8e95c7\") " pod="openshift-dns-operator/dns-operator-589895fbb7-tvrxp" Mar 12 20:49:36.655848 master-0 kubenswrapper[4038]: I0312 20:49:36.647471 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-hxqgw\" (UID: \"e624e623-6d59-444d-b548-165fa5fd2581\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" Mar 12 20:49:36.655848 master-0 kubenswrapper[4038]: I0312 20:49:36.647490 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-j9tpt\" (UID: \"02649264-040a-41a6-9a41-8bf6416c68ff\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt" Mar 12 20:49:36.655848 master-0 kubenswrapper[4038]: E0312 20:49:36.647617 4038 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 12 20:49:36.655848 master-0 kubenswrapper[4038]: E0312 20:49:36.647668 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls podName:02649264-040a-41a6-9a41-8bf6416c68ff nodeName:}" failed. No retries permitted until 2026-03-12 20:49:38.647652726 +0000 UTC m=+116.683334589 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-j9tpt" (UID: "02649264-040a-41a6-9a41-8bf6416c68ff") : secret "cluster-monitoring-operator-tls" not found
Mar 12 20:49:36.655848 master-0 kubenswrapper[4038]: E0312 20:49:36.648093 4038 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 12 20:49:36.655848 master-0 kubenswrapper[4038]: E0312 20:49:36.648119 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls podName:981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:38.648111238 +0000 UTC m=+116.683793101 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-69rp9" (UID: "981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9") : secret "node-tuning-operator-tls" not found
Mar 12 20:49:36.655848 master-0 kubenswrapper[4038]: E0312 20:49:36.648151 4038 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 12 20:49:36.655848 master-0 kubenswrapper[4038]: E0312 20:49:36.648167 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert podName:98d99166-c42a-4169-87e8-4209570aec50 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:38.648161809 +0000 UTC m=+116.683843662 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert") pod "catalog-operator-7d9c49f57b-tpvl4" (UID: "98d99166-c42a-4169-87e8-4209570aec50") : secret "catalog-operator-serving-cert" not found
Mar 12 20:49:36.655848 master-0 kubenswrapper[4038]: E0312 20:49:36.648196 4038 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 12 20:49:36.655848 master-0 kubenswrapper[4038]: E0312 20:49:36.648211 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert podName:07330030-487d-4fa6-b5c3-67607355bbba nodeName:}" failed. No retries permitted until 2026-03-12 20:49:38.64820654 +0000 UTC m=+116.683888403 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert") pod "olm-operator-d64cfc9db-q9hnk" (UID: "07330030-487d-4fa6-b5c3-67607355bbba") : secret "olm-operator-serving-cert" not found
Mar 12 20:49:36.656563 master-0 kubenswrapper[4038]: E0312 20:49:36.648242 4038 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 12 20:49:36.656563 master-0 kubenswrapper[4038]: E0312 20:49:36.648261 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert podName:54184647-6e9a-43f7-90b1-5d8815f8b1ab nodeName:}" failed. No retries permitted until 2026-03-12 20:49:38.648253531 +0000 UTC m=+116.683935394 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-cdcc8" (UID: "54184647-6e9a-43f7-90b1-5d8815f8b1ab") : secret "package-server-manager-serving-cert" not found
Mar 12 20:49:36.656563 master-0 kubenswrapper[4038]: E0312 20:49:36.649192 4038 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 12 20:49:36.656563 master-0 kubenswrapper[4038]: E0312 20:49:36.649214 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls podName:855747e5-d9b4-4eef-8bc4-425d6a8e95c7 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:38.649207564 +0000 UTC m=+116.684889427 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls") pod "dns-operator-589895fbb7-tvrxp" (UID: "855747e5-d9b4-4eef-8bc4-425d6a8e95c7") : secret "metrics-tls" not found
Mar 12 20:49:36.656563 master-0 kubenswrapper[4038]: E0312 20:49:36.649451 4038 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 12 20:49:36.656563 master-0 kubenswrapper[4038]: E0312 20:49:36.649471 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics podName:e624e623-6d59-444d-b548-165fa5fd2581 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:38.649464541 +0000 UTC m=+116.685146394 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-hxqgw" (UID: "e624e623-6d59-444d-b548-165fa5fd2581") : secret "marketplace-operator-metrics" not found
Mar 12 20:49:36.748844 master-0 kubenswrapper[4038]: I0312 20:49:36.748708 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs\") pod \"multus-admission-controller-8d675b596-98j9w\" (UID: \"f8f4400c-474c-480f-b46c-cf7c80555004\") " pod="openshift-multus/multus-admission-controller-8d675b596-98j9w"
Mar 12 20:49:36.748844 master-0 kubenswrapper[4038]: I0312 20:49:36.748799 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5"
Mar 12 20:49:36.748844 master-0 kubenswrapper[4038]: I0312 20:49:36.748838 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9"
Mar 12 20:49:36.749029 master-0 kubenswrapper[4038]: I0312 20:49:36.748861 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68"
Mar 12 20:49:36.749166 master-0 kubenswrapper[4038]: E0312 20:49:36.749136 4038 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 12 20:49:36.749198 master-0 kubenswrapper[4038]: E0312 20:49:36.749193 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs podName:f8f4400c-474c-480f-b46c-cf7c80555004 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:38.74917943 +0000 UTC m=+116.784861283 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs") pod "multus-admission-controller-8d675b596-98j9w" (UID: "f8f4400c-474c-480f-b46c-cf7c80555004") : secret "multus-admission-controller-secret" not found
Mar 12 20:49:36.749603 master-0 kubenswrapper[4038]: E0312 20:49:36.749578 4038 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 12 20:49:36.749638 master-0 kubenswrapper[4038]: E0312 20:49:36.749612 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls podName:900228dd-2d21-4759-87da-b027b0134ad8 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:38.749601661 +0000 UTC m=+116.785283524 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-hmtz5" (UID: "900228dd-2d21-4759-87da-b027b0134ad8") : secret "image-registry-operator-tls" not found
Mar 12 20:49:36.749671 master-0 kubenswrapper[4038]: E0312 20:49:36.749644 4038 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 12 20:49:36.749671 master-0 kubenswrapper[4038]: E0312 20:49:36.749661 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert podName:981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:38.749656182 +0000 UTC m=+116.785338045 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-69rp9" (UID: "981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9") : secret "performance-addon-operator-webhook-cert" not found
Mar 12 20:49:36.749737 master-0 kubenswrapper[4038]: E0312 20:49:36.749693 4038 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 12 20:49:36.749737 master-0 kubenswrapper[4038]: E0312 20:49:36.749713 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls podName:2b71f537-1cc2-4645-8e50-23941635457c nodeName:}" failed. No retries permitted until 2026-03-12 20:49:38.749707743 +0000 UTC m=+116.785389616 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls") pod "ingress-operator-677db989d6-qpf68" (UID: "2b71f537-1cc2-4645-8e50-23941635457c") : secret "metrics-tls" not found
Mar 12 20:49:38.660382 master-0 kubenswrapper[4038]: I0312 20:49:38.659919 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-hxqgw\" (UID: \"e624e623-6d59-444d-b548-165fa5fd2581\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw"
Mar 12 20:49:38.661142 master-0 kubenswrapper[4038]: I0312 20:49:38.660433 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-j9tpt\" (UID: \"02649264-040a-41a6-9a41-8bf6416c68ff\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt"
Mar 12 20:49:38.661142 master-0 kubenswrapper[4038]: E0312 20:49:38.660167 4038 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 12 20:49:38.661142 master-0 kubenswrapper[4038]: E0312 20:49:38.660659 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics podName:e624e623-6d59-444d-b548-165fa5fd2581 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:42.660621165 +0000 UTC m=+120.696303068 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-hxqgw" (UID: "e624e623-6d59-444d-b548-165fa5fd2581") : secret "marketplace-operator-metrics" not found
Mar 12 20:49:38.661142 master-0 kubenswrapper[4038]: E0312 20:49:38.660669 4038 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 12 20:49:38.661142 master-0 kubenswrapper[4038]: I0312 20:49:38.660545 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9"
Mar 12 20:49:38.661142 master-0 kubenswrapper[4038]: E0312 20:49:38.660740 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls podName:981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:42.660719468 +0000 UTC m=+120.696401381 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-69rp9" (UID: "981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9") : secret "node-tuning-operator-tls" not found
Mar 12 20:49:38.661142 master-0 kubenswrapper[4038]: E0312 20:49:38.660673 4038 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 12 20:49:38.661142 master-0 kubenswrapper[4038]: I0312 20:49:38.661039 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert\") pod \"catalog-operator-7d9c49f57b-tpvl4\" (UID: \"98d99166-c42a-4169-87e8-4209570aec50\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4"
Mar 12 20:49:38.661488 master-0 kubenswrapper[4038]: E0312 20:49:38.661161 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls podName:02649264-040a-41a6-9a41-8bf6416c68ff nodeName:}" failed. No retries permitted until 2026-03-12 20:49:42.661117847 +0000 UTC m=+120.696799750 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-j9tpt" (UID: "02649264-040a-41a6-9a41-8bf6416c68ff") : secret "cluster-monitoring-operator-tls" not found
Mar 12 20:49:38.661488 master-0 kubenswrapper[4038]: E0312 20:49:38.661177 4038 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 12 20:49:38.661488 master-0 kubenswrapper[4038]: I0312 20:49:38.661211 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert\") pod \"olm-operator-d64cfc9db-q9hnk\" (UID: \"07330030-487d-4fa6-b5c3-67607355bbba\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk"
Mar 12 20:49:38.661488 master-0 kubenswrapper[4038]: E0312 20:49:38.661223 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert podName:98d99166-c42a-4169-87e8-4209570aec50 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:42.661210519 +0000 UTC m=+120.696892462 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert") pod "catalog-operator-7d9c49f57b-tpvl4" (UID: "98d99166-c42a-4169-87e8-4209570aec50") : secret "catalog-operator-serving-cert" not found
Mar 12 20:49:38.661488 master-0 kubenswrapper[4038]: E0312 20:49:38.661277 4038 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 12 20:49:38.661488 master-0 kubenswrapper[4038]: E0312 20:49:38.661311 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert podName:07330030-487d-4fa6-b5c3-67607355bbba nodeName:}" failed. No retries permitted until 2026-03-12 20:49:42.661301472 +0000 UTC m=+120.696983435 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert") pod "olm-operator-d64cfc9db-q9hnk" (UID: "07330030-487d-4fa6-b5c3-67607355bbba") : secret "olm-operator-serving-cert" not found
Mar 12 20:49:38.661488 master-0 kubenswrapper[4038]: I0312 20:49:38.661305 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-cdcc8\" (UID: \"54184647-6e9a-43f7-90b1-5d8815f8b1ab\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8"
Mar 12 20:49:38.661488 master-0 kubenswrapper[4038]: E0312 20:49:38.661362 4038 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 12 20:49:38.661488 master-0 kubenswrapper[4038]: E0312 20:49:38.661391 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert podName:54184647-6e9a-43f7-90b1-5d8815f8b1ab nodeName:}" failed. No retries permitted until 2026-03-12 20:49:42.661383305 +0000 UTC m=+120.697065168 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-cdcc8" (UID: "54184647-6e9a-43f7-90b1-5d8815f8b1ab") : secret "package-server-manager-serving-cert" not found
Mar 12 20:49:38.661488 master-0 kubenswrapper[4038]: I0312 20:49:38.661433 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls\") pod \"dns-operator-589895fbb7-tvrxp\" (UID: \"855747e5-d9b4-4eef-8bc4-425d6a8e95c7\") " pod="openshift-dns-operator/dns-operator-589895fbb7-tvrxp"
Mar 12 20:49:38.661789 master-0 kubenswrapper[4038]: E0312 20:49:38.661765 4038 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 12 20:49:38.661962 master-0 kubenswrapper[4038]: E0312 20:49:38.661887 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls podName:855747e5-d9b4-4eef-8bc4-425d6a8e95c7 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:42.661863386 +0000 UTC m=+120.697545289 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls") pod "dns-operator-589895fbb7-tvrxp" (UID: "855747e5-d9b4-4eef-8bc4-425d6a8e95c7") : secret "metrics-tls" not found
Mar 12 20:49:38.782226 master-0 kubenswrapper[4038]: I0312 20:49:38.782111 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5"
Mar 12 20:49:38.782559 master-0 kubenswrapper[4038]: E0312 20:49:38.782446 4038 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 12 20:49:38.782559 master-0 kubenswrapper[4038]: E0312 20:49:38.782510 4038 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 12 20:49:38.782733 master-0 kubenswrapper[4038]: E0312 20:49:38.782584 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls podName:900228dd-2d21-4759-87da-b027b0134ad8 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:42.782553947 +0000 UTC m=+120.818235850 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-hmtz5" (UID: "900228dd-2d21-4759-87da-b027b0134ad8") : secret "image-registry-operator-tls" not found
Mar 12 20:49:38.782733 master-0 kubenswrapper[4038]: E0312 20:49:38.782614 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert podName:981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:42.782600528 +0000 UTC m=+120.818282551 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-69rp9" (UID: "981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9") : secret "performance-addon-operator-webhook-cert" not found
Mar 12 20:49:39.073346 master-0 kubenswrapper[4038]: I0312 20:49:38.782183 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9"
Mar 12 20:49:39.073346 master-0 kubenswrapper[4038]: I0312 20:49:38.804848 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68"
Mar 12 20:49:39.073346 master-0 kubenswrapper[4038]: E0312 20:49:38.805132 4038 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 12 20:49:39.073346 master-0 kubenswrapper[4038]: I0312 20:49:38.805222 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs\") pod \"multus-admission-controller-8d675b596-98j9w\" (UID: \"f8f4400c-474c-480f-b46c-cf7c80555004\") " pod="openshift-multus/multus-admission-controller-8d675b596-98j9w"
Mar 12 20:49:39.073346 master-0 kubenswrapper[4038]: E0312 20:49:38.805333 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls podName:2b71f537-1cc2-4645-8e50-23941635457c nodeName:}" failed. No retries permitted until 2026-03-12 20:49:42.805307642 +0000 UTC m=+120.840989505 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls") pod "ingress-operator-677db989d6-qpf68" (UID: "2b71f537-1cc2-4645-8e50-23941635457c") : secret "metrics-tls" not found
Mar 12 20:49:39.073346 master-0 kubenswrapper[4038]: E0312 20:49:38.805388 4038 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 12 20:49:39.073346 master-0 kubenswrapper[4038]: E0312 20:49:38.805508 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs podName:f8f4400c-474c-480f-b46c-cf7c80555004 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:42.805474626 +0000 UTC m=+120.841156569 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs") pod "multus-admission-controller-8d675b596-98j9w" (UID: "f8f4400c-474c-480f-b46c-cf7c80555004") : secret "multus-admission-controller-secret" not found
Mar 12 20:49:42.730732 master-0 kubenswrapper[4038]: I0312 20:49:42.730622 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert\") pod \"catalog-operator-7d9c49f57b-tpvl4\" (UID: \"98d99166-c42a-4169-87e8-4209570aec50\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4"
Mar 12 20:49:42.730732 master-0 kubenswrapper[4038]: I0312 20:49:42.730711 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert\") pod \"olm-operator-d64cfc9db-q9hnk\" (UID: \"07330030-487d-4fa6-b5c3-67607355bbba\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk"
Mar 12 20:49:42.731767 master-0 kubenswrapper[4038]: I0312 20:49:42.730756 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-cdcc8\" (UID: \"54184647-6e9a-43f7-90b1-5d8815f8b1ab\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8"
Mar 12 20:49:42.731767 master-0 kubenswrapper[4038]: E0312 20:49:42.730862 4038 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 12 20:49:42.731767 master-0 kubenswrapper[4038]: E0312 20:49:42.730941 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert podName:98d99166-c42a-4169-87e8-4209570aec50 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:50.730917537 +0000 UTC m=+128.766599430 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert") pod "catalog-operator-7d9c49f57b-tpvl4" (UID: "98d99166-c42a-4169-87e8-4209570aec50") : secret "catalog-operator-serving-cert" not found
Mar 12 20:49:42.731767 master-0 kubenswrapper[4038]: E0312 20:49:42.731022 4038 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 12 20:49:42.731767 master-0 kubenswrapper[4038]: I0312 20:49:42.731069 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls\") pod \"dns-operator-589895fbb7-tvrxp\" (UID: \"855747e5-d9b4-4eef-8bc4-425d6a8e95c7\") " pod="openshift-dns-operator/dns-operator-589895fbb7-tvrxp"
Mar 12 20:49:42.731767 master-0 kubenswrapper[4038]: E0312 20:49:42.731150 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert podName:07330030-487d-4fa6-b5c3-67607355bbba nodeName:}" failed. No retries permitted until 2026-03-12 20:49:50.731115851 +0000 UTC m=+128.766797754 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert") pod "olm-operator-d64cfc9db-q9hnk" (UID: "07330030-487d-4fa6-b5c3-67607355bbba") : secret "olm-operator-serving-cert" not found
Mar 12 20:49:42.731767 master-0 kubenswrapper[4038]: E0312 20:49:42.731268 4038 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 12 20:49:42.731767 master-0 kubenswrapper[4038]: E0312 20:49:42.731382 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert podName:54184647-6e9a-43f7-90b1-5d8815f8b1ab nodeName:}" failed. No retries permitted until 2026-03-12 20:49:50.731352627 +0000 UTC m=+128.767034530 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-cdcc8" (UID: "54184647-6e9a-43f7-90b1-5d8815f8b1ab") : secret "package-server-manager-serving-cert" not found
Mar 12 20:49:42.731767 master-0 kubenswrapper[4038]: E0312 20:49:42.731404 4038 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 12 20:49:42.731767 master-0 kubenswrapper[4038]: E0312 20:49:42.731465 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls podName:855747e5-d9b4-4eef-8bc4-425d6a8e95c7 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:50.731446179 +0000 UTC m=+128.767128072 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls") pod "dns-operator-589895fbb7-tvrxp" (UID: "855747e5-d9b4-4eef-8bc4-425d6a8e95c7") : secret "metrics-tls" not found
Mar 12 20:49:42.731767 master-0 kubenswrapper[4038]: I0312 20:49:42.731745 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-hxqgw\" (UID: \"e624e623-6d59-444d-b548-165fa5fd2581\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw"
Mar 12 20:49:42.732504 master-0 kubenswrapper[4038]: I0312 20:49:42.731849 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-j9tpt\" (UID: \"02649264-040a-41a6-9a41-8bf6416c68ff\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt"
Mar 12 20:49:42.732504 master-0 kubenswrapper[4038]: E0312 20:49:42.731901 4038 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 12 20:49:42.732504 master-0 kubenswrapper[4038]: I0312 20:49:42.731928 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9"
Mar 12 20:49:42.732504 master-0 kubenswrapper[4038]: E0312 20:49:42.731964 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics podName:e624e623-6d59-444d-b548-165fa5fd2581 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:50.731946323 +0000 UTC m=+128.767628226 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-hxqgw" (UID: "e624e623-6d59-444d-b548-165fa5fd2581") : secret "marketplace-operator-metrics" not found
Mar 12 20:49:42.732504 master-0 kubenswrapper[4038]: E0312 20:49:42.732060 4038 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 12 20:49:42.732504 master-0 kubenswrapper[4038]: E0312 20:49:42.732148 4038 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 12 20:49:42.732504 master-0 kubenswrapper[4038]: E0312 20:49:42.732150 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls podName:02649264-040a-41a6-9a41-8bf6416c68ff nodeName:}" failed. No retries permitted until 2026-03-12 20:49:50.732126107 +0000 UTC m=+128.767808050 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-j9tpt" (UID: "02649264-040a-41a6-9a41-8bf6416c68ff") : secret "cluster-monitoring-operator-tls" not found
Mar 12 20:49:42.732504 master-0 kubenswrapper[4038]: E0312 20:49:42.732226 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls podName:981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:50.732206869 +0000 UTC m=+128.767888762 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-69rp9" (UID: "981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9") : secret "node-tuning-operator-tls" not found
Mar 12 20:49:42.832778 master-0 kubenswrapper[4038]: I0312 20:49:42.832710 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9"
Mar 12 20:49:42.833075 master-0 kubenswrapper[4038]: I0312 20:49:42.832797 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68"
Mar 12 20:49:42.833075 master-0 kubenswrapper[4038]: E0312 20:49:42.833003 4038 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 12 20:49:42.833075 master-0 kubenswrapper[4038]: E0312 20:49:42.833077 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls podName:2b71f537-1cc2-4645-8e50-23941635457c nodeName:}" failed. No retries permitted until 2026-03-12 20:49:50.833055016 +0000 UTC m=+128.868736909 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls") pod "ingress-operator-677db989d6-qpf68" (UID: "2b71f537-1cc2-4645-8e50-23941635457c") : secret "metrics-tls" not found
Mar 12 20:49:42.833290 master-0 kubenswrapper[4038]: E0312 20:49:42.833162 4038 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 12 20:49:42.833290 master-0 kubenswrapper[4038]: E0312 20:49:42.833286 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert podName:981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:50.833254531 +0000 UTC m=+128.868936434 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-69rp9" (UID: "981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9") : secret "performance-addon-operator-webhook-cert" not found
Mar 12 20:49:42.833451 master-0 kubenswrapper[4038]: I0312 20:49:42.833404 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs\") pod \"multus-admission-controller-8d675b596-98j9w\" (UID: \"f8f4400c-474c-480f-b46c-cf7c80555004\") " pod="openshift-multus/multus-admission-controller-8d675b596-98j9w"
Mar 12 20:49:42.833592 master-0 kubenswrapper[4038]: I0312 20:49:42.833530 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5"
Mar 12 20:49:42.833689 master-0 kubenswrapper[4038]: E0312 20:49:42.833543 4038 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 12 20:49:42.833689 master-0 kubenswrapper[4038]: E0312 20:49:42.833601 4038 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 12 20:49:42.833843 master-0 kubenswrapper[4038]: E0312 20:49:42.833668 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs podName:f8f4400c-474c-480f-b46c-cf7c80555004 nodeName:}" failed.
No retries permitted until 2026-03-12 20:49:50.833652271 +0000 UTC m=+128.869334174 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs") pod "multus-admission-controller-8d675b596-98j9w" (UID: "f8f4400c-474c-480f-b46c-cf7c80555004") : secret "multus-admission-controller-secret" not found Mar 12 20:49:42.833843 master-0 kubenswrapper[4038]: E0312 20:49:42.833764 4038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls podName:900228dd-2d21-4759-87da-b027b0134ad8 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:50.833748734 +0000 UTC m=+128.869430637 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-hmtz5" (UID: "900228dd-2d21-4759-87da-b027b0134ad8") : secret "image-registry-operator-tls" not found Mar 12 20:49:46.781930 master-0 kubenswrapper[4038]: I0312 20:49:46.781439 4038 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs\") pod \"network-metrics-daemon-brdcd\" (UID: \"c8660437-633f-4132-8a61-fe998abb493e\") " pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 20:49:46.783350 master-0 kubenswrapper[4038]: I0312 20:49:46.783290 4038 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 12 20:49:46.792930 master-0 kubenswrapper[4038]: E0312 20:49:46.792721 4038 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 12 20:49:46.792930 master-0 kubenswrapper[4038]: E0312 20:49:46.792872 4038 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs podName:c8660437-633f-4132-8a61-fe998abb493e nodeName:}" failed. No retries permitted until 2026-03-12 20:50:50.79283892 +0000 UTC m=+188.828520823 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs") pod "network-metrics-daemon-brdcd" (UID: "c8660437-633f-4132-8a61-fe998abb493e") : secret "metrics-daemon-secret" not found Mar 12 20:49:47.415554 master-0 systemd[1]: Stopping Kubernetes Kubelet... Mar 12 20:49:47.443270 master-0 systemd[1]: kubelet.service: Deactivated successfully. Mar 12 20:49:47.443538 master-0 systemd[1]: Stopped Kubernetes Kubelet. Mar 12 20:49:47.444943 master-0 systemd[1]: kubelet.service: Consumed 10.863s CPU time. Mar 12 20:49:47.463437 master-0 systemd[1]: Starting Kubernetes Kubelet... Mar 12 20:49:47.576274 master-0 kubenswrapper[7484]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 20:49:47.576274 master-0 kubenswrapper[7484]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 12 20:49:47.576274 master-0 kubenswrapper[7484]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 20:49:47.576274 master-0 kubenswrapper[7484]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 20:49:47.577266 master-0 kubenswrapper[7484]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 12 20:49:47.577266 master-0 kubenswrapper[7484]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 20:49:47.577266 master-0 kubenswrapper[7484]: I0312 20:49:47.576481 7484 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 12 20:49:47.581383 master-0 kubenswrapper[7484]: W0312 20:49:47.581332 7484 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 12 20:49:47.581383 master-0 kubenswrapper[7484]: W0312 20:49:47.581369 7484 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 12 20:49:47.581383 master-0 kubenswrapper[7484]: W0312 20:49:47.581375 7484 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 12 20:49:47.581383 master-0 kubenswrapper[7484]: W0312 20:49:47.581380 7484 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 12 20:49:47.581383 master-0 kubenswrapper[7484]: W0312 20:49:47.581384 7484 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 12 20:49:47.581383 master-0 kubenswrapper[7484]: W0312 20:49:47.581388 7484 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 12 20:49:47.581383 master-0 kubenswrapper[7484]: W0312 20:49:47.581394 7484 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 12 20:49:47.581625 master-0 kubenswrapper[7484]: W0312 20:49:47.581400 7484 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 12 20:49:47.581625 master-0 kubenswrapper[7484]: W0312 20:49:47.581405 7484 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 12 20:49:47.581625 master-0 kubenswrapper[7484]: W0312 20:49:47.581408 7484 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 12 20:49:47.581625 master-0 kubenswrapper[7484]: W0312 20:49:47.581412 7484 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 12 20:49:47.581625 master-0 kubenswrapper[7484]: W0312 20:49:47.581416 7484 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 12 20:49:47.581625 master-0 kubenswrapper[7484]: W0312 20:49:47.581419 7484 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 12 20:49:47.581625 master-0 kubenswrapper[7484]: W0312 20:49:47.581423 7484 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 12 20:49:47.581625 master-0 kubenswrapper[7484]: W0312 20:49:47.581426 7484 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 12 20:49:47.581625 master-0 kubenswrapper[7484]: W0312 20:49:47.581429 7484 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 12 20:49:47.581625 master-0 kubenswrapper[7484]: W0312 20:49:47.581433 7484 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 12 20:49:47.581625 master-0 kubenswrapper[7484]: W0312 20:49:47.581436 7484 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 12 20:49:47.581625 master-0 kubenswrapper[7484]: W0312 20:49:47.581440 7484 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 12 20:49:47.581625 master-0 kubenswrapper[7484]: W0312 20:49:47.581443 7484 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 12 20:49:47.581625 master-0 kubenswrapper[7484]: W0312 20:49:47.581447 7484 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 12 20:49:47.581625 master-0 kubenswrapper[7484]: W0312 20:49:47.581451 7484 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 12 20:49:47.581625 master-0 kubenswrapper[7484]: W0312 20:49:47.581454 7484 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 12 20:49:47.581625 master-0 kubenswrapper[7484]: W0312 20:49:47.581458 7484 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 12 20:49:47.581625 master-0 kubenswrapper[7484]: W0312 20:49:47.581461 7484 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 12 20:49:47.581625 master-0 kubenswrapper[7484]: W0312 20:49:47.581465 7484 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 12 20:49:47.581625 master-0 kubenswrapper[7484]: W0312 20:49:47.581593 7484 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 12 20:49:47.582302 master-0 kubenswrapper[7484]: W0312 20:49:47.581598 7484 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 12 20:49:47.582302 master-0 kubenswrapper[7484]: W0312 20:49:47.581601 7484 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 12 20:49:47.582302 master-0 kubenswrapper[7484]: W0312 20:49:47.581604 7484 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 12 20:49:47.582302 master-0 kubenswrapper[7484]: W0312 20:49:47.581608 7484 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 12 20:49:47.582302 master-0 kubenswrapper[7484]: W0312 20:49:47.581614 7484 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 12 20:49:47.582302 master-0 kubenswrapper[7484]: W0312 20:49:47.581617 7484 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 12 20:49:47.582302 master-0 kubenswrapper[7484]: W0312 20:49:47.581621 7484 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 12 20:49:47.582302 master-0 kubenswrapper[7484]: W0312 20:49:47.581625 7484 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 12 20:49:47.582302 master-0 kubenswrapper[7484]: W0312 20:49:47.581629 7484 feature_gate.go:330] unrecognized feature gate: Example
Mar 12 20:49:47.582302 master-0 kubenswrapper[7484]: W0312 20:49:47.581634 7484 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 12 20:49:47.582302 master-0 kubenswrapper[7484]: W0312 20:49:47.581640 7484 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 12 20:49:47.582302 master-0 kubenswrapper[7484]: W0312 20:49:47.581646 7484 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 12 20:49:47.582302 master-0 kubenswrapper[7484]: W0312 20:49:47.581652 7484 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 12 20:49:47.582302 master-0 kubenswrapper[7484]: W0312 20:49:47.581657 7484 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 12 20:49:47.582302 master-0 kubenswrapper[7484]: W0312 20:49:47.581663 7484 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 12 20:49:47.582302 master-0 kubenswrapper[7484]: W0312 20:49:47.581669 7484 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 12 20:49:47.582302 master-0 kubenswrapper[7484]: W0312 20:49:47.581674 7484 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 12 20:49:47.582302 master-0 kubenswrapper[7484]: W0312 20:49:47.581678 7484 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 12 20:49:47.582302 master-0 kubenswrapper[7484]: W0312 20:49:47.581682 7484 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 12 20:49:47.582762 master-0 kubenswrapper[7484]: W0312 20:49:47.581686 7484 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 12 20:49:47.582762 master-0 kubenswrapper[7484]: W0312 20:49:47.581690 7484 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 12 20:49:47.582762 master-0 kubenswrapper[7484]: W0312 20:49:47.581693 7484 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 12 20:49:47.582762 master-0 kubenswrapper[7484]: W0312 20:49:47.581697 7484 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 12 20:49:47.582762 master-0 kubenswrapper[7484]: W0312 20:49:47.581701 7484 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 12 20:49:47.582762 master-0 kubenswrapper[7484]: W0312 20:49:47.581704 7484 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 12 20:49:47.582762 master-0 kubenswrapper[7484]: W0312 20:49:47.581708 7484 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 12 20:49:47.582762 master-0 kubenswrapper[7484]: W0312 20:49:47.581711 7484 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 12 20:49:47.582762 master-0 kubenswrapper[7484]: W0312 20:49:47.581715 7484 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 12 20:49:47.582762 master-0 kubenswrapper[7484]: W0312 20:49:47.581718 7484 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 12 20:49:47.582762 master-0 kubenswrapper[7484]: W0312 20:49:47.581723 7484 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 12 20:49:47.582762 master-0 kubenswrapper[7484]: W0312 20:49:47.581727 7484 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 12 20:49:47.582762 master-0 kubenswrapper[7484]: W0312 20:49:47.581749 7484 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 12 20:49:47.582762 master-0 kubenswrapper[7484]: W0312 20:49:47.581753 7484 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 12 20:49:47.582762 master-0 kubenswrapper[7484]: W0312 20:49:47.581756 7484 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 12 20:49:47.582762 master-0 kubenswrapper[7484]: W0312 20:49:47.581760 7484 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 12 20:49:47.582762 master-0 kubenswrapper[7484]: W0312 20:49:47.581763 7484 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 12 20:49:47.582762 master-0 kubenswrapper[7484]: W0312 20:49:47.581767 7484 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 12 20:49:47.582762 master-0 kubenswrapper[7484]: W0312 20:49:47.581771 7484 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 12 20:49:47.582762 master-0 kubenswrapper[7484]: W0312 20:49:47.581775 7484 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 12 20:49:47.583278 master-0 kubenswrapper[7484]: W0312 20:49:47.581778 7484 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 12 20:49:47.583278 master-0 kubenswrapper[7484]: W0312 20:49:47.581782 7484 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 12 20:49:47.583278 master-0 kubenswrapper[7484]: W0312 20:49:47.581785 7484 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 12 20:49:47.583278 master-0 kubenswrapper[7484]: W0312 20:49:47.581789 7484 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 12 20:49:47.583278 master-0 kubenswrapper[7484]: W0312 20:49:47.581792 7484 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 12 20:49:47.583278 master-0 kubenswrapper[7484]: W0312 20:49:47.581796 7484 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 12 20:49:47.583278 master-0 kubenswrapper[7484]: I0312 20:49:47.581917 7484 flags.go:64] FLAG: --address="0.0.0.0"
Mar 12 20:49:47.583278 master-0 kubenswrapper[7484]: I0312 20:49:47.581929 7484 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 12 20:49:47.583278 master-0 kubenswrapper[7484]: I0312 20:49:47.581936 7484 flags.go:64] FLAG: --anonymous-auth="true"
Mar 12 20:49:47.583278 master-0 kubenswrapper[7484]: I0312 20:49:47.581942 7484 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 12 20:49:47.583278 master-0 kubenswrapper[7484]: I0312 20:49:47.581948 7484 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 12 20:49:47.583278 master-0 kubenswrapper[7484]: I0312 20:49:47.581953 7484 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 12 20:49:47.583278 master-0 kubenswrapper[7484]: I0312 20:49:47.581959 7484 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 12 20:49:47.583278 master-0 kubenswrapper[7484]: I0312 20:49:47.581965 7484 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 12 20:49:47.583278 master-0 kubenswrapper[7484]: I0312 20:49:47.581970 7484 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 12 20:49:47.583278 master-0 kubenswrapper[7484]: I0312 20:49:47.581974 7484 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 12 20:49:47.583278 master-0 kubenswrapper[7484]: I0312 20:49:47.581979 7484 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 12 20:49:47.583278 master-0 kubenswrapper[7484]: I0312 20:49:47.581984 7484 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 12 20:49:47.583278 master-0 kubenswrapper[7484]: I0312 20:49:47.581988 7484 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 12 20:49:47.583278 master-0 kubenswrapper[7484]: I0312 20:49:47.581992 7484 flags.go:64] FLAG: --cgroup-root=""
Mar 12 20:49:47.583278 master-0 kubenswrapper[7484]: I0312 20:49:47.581997 7484 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 12 20:49:47.583278 master-0 kubenswrapper[7484]: I0312 20:49:47.582001 7484 flags.go:64] FLAG: --client-ca-file=""
Mar 12 20:49:47.583278 master-0 kubenswrapper[7484]: I0312 20:49:47.582005 7484 flags.go:64] FLAG: --cloud-config=""
Mar 12 20:49:47.583906 master-0 kubenswrapper[7484]: I0312 20:49:47.582009 7484 flags.go:64] FLAG: --cloud-provider=""
Mar 12 20:49:47.583906 master-0 kubenswrapper[7484]: I0312 20:49:47.582050 7484 flags.go:64] FLAG: --cluster-dns="[]"
Mar 12 20:49:47.583906 master-0 kubenswrapper[7484]: I0312 20:49:47.582058 7484 flags.go:64] FLAG: --cluster-domain=""
Mar 12 20:49:47.583906 master-0 kubenswrapper[7484]: I0312 20:49:47.582062 7484 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 12 20:49:47.583906 master-0 kubenswrapper[7484]: I0312 20:49:47.582066 7484 flags.go:64] FLAG: --config-dir=""
Mar 12 20:49:47.583906 master-0 kubenswrapper[7484]: I0312 20:49:47.582070 7484 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 12 20:49:47.583906 master-0 kubenswrapper[7484]: I0312 20:49:47.582075 7484 flags.go:64] FLAG: --container-log-max-files="5"
Mar 12 20:49:47.583906 master-0 kubenswrapper[7484]: I0312 20:49:47.582082 7484 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 12 20:49:47.583906 master-0 kubenswrapper[7484]: I0312 20:49:47.582086 7484 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 12 20:49:47.583906 master-0 kubenswrapper[7484]: I0312 20:49:47.582090 7484 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 12 20:49:47.583906 master-0 kubenswrapper[7484]: I0312 20:49:47.582096 7484 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 12 20:49:47.583906 master-0 kubenswrapper[7484]: I0312 20:49:47.582100 7484 flags.go:64] FLAG: --contention-profiling="false"
Mar 12 20:49:47.583906 master-0 kubenswrapper[7484]: I0312 20:49:47.582106 7484 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 12 20:49:47.583906 master-0 kubenswrapper[7484]: I0312 20:49:47.582111 7484 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 12 20:49:47.583906 master-0 kubenswrapper[7484]: I0312 20:49:47.582116 7484 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 12 20:49:47.583906 master-0 kubenswrapper[7484]: I0312 20:49:47.582121 7484 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 12 20:49:47.583906 master-0 kubenswrapper[7484]: I0312 20:49:47.582128 7484 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 12 20:49:47.583906 master-0 kubenswrapper[7484]: I0312 20:49:47.582133 7484 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 12 20:49:47.583906 master-0 kubenswrapper[7484]: I0312 20:49:47.582137 7484 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 12 20:49:47.583906 master-0 kubenswrapper[7484]: I0312 20:49:47.582141 7484 flags.go:64] FLAG: --enable-load-reader="false"
Mar 12 20:49:47.583906 master-0 kubenswrapper[7484]: I0312 20:49:47.582145 7484 flags.go:64] FLAG: --enable-server="true"
Mar 12 20:49:47.583906 master-0 kubenswrapper[7484]: I0312 20:49:47.582149 7484 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 12 20:49:47.583906 master-0 kubenswrapper[7484]: I0312 20:49:47.582154 7484 flags.go:64] FLAG: --event-burst="100"
Mar 12 20:49:47.583906 master-0 kubenswrapper[7484]: I0312 20:49:47.582159 7484 flags.go:64] FLAG: --event-qps="50"
Mar 12 20:49:47.583906 master-0 kubenswrapper[7484]: I0312 20:49:47.582162 7484 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 12 20:49:47.584477 master-0 kubenswrapper[7484]: I0312 20:49:47.582166 7484 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 12 20:49:47.584477 master-0 kubenswrapper[7484]: I0312 20:49:47.582171 7484 flags.go:64] FLAG: --eviction-hard=""
Mar 12 20:49:47.584477 master-0 kubenswrapper[7484]: I0312 20:49:47.582176 7484 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 12 20:49:47.584477 master-0 kubenswrapper[7484]: I0312 20:49:47.582180 7484 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 12 20:49:47.584477 master-0 kubenswrapper[7484]: I0312 20:49:47.582186 7484 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 12 20:49:47.584477 master-0 kubenswrapper[7484]: I0312 20:49:47.582191 7484 flags.go:64] FLAG: --eviction-soft=""
Mar 12 20:49:47.584477 master-0 kubenswrapper[7484]: I0312 20:49:47.582196 7484 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 12 20:49:47.584477 master-0 kubenswrapper[7484]: I0312 20:49:47.582200 7484 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 12 20:49:47.584477 master-0 kubenswrapper[7484]: I0312 20:49:47.582204 7484 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 12 20:49:47.584477 master-0 kubenswrapper[7484]: I0312 20:49:47.582208 7484 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 12 20:49:47.584477 master-0 kubenswrapper[7484]: I0312 20:49:47.582213 7484 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 12 20:49:47.584477 master-0 kubenswrapper[7484]: I0312 20:49:47.582217 7484 flags.go:64] FLAG: --fail-swap-on="true"
Mar 12 20:49:47.584477 master-0 kubenswrapper[7484]: I0312 20:49:47.582221 7484 flags.go:64] FLAG: --feature-gates=""
Mar 12 20:49:47.584477 master-0 kubenswrapper[7484]: I0312 20:49:47.582226 7484 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 12 20:49:47.584477 master-0 kubenswrapper[7484]: I0312 20:49:47.582231 7484 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 12 20:49:47.584477 master-0 kubenswrapper[7484]: I0312 20:49:47.582235 7484 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 12 20:49:47.584477 master-0 kubenswrapper[7484]: I0312 20:49:47.582240 7484 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 12 20:49:47.584477 master-0 kubenswrapper[7484]: I0312 20:49:47.582244 7484 flags.go:64] FLAG: --healthz-port="10248"
Mar 12 20:49:47.584477 master-0 kubenswrapper[7484]: I0312 20:49:47.582248 7484 flags.go:64] FLAG: --help="false"
Mar 12 20:49:47.584477 master-0 kubenswrapper[7484]: I0312 20:49:47.582252 7484 flags.go:64] FLAG: --hostname-override=""
Mar 12 20:49:47.584477 master-0 kubenswrapper[7484]: I0312 20:49:47.582256 7484 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 12 20:49:47.584477 master-0 kubenswrapper[7484]: I0312 20:49:47.582261 7484 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 12 20:49:47.584477 master-0 kubenswrapper[7484]: I0312 20:49:47.582265 7484 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 12 20:49:47.584477 master-0 kubenswrapper[7484]: I0312 20:49:47.582269 7484 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 12 20:49:47.584477 master-0 kubenswrapper[7484]: I0312 20:49:47.582273 7484 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 12 20:49:47.585492 master-0 kubenswrapper[7484]: I0312 20:49:47.582277 7484 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 12 20:49:47.585492 master-0 kubenswrapper[7484]: I0312 20:49:47.582281 7484 flags.go:64] FLAG: --image-service-endpoint=""
Mar 12 20:49:47.585492 master-0 kubenswrapper[7484]: I0312 20:49:47.582285 7484 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 12 20:49:47.585492 master-0 kubenswrapper[7484]: I0312 20:49:47.582289 7484 flags.go:64] FLAG: --kube-api-burst="100"
Mar 12 20:49:47.585492 master-0 kubenswrapper[7484]: I0312 20:49:47.582294 7484 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 12 20:49:47.585492 master-0 kubenswrapper[7484]: I0312 20:49:47.582299 7484 flags.go:64] FLAG: --kube-api-qps="50"
Mar 12 20:49:47.585492 master-0 kubenswrapper[7484]: I0312 20:49:47.582303 7484 flags.go:64] FLAG: --kube-reserved=""
Mar 12 20:49:47.585492 master-0 kubenswrapper[7484]: I0312 20:49:47.582307 7484 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 12 20:49:47.585492 master-0 kubenswrapper[7484]: I0312 20:49:47.582312 7484 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 12 20:49:47.585492 master-0 kubenswrapper[7484]: I0312 20:49:47.582317 7484 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 12 20:49:47.585492 master-0 kubenswrapper[7484]: I0312 20:49:47.582321 7484 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 12 20:49:47.585492 master-0 kubenswrapper[7484]: I0312 20:49:47.582326 7484 flags.go:64] FLAG: --lock-file=""
Mar 12 20:49:47.585492 master-0 kubenswrapper[7484]: I0312 20:49:47.582332 7484 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 12 20:49:47.585492 master-0 kubenswrapper[7484]: I0312 20:49:47.582336 7484 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 12 20:49:47.585492 master-0 kubenswrapper[7484]: I0312 20:49:47.582340 7484 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 12 20:49:47.585492 master-0 kubenswrapper[7484]: I0312 20:49:47.582348 7484 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 12 20:49:47.585492 master-0 kubenswrapper[7484]: I0312 20:49:47.582353 7484 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 12 20:49:47.585492 master-0 kubenswrapper[7484]: I0312 20:49:47.582357 7484 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 12 20:49:47.585492 master-0 kubenswrapper[7484]: I0312 20:49:47.582362 7484 flags.go:64] FLAG: --logging-format="text"
Mar 12 20:49:47.585492 master-0 kubenswrapper[7484]: I0312 20:49:47.582366 7484 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 12 20:49:47.585492 master-0 kubenswrapper[7484]: I0312 20:49:47.582371 7484 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 12 20:49:47.585492 master-0 kubenswrapper[7484]: I0312 20:49:47.582375 7484 flags.go:64] FLAG: --manifest-url=""
Mar 12 20:49:47.585492 master-0 kubenswrapper[7484]: I0312 20:49:47.582379 7484 flags.go:64] FLAG: --manifest-url-header=""
Mar 12 20:49:47.585492 master-0 kubenswrapper[7484]: I0312 20:49:47.582386 7484 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 12 20:49:47.585492 master-0 kubenswrapper[7484]: I0312 20:49:47.582390 7484 flags.go:64] FLAG: --max-open-files="1000000"
Mar 12 20:49:47.586099 master-0 kubenswrapper[7484]: I0312 20:49:47.582396 7484 flags.go:64] FLAG: --max-pods="110"
Mar 12 20:49:47.586099 master-0 kubenswrapper[7484]: I0312 20:49:47.582456 7484 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 12 20:49:47.586099 master-0 kubenswrapper[7484]: I0312 20:49:47.582465 7484 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 12 20:49:47.586099 master-0 kubenswrapper[7484]: I0312 20:49:47.582470 7484 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 12 20:49:47.586099 master-0 kubenswrapper[7484]: I0312 20:49:47.582476 7484 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 12 20:49:47.586099 master-0 kubenswrapper[7484]: I0312 20:49:47.582482 7484 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 12 20:49:47.586099 master-0 kubenswrapper[7484]: I0312 20:49:47.582486 7484 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 12 20:49:47.586099 master-0 kubenswrapper[7484]: I0312 20:49:47.582491 7484 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 12 20:49:47.586099 master-0 kubenswrapper[7484]: I0312 20:49:47.582536 7484 flags.go:64] FLAG: --node-status-max-images="50"
Mar 12 20:49:47.586099 master-0 kubenswrapper[7484]: I0312 20:49:47.582542 7484 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 12 20:49:47.586099 master-0 kubenswrapper[7484]: I0312 20:49:47.582548 7484 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 12 20:49:47.586099 master-0 kubenswrapper[7484]: I0312 20:49:47.582554 7484 flags.go:64] FLAG: --pod-cidr=""
Mar 12 20:49:47.586099 master-0 kubenswrapper[7484]: I0312 20:49:47.582559 7484 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3"
Mar 12 20:49:47.586099 master-0 kubenswrapper[7484]: I0312 20:49:47.582568 7484 flags.go:64] FLAG: --pod-manifest-path=""
Mar 12 20:49:47.586099 master-0 kubenswrapper[7484]: I0312 20:49:47.582574 7484 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 12 20:49:47.586099 master-0 kubenswrapper[7484]: I0312 20:49:47.582579 7484 flags.go:64] FLAG: --pods-per-core="0"
Mar 12 20:49:47.586099 master-0 kubenswrapper[7484]: I0312 20:49:47.582584 7484 flags.go:64] FLAG: --port="10250"
Mar 12 20:49:47.586099 master-0 kubenswrapper[7484]: I0312 20:49:47.582649 7484 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 12 20:49:47.586099 master-0 kubenswrapper[7484]: I0312 20:49:47.582658 7484 flags.go:64] FLAG: --provider-id=""
Mar 12 20:49:47.586099 master-0 kubenswrapper[7484]: I0312 20:49:47.582664 7484 flags.go:64] FLAG: --qos-reserved=""
Mar 12 20:49:47.586099 master-0 kubenswrapper[7484]: I0312 20:49:47.582670 7484 flags.go:64] FLAG: --read-only-port="10255"
Mar 12 20:49:47.586099 master-0 kubenswrapper[7484]: I0312 20:49:47.582676 7484 flags.go:64] FLAG: --register-node="true"
Mar 12 20:49:47.586099 master-0 kubenswrapper[7484]: I0312 20:49:47.582703 7484 flags.go:64] FLAG: --register-schedulable="true"
Mar 12 20:49:47.586691 master-0 kubenswrapper[7484]: I0312 20:49:47.582709 7484 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 12 20:49:47.586691 master-0 kubenswrapper[7484]: I0312 20:49:47.582718 7484 flags.go:64] FLAG: --registry-burst="10"
Mar 12 20:49:47.586691 master-0 kubenswrapper[7484]: I0312 20:49:47.582722 7484 flags.go:64] FLAG: --registry-qps="5"
Mar 12 20:49:47.586691 master-0 kubenswrapper[7484]: I0312 20:49:47.582729 7484 flags.go:64] FLAG: --reserved-cpus=""
Mar 12 20:49:47.586691 master-0 kubenswrapper[7484]: I0312 20:49:47.582734 7484 flags.go:64] FLAG: --reserved-memory=""
Mar 12 20:49:47.586691 master-0 kubenswrapper[7484]: I0312 20:49:47.582741 7484 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 12 20:49:47.586691 master-0 kubenswrapper[7484]: I0312 20:49:47.582747 7484 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 12 20:49:47.586691 master-0 kubenswrapper[7484]: I0312 20:49:47.582753 7484 flags.go:64] FLAG: --rotate-certificates="false"
Mar 12 20:49:47.586691 master-0 kubenswrapper[7484]: I0312 20:49:47.582758 7484 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 12 20:49:47.586691 master-0 kubenswrapper[7484]: I0312 20:49:47.582784 7484 flags.go:64] FLAG: --runonce="false"
Mar 12 20:49:47.586691 master-0 kubenswrapper[7484]: I0312 20:49:47.582791 7484 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 12 20:49:47.586691 master-0 kubenswrapper[7484]: I0312 20:49:47.582797 7484 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 12 20:49:47.586691 master-0 kubenswrapper[7484]: I0312 20:49:47.582849 7484 flags.go:64] FLAG: --seccomp-default="false"
Mar 12 20:49:47.586691 master-0 kubenswrapper[7484]: I0312 20:49:47.582857 7484 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 12 20:49:47.586691 master-0 kubenswrapper[7484]: I0312 20:49:47.582862 7484 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 12 20:49:47.586691 master-0 kubenswrapper[7484]: I0312 20:49:47.582869 7484 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 12 20:49:47.586691 master-0 kubenswrapper[7484]: I0312 20:49:47.582875 7484 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 12 20:49:47.586691 master-0 kubenswrapper[7484]: I0312 20:49:47.582880 7484 flags.go:64] FLAG: --storage-driver-password="root"
Mar 12 20:49:47.586691 master-0 kubenswrapper[7484]: I0312 20:49:47.582889 7484 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 12 20:49:47.586691 master-0 kubenswrapper[7484]: I0312 20:49:47.582894 7484 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 12 20:49:47.586691 master-0 kubenswrapper[7484]: I0312 20:49:47.582899 7484 flags.go:64] FLAG: --storage-driver-user="root" Mar 12 20:49:47.586691 master-0 kubenswrapper[7484]: I0312 20:49:47.582905 7484 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Mar 12 20:49:47.586691 master-0 kubenswrapper[7484]: I0312 20:49:47.582910 7484 flags.go:64] FLAG: --sync-frequency="1m0s" Mar 12 20:49:47.586691 master-0 kubenswrapper[7484]: I0312 20:49:47.582916 7484 flags.go:64] FLAG: --system-cgroups="" Mar 12 20:49:47.586691 master-0 kubenswrapper[7484]: I0312 20:49:47.582921 7484 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Mar 12 20:49:47.587373 master-0 kubenswrapper[7484]: I0312 20:49:47.582930 7484 flags.go:64] FLAG: --system-reserved-cgroup="" Mar 12 20:49:47.587373 master-0 kubenswrapper[7484]: I0312 20:49:47.582935 7484 flags.go:64] FLAG: --tls-cert-file="" Mar 12 20:49:47.587373 master-0 kubenswrapper[7484]: I0312 20:49:47.582941 7484 flags.go:64] FLAG: --tls-cipher-suites="[]" Mar 12 20:49:47.587373 master-0 kubenswrapper[7484]: I0312 20:49:47.582949 7484 flags.go:64] FLAG: --tls-min-version="" Mar 12 20:49:47.587373 master-0 kubenswrapper[7484]: I0312 20:49:47.582954 7484 flags.go:64] FLAG: --tls-private-key-file="" Mar 12 20:49:47.587373 master-0 kubenswrapper[7484]: I0312 20:49:47.582959 7484 flags.go:64] FLAG: --topology-manager-policy="none" Mar 12 20:49:47.587373 master-0 kubenswrapper[7484]: I0312 20:49:47.582964 7484 flags.go:64] FLAG: --topology-manager-policy-options="" Mar 12 20:49:47.587373 master-0 kubenswrapper[7484]: I0312 20:49:47.582994 7484 flags.go:64] FLAG: --topology-manager-scope="container" Mar 12 20:49:47.587373 master-0 kubenswrapper[7484]: I0312 20:49:47.583001 7484 flags.go:64] FLAG: --v="2" Mar 12 20:49:47.587373 master-0 kubenswrapper[7484]: I0312 20:49:47.583009 7484 flags.go:64] FLAG: --version="false" Mar 12 20:49:47.587373 master-0 kubenswrapper[7484]: I0312 20:49:47.583016 7484 
flags.go:64] FLAG: --vmodule="" Mar 12 20:49:47.587373 master-0 kubenswrapper[7484]: I0312 20:49:47.583023 7484 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Mar 12 20:49:47.587373 master-0 kubenswrapper[7484]: I0312 20:49:47.583029 7484 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Mar 12 20:49:47.587373 master-0 kubenswrapper[7484]: W0312 20:49:47.583166 7484 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 12 20:49:47.587373 master-0 kubenswrapper[7484]: W0312 20:49:47.583173 7484 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 12 20:49:47.587373 master-0 kubenswrapper[7484]: W0312 20:49:47.583178 7484 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 12 20:49:47.587373 master-0 kubenswrapper[7484]: W0312 20:49:47.583183 7484 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 12 20:49:47.587373 master-0 kubenswrapper[7484]: W0312 20:49:47.583187 7484 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 12 20:49:47.587373 master-0 kubenswrapper[7484]: W0312 20:49:47.583220 7484 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 12 20:49:47.587373 master-0 kubenswrapper[7484]: W0312 20:49:47.583225 7484 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 12 20:49:47.587373 master-0 kubenswrapper[7484]: W0312 20:49:47.583229 7484 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 12 20:49:47.587373 master-0 kubenswrapper[7484]: W0312 20:49:47.583233 7484 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 12 20:49:47.587373 master-0 kubenswrapper[7484]: W0312 20:49:47.583238 7484 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 12 20:49:47.587974 master-0 kubenswrapper[7484]: W0312 20:49:47.583243 7484 feature_gate.go:330] unrecognized feature gate: 
IngressControllerDynamicConfigurationManager Mar 12 20:49:47.587974 master-0 kubenswrapper[7484]: W0312 20:49:47.583247 7484 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 12 20:49:47.587974 master-0 kubenswrapper[7484]: W0312 20:49:47.583254 7484 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 12 20:49:47.587974 master-0 kubenswrapper[7484]: W0312 20:49:47.583259 7484 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 12 20:49:47.587974 master-0 kubenswrapper[7484]: W0312 20:49:47.583265 7484 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 12 20:49:47.587974 master-0 kubenswrapper[7484]: W0312 20:49:47.583271 7484 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 12 20:49:47.587974 master-0 kubenswrapper[7484]: W0312 20:49:47.583277 7484 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 12 20:49:47.587974 master-0 kubenswrapper[7484]: W0312 20:49:47.583282 7484 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 12 20:49:47.587974 master-0 kubenswrapper[7484]: W0312 20:49:47.583287 7484 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 12 20:49:47.587974 master-0 kubenswrapper[7484]: W0312 20:49:47.583292 7484 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 12 20:49:47.587974 master-0 kubenswrapper[7484]: W0312 20:49:47.583297 7484 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 12 20:49:47.587974 master-0 kubenswrapper[7484]: W0312 20:49:47.583302 7484 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 12 20:49:47.587974 master-0 kubenswrapper[7484]: W0312 20:49:47.583307 7484 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 12 20:49:47.587974 master-0 kubenswrapper[7484]: W0312 20:49:47.583312 
7484 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 12 20:49:47.587974 master-0 kubenswrapper[7484]: W0312 20:49:47.583318 7484 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 12 20:49:47.587974 master-0 kubenswrapper[7484]: W0312 20:49:47.583323 7484 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 12 20:49:47.587974 master-0 kubenswrapper[7484]: W0312 20:49:47.583327 7484 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 12 20:49:47.587974 master-0 kubenswrapper[7484]: W0312 20:49:47.583335 7484 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 12 20:49:47.587974 master-0 kubenswrapper[7484]: W0312 20:49:47.583339 7484 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 12 20:49:47.588470 master-0 kubenswrapper[7484]: W0312 20:49:47.583344 7484 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 12 20:49:47.588470 master-0 kubenswrapper[7484]: W0312 20:49:47.583348 7484 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 12 20:49:47.588470 master-0 kubenswrapper[7484]: W0312 20:49:47.583352 7484 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 12 20:49:47.588470 master-0 kubenswrapper[7484]: W0312 20:49:47.583358 7484 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 12 20:49:47.588470 master-0 kubenswrapper[7484]: W0312 20:49:47.583364 7484 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 12 20:49:47.588470 master-0 kubenswrapper[7484]: W0312 20:49:47.583369 7484 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 12 20:49:47.588470 master-0 kubenswrapper[7484]: W0312 20:49:47.583374 7484 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 12 20:49:47.588470 master-0 kubenswrapper[7484]: W0312 20:49:47.583378 7484 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 12 20:49:47.588470 master-0 kubenswrapper[7484]: W0312 20:49:47.583384 7484 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 12 20:49:47.588470 master-0 kubenswrapper[7484]: W0312 20:49:47.583388 7484 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 12 20:49:47.588470 master-0 kubenswrapper[7484]: W0312 20:49:47.583393 7484 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 12 20:49:47.588470 master-0 kubenswrapper[7484]: W0312 20:49:47.583397 7484 feature_gate.go:330] unrecognized feature gate: Example Mar 12 20:49:47.588470 master-0 kubenswrapper[7484]: W0312 20:49:47.583401 7484 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 12 20:49:47.588470 master-0 kubenswrapper[7484]: W0312 20:49:47.583406 7484 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 12 20:49:47.588470 master-0 kubenswrapper[7484]: W0312 20:49:47.583411 7484 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 12 20:49:47.588470 master-0 kubenswrapper[7484]: W0312 20:49:47.583420 7484 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 12 20:49:47.588470 master-0 kubenswrapper[7484]: W0312 20:49:47.583424 7484 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 12 20:49:47.588470 master-0 kubenswrapper[7484]: W0312 20:49:47.583429 7484 
feature_gate.go:330] unrecognized feature gate: NewOLM Mar 12 20:49:47.588470 master-0 kubenswrapper[7484]: W0312 20:49:47.583433 7484 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 12 20:49:47.588470 master-0 kubenswrapper[7484]: W0312 20:49:47.583438 7484 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 12 20:49:47.588953 master-0 kubenswrapper[7484]: W0312 20:49:47.583443 7484 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 12 20:49:47.588953 master-0 kubenswrapper[7484]: W0312 20:49:47.583447 7484 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 12 20:49:47.588953 master-0 kubenswrapper[7484]: W0312 20:49:47.583451 7484 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 12 20:49:47.588953 master-0 kubenswrapper[7484]: W0312 20:49:47.583456 7484 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 12 20:49:47.588953 master-0 kubenswrapper[7484]: W0312 20:49:47.583460 7484 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 12 20:49:47.588953 master-0 kubenswrapper[7484]: W0312 20:49:47.583465 7484 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 12 20:49:47.588953 master-0 kubenswrapper[7484]: W0312 20:49:47.583470 7484 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 12 20:49:47.588953 master-0 kubenswrapper[7484]: W0312 20:49:47.583474 7484 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 12 20:49:47.588953 master-0 kubenswrapper[7484]: W0312 20:49:47.583479 7484 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 12 20:49:47.588953 master-0 kubenswrapper[7484]: W0312 20:49:47.583483 7484 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 12 20:49:47.588953 master-0 kubenswrapper[7484]: W0312 20:49:47.583490 7484 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 12 
20:49:47.588953 master-0 kubenswrapper[7484]: W0312 20:49:47.583495 7484 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 12 20:49:47.588953 master-0 kubenswrapper[7484]: W0312 20:49:47.583500 7484 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 12 20:49:47.588953 master-0 kubenswrapper[7484]: W0312 20:49:47.583504 7484 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 12 20:49:47.588953 master-0 kubenswrapper[7484]: W0312 20:49:47.583508 7484 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 12 20:49:47.588953 master-0 kubenswrapper[7484]: W0312 20:49:47.583514 7484 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 12 20:49:47.588953 master-0 kubenswrapper[7484]: W0312 20:49:47.583520 7484 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 12 20:49:47.588953 master-0 kubenswrapper[7484]: W0312 20:49:47.583525 7484 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 12 20:49:47.588953 master-0 kubenswrapper[7484]: W0312 20:49:47.583530 7484 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 12 20:49:47.588953 master-0 kubenswrapper[7484]: W0312 20:49:47.583534 7484 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 12 20:49:47.589425 master-0 kubenswrapper[7484]: W0312 20:49:47.583538 7484 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 12 20:49:47.589425 master-0 kubenswrapper[7484]: W0312 20:49:47.583543 7484 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 12 20:49:47.589425 master-0 kubenswrapper[7484]: W0312 20:49:47.583547 7484 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 12 20:49:47.589425 master-0 kubenswrapper[7484]: I0312 20:49:47.583564 7484 feature_gate.go:386] feature gates: 
{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 12 20:49:47.593276 master-0 kubenswrapper[7484]: I0312 20:49:47.593212 7484 server.go:491] "Kubelet version" kubeletVersion="v1.31.14" Mar 12 20:49:47.593276 master-0 kubenswrapper[7484]: I0312 20:49:47.593271 7484 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 12 20:49:47.593410 master-0 kubenswrapper[7484]: W0312 20:49:47.593381 7484 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 12 20:49:47.593410 master-0 kubenswrapper[7484]: W0312 20:49:47.593403 7484 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 12 20:49:47.593467 master-0 kubenswrapper[7484]: W0312 20:49:47.593411 7484 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 12 20:49:47.593467 master-0 kubenswrapper[7484]: W0312 20:49:47.593421 7484 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 12 20:49:47.593467 master-0 kubenswrapper[7484]: W0312 20:49:47.593428 7484 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 12 20:49:47.593467 master-0 kubenswrapper[7484]: W0312 20:49:47.593436 7484 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 12 20:49:47.593467 master-0 kubenswrapper[7484]: W0312 20:49:47.593443 7484 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 12 20:49:47.593467 master-0 kubenswrapper[7484]: W0312 20:49:47.593449 7484 feature_gate.go:330] unrecognized feature gate: 
VSphereControlPlaneMachineSet Mar 12 20:49:47.593467 master-0 kubenswrapper[7484]: W0312 20:49:47.593456 7484 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 12 20:49:47.593467 master-0 kubenswrapper[7484]: W0312 20:49:47.593463 7484 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 12 20:49:47.593467 master-0 kubenswrapper[7484]: W0312 20:49:47.593470 7484 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 12 20:49:47.593741 master-0 kubenswrapper[7484]: W0312 20:49:47.593478 7484 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 12 20:49:47.593741 master-0 kubenswrapper[7484]: W0312 20:49:47.593485 7484 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 12 20:49:47.593741 master-0 kubenswrapper[7484]: W0312 20:49:47.593492 7484 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 12 20:49:47.593741 master-0 kubenswrapper[7484]: W0312 20:49:47.593499 7484 feature_gate.go:330] unrecognized feature gate: Example Mar 12 20:49:47.593741 master-0 kubenswrapper[7484]: W0312 20:49:47.593506 7484 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 12 20:49:47.593741 master-0 kubenswrapper[7484]: W0312 20:49:47.593512 7484 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 12 20:49:47.593741 master-0 kubenswrapper[7484]: W0312 20:49:47.593519 7484 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 12 20:49:47.593741 master-0 kubenswrapper[7484]: W0312 20:49:47.593528 7484 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 12 20:49:47.593741 master-0 kubenswrapper[7484]: W0312 20:49:47.593539 7484 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 12 20:49:47.593741 master-0 kubenswrapper[7484]: W0312 20:49:47.593547 7484 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 12 20:49:47.593741 master-0 kubenswrapper[7484]: W0312 20:49:47.593555 7484 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 12 20:49:47.593741 master-0 kubenswrapper[7484]: W0312 20:49:47.593562 7484 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 12 20:49:47.593741 master-0 kubenswrapper[7484]: W0312 20:49:47.593570 7484 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 12 20:49:47.593741 master-0 kubenswrapper[7484]: W0312 20:49:47.593577 7484 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 12 20:49:47.593741 master-0 kubenswrapper[7484]: W0312 20:49:47.593585 7484 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 12 20:49:47.593741 master-0 kubenswrapper[7484]: W0312 20:49:47.593591 7484 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 12 20:49:47.593741 master-0 kubenswrapper[7484]: W0312 20:49:47.593598 7484 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 12 20:49:47.593741 master-0 kubenswrapper[7484]: W0312 20:49:47.593605 7484 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 12 20:49:47.593741 master-0 kubenswrapper[7484]: W0312 20:49:47.593615 7484 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 12 20:49:47.594217 master-0 kubenswrapper[7484]: W0312 20:49:47.593622 7484 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 12 20:49:47.594217 master-0 kubenswrapper[7484]: W0312 20:49:47.593630 7484 feature_gate.go:330] unrecognized feature gate: 
NetworkSegmentation Mar 12 20:49:47.594217 master-0 kubenswrapper[7484]: W0312 20:49:47.593637 7484 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 12 20:49:47.594217 master-0 kubenswrapper[7484]: W0312 20:49:47.593644 7484 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 12 20:49:47.594217 master-0 kubenswrapper[7484]: W0312 20:49:47.593651 7484 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 12 20:49:47.594217 master-0 kubenswrapper[7484]: W0312 20:49:47.593659 7484 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 12 20:49:47.594217 master-0 kubenswrapper[7484]: W0312 20:49:47.593666 7484 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 12 20:49:47.594217 master-0 kubenswrapper[7484]: W0312 20:49:47.593674 7484 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 12 20:49:47.594217 master-0 kubenswrapper[7484]: W0312 20:49:47.593681 7484 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 12 20:49:47.594217 master-0 kubenswrapper[7484]: W0312 20:49:47.593687 7484 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 12 20:49:47.594217 master-0 kubenswrapper[7484]: W0312 20:49:47.593695 7484 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 12 20:49:47.594217 master-0 kubenswrapper[7484]: W0312 20:49:47.593704 7484 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 12 20:49:47.594217 master-0 kubenswrapper[7484]: W0312 20:49:47.593713 7484 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 12 20:49:47.594217 master-0 kubenswrapper[7484]: W0312 20:49:47.593721 7484 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 12 20:49:47.594217 master-0 kubenswrapper[7484]: W0312 20:49:47.593728 7484 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 12 20:49:47.594217 master-0 kubenswrapper[7484]: W0312 20:49:47.593735 7484 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 12 20:49:47.594217 master-0 kubenswrapper[7484]: W0312 20:49:47.593743 7484 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 12 20:49:47.594217 master-0 kubenswrapper[7484]: W0312 20:49:47.593750 7484 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 12 20:49:47.594217 master-0 kubenswrapper[7484]: W0312 20:49:47.593760 7484 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 12 20:49:47.594668 master-0 kubenswrapper[7484]: W0312 20:49:47.593769 7484 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 12 20:49:47.594668 master-0 kubenswrapper[7484]: W0312 20:49:47.593777 7484 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 12 20:49:47.594668 master-0 kubenswrapper[7484]: W0312 20:49:47.593785 7484 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 12 20:49:47.594668 master-0 kubenswrapper[7484]: W0312 20:49:47.593792 7484 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 12 20:49:47.594668 master-0 kubenswrapper[7484]: W0312 20:49:47.593798 7484 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 12 20:49:47.594668 master-0 kubenswrapper[7484]: W0312 20:49:47.593828 7484 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 12 20:49:47.594668 master-0 kubenswrapper[7484]: W0312 20:49:47.593837 7484 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 12 20:49:47.594668 master-0 kubenswrapper[7484]: W0312 20:49:47.593844 7484 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 12 20:49:47.594668 master-0 kubenswrapper[7484]: W0312 20:49:47.593851 7484 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 12 20:49:47.594668 master-0 kubenswrapper[7484]: W0312 20:49:47.593858 7484 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 12 20:49:47.594668 master-0 kubenswrapper[7484]: W0312 20:49:47.593864 7484 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 12 20:49:47.594668 master-0 kubenswrapper[7484]: W0312 20:49:47.593870 7484 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 12 20:49:47.594668 master-0 kubenswrapper[7484]: W0312 20:49:47.593877 7484 feature_gate.go:330] unrecognized feature gate: 
PrivateHostedZoneAWS Mar 12 20:49:47.594668 master-0 kubenswrapper[7484]: W0312 20:49:47.593884 7484 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 12 20:49:47.594668 master-0 kubenswrapper[7484]: W0312 20:49:47.593891 7484 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 12 20:49:47.594668 master-0 kubenswrapper[7484]: W0312 20:49:47.593898 7484 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 12 20:49:47.594668 master-0 kubenswrapper[7484]: W0312 20:49:47.593906 7484 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 12 20:49:47.594668 master-0 kubenswrapper[7484]: W0312 20:49:47.593913 7484 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 12 20:49:47.594668 master-0 kubenswrapper[7484]: W0312 20:49:47.593920 7484 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 12 20:49:47.594668 master-0 kubenswrapper[7484]: W0312 20:49:47.593928 7484 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 12 20:49:47.595172 master-0 kubenswrapper[7484]: W0312 20:49:47.593935 7484 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 12 20:49:47.595172 master-0 kubenswrapper[7484]: W0312 20:49:47.593942 7484 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 12 20:49:47.595172 master-0 kubenswrapper[7484]: W0312 20:49:47.593949 7484 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 12 20:49:47.595172 master-0 kubenswrapper[7484]: I0312 20:49:47.593960 7484 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false 
UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 12 20:49:47.595172 master-0 kubenswrapper[7484]: W0312 20:49:47.594158 7484 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 12 20:49:47.595172 master-0 kubenswrapper[7484]: W0312 20:49:47.594172 7484 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 12 20:49:47.595172 master-0 kubenswrapper[7484]: W0312 20:49:47.594181 7484 feature_gate.go:330] unrecognized feature gate: Example
Mar 12 20:49:47.595172 master-0 kubenswrapper[7484]: W0312 20:49:47.594190 7484 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 12 20:49:47.595172 master-0 kubenswrapper[7484]: W0312 20:49:47.594198 7484 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 12 20:49:47.595172 master-0 kubenswrapper[7484]: W0312 20:49:47.594205 7484 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 12 20:49:47.595172 master-0 kubenswrapper[7484]: W0312 20:49:47.594212 7484 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 12 20:49:47.595172 master-0 kubenswrapper[7484]: W0312 20:49:47.594219 7484 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 12 20:49:47.595172 master-0 kubenswrapper[7484]: W0312 20:49:47.594228 7484 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 12 20:49:47.595172 master-0 kubenswrapper[7484]: W0312 20:49:47.594236 7484 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 12 20:49:47.595172 master-0 kubenswrapper[7484]: W0312 20:49:47.594243 7484 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 12 20:49:47.595541 master-0 kubenswrapper[7484]: W0312 20:49:47.594250 7484 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 12 20:49:47.595541 master-0 kubenswrapper[7484]: W0312 20:49:47.594257 7484 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 12 20:49:47.595541 master-0 kubenswrapper[7484]: W0312 20:49:47.594265 7484 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 12 20:49:47.595541 master-0 kubenswrapper[7484]: W0312 20:49:47.594271 7484 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 12 20:49:47.595541 master-0 kubenswrapper[7484]: W0312 20:49:47.594278 7484 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 12 20:49:47.595541 master-0 kubenswrapper[7484]: W0312 20:49:47.594285 7484 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 12 20:49:47.595541 master-0 kubenswrapper[7484]: W0312 20:49:47.594294 7484 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 12 20:49:47.595541 master-0 kubenswrapper[7484]: W0312 20:49:47.594302 7484 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 12 20:49:47.595541 master-0 kubenswrapper[7484]: W0312 20:49:47.594311 7484 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 12 20:49:47.595541 master-0 kubenswrapper[7484]: W0312 20:49:47.594317 7484 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 12 20:49:47.595541 master-0 kubenswrapper[7484]: W0312 20:49:47.594325 7484 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 12 20:49:47.595541 master-0 kubenswrapper[7484]: W0312 20:49:47.594331 7484 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 12 20:49:47.595541 master-0 kubenswrapper[7484]: W0312 20:49:47.594338 7484 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 12 20:49:47.595541 master-0 kubenswrapper[7484]: W0312 20:49:47.594346 7484 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 12 20:49:47.595541 master-0 kubenswrapper[7484]: W0312 20:49:47.594353 7484 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 12 20:49:47.595541 master-0 kubenswrapper[7484]: W0312 20:49:47.594361 7484 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 12 20:49:47.595541 master-0 kubenswrapper[7484]: W0312 20:49:47.594370 7484 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 12 20:49:47.595541 master-0 kubenswrapper[7484]: W0312 20:49:47.594378 7484 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 12 20:49:47.595541 master-0 kubenswrapper[7484]: W0312 20:49:47.594386 7484 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 12 20:49:47.596029 master-0 kubenswrapper[7484]: W0312 20:49:47.594392 7484 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 12 20:49:47.596029 master-0 kubenswrapper[7484]: W0312 20:49:47.594399 7484 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 12 20:49:47.596029 master-0 kubenswrapper[7484]: W0312 20:49:47.594407 7484 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 12 20:49:47.596029 master-0 kubenswrapper[7484]: W0312 20:49:47.594415 7484 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 12 20:49:47.596029 master-0 kubenswrapper[7484]: W0312 20:49:47.594423 7484 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 12 20:49:47.596029 master-0 kubenswrapper[7484]: W0312 20:49:47.594430 7484 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 12 20:49:47.596029 master-0 kubenswrapper[7484]: W0312 20:49:47.594437 7484 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 12 20:49:47.596029 master-0 kubenswrapper[7484]: W0312 20:49:47.594444 7484 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 12 20:49:47.596029 master-0 kubenswrapper[7484]: W0312 20:49:47.594450 7484 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 12 20:49:47.596029 master-0 kubenswrapper[7484]: W0312 20:49:47.594457 7484 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 12 20:49:47.596029 master-0 kubenswrapper[7484]: W0312 20:49:47.594463 7484 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 12 20:49:47.596029 master-0 kubenswrapper[7484]: W0312 20:49:47.594470 7484 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 12 20:49:47.596029 master-0 kubenswrapper[7484]: W0312 20:49:47.594477 7484 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 12 20:49:47.596029 master-0 kubenswrapper[7484]: W0312 20:49:47.594484 7484 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 12 20:49:47.596029 master-0 kubenswrapper[7484]: W0312 20:49:47.594490 7484 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 12 20:49:47.596029 master-0 kubenswrapper[7484]: W0312 20:49:47.594500 7484 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 12 20:49:47.596029 master-0 kubenswrapper[7484]: W0312 20:49:47.594507 7484 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 12 20:49:47.596029 master-0 kubenswrapper[7484]: W0312 20:49:47.594514 7484 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 12 20:49:47.596029 master-0 kubenswrapper[7484]: W0312 20:49:47.594521 7484 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 12 20:49:47.596029 master-0 kubenswrapper[7484]: W0312 20:49:47.594528 7484 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 12 20:49:47.596480 master-0 kubenswrapper[7484]: W0312 20:49:47.594535 7484 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 12 20:49:47.596480 master-0 kubenswrapper[7484]: W0312 20:49:47.594542 7484 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 12 20:49:47.596480 master-0 kubenswrapper[7484]: W0312 20:49:47.594549 7484 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 12 20:49:47.596480 master-0 kubenswrapper[7484]: W0312 20:49:47.594555 7484 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 12 20:49:47.596480 master-0 kubenswrapper[7484]: W0312 20:49:47.594562 7484 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 12 20:49:47.596480 master-0 kubenswrapper[7484]: W0312 20:49:47.594569 7484 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 12 20:49:47.596480 master-0 kubenswrapper[7484]: W0312 20:49:47.594577 7484 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 12 20:49:47.596480 master-0 kubenswrapper[7484]: W0312 20:49:47.594584 7484 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 12 20:49:47.596480 master-0 kubenswrapper[7484]: W0312 20:49:47.594590 7484 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 12 20:49:47.596480 master-0 kubenswrapper[7484]: W0312 20:49:47.594597 7484 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 12 20:49:47.596480 master-0 kubenswrapper[7484]: W0312 20:49:47.594604 7484 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 12 20:49:47.596480 master-0 kubenswrapper[7484]: W0312 20:49:47.594611 7484 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 12 20:49:47.596480 master-0 kubenswrapper[7484]: W0312 20:49:47.594617 7484 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 12 20:49:47.596480 master-0 kubenswrapper[7484]: W0312 20:49:47.594624 7484 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 12 20:49:47.596480 master-0 kubenswrapper[7484]: W0312 20:49:47.594632 7484 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 12 20:49:47.596480 master-0 kubenswrapper[7484]: W0312 20:49:47.594639 7484 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 12 20:49:47.596480 master-0 kubenswrapper[7484]: W0312 20:49:47.594646 7484 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 12 20:49:47.596480 master-0 kubenswrapper[7484]: W0312 20:49:47.594653 7484 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 12 20:49:47.596480 master-0 kubenswrapper[7484]: W0312 20:49:47.594659 7484 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 12 20:49:47.596480 master-0 kubenswrapper[7484]: W0312 20:49:47.594666 7484 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 12 20:49:47.597206 master-0 kubenswrapper[7484]: W0312 20:49:47.594673 7484 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 12 20:49:47.597206 master-0 kubenswrapper[7484]: W0312 20:49:47.594680 7484 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 12 20:49:47.597206 master-0 kubenswrapper[7484]: I0312 20:49:47.594692 7484 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 12 20:49:47.597206 master-0 kubenswrapper[7484]: I0312 20:49:47.594972 7484 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 12 20:49:47.597694 master-0 kubenswrapper[7484]: I0312 20:49:47.597655 7484 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Mar 12 20:49:47.597867 master-0 kubenswrapper[7484]: I0312 20:49:47.597833 7484 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 12 20:49:47.598251 master-0 kubenswrapper[7484]: I0312 20:49:47.598220 7484 server.go:997] "Starting client certificate rotation"
Mar 12 20:49:47.598251 master-0 kubenswrapper[7484]: I0312 20:49:47.598246 7484 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 12 20:49:47.598569 master-0 kubenswrapper[7484]: I0312 20:49:47.598435 7484 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-13 20:40:02 +0000 UTC, rotation deadline is 2026-03-13 15:28:31.723044891 +0000 UTC
Mar 12 20:49:47.598606 master-0 kubenswrapper[7484]: I0312 20:49:47.598574 7484 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 18h38m44.124475657s for next certificate rotation
Mar 12 20:49:47.600520 master-0 kubenswrapper[7484]: I0312 20:49:47.600485 7484 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 12 20:49:47.602493 master-0 kubenswrapper[7484]: I0312 20:49:47.602457 7484 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 12 20:49:47.605762 master-0 kubenswrapper[7484]: I0312 20:49:47.605728 7484 log.go:25] "Validated CRI v1 runtime API"
Mar 12 20:49:47.609676 master-0 kubenswrapper[7484]: I0312 20:49:47.609519 7484 log.go:25] "Validated CRI v1 image API"
Mar 12 20:49:47.611703 master-0 kubenswrapper[7484]: I0312 20:49:47.610970 7484 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 12 20:49:47.617149 master-0 kubenswrapper[7484]: I0312 20:49:47.617083 7484 fs.go:135] Filesystem UUIDs: map[6486df99-a83a-4de4-8a94-6816f327ffeb:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4]
Mar 12 20:49:47.617719 master-0 kubenswrapper[7484]: I0312 20:49:47.617163 7484 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0
minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1390b30c39ad63783734786156383bb52543e66dbc0baed3a61e8662ecc9eb73/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1390b30c39ad63783734786156383bb52543e66dbc0baed3a61e8662ecc9eb73/userdata/shm major:0 minor:279 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1ebefd5475e972825bea2703209db4a6c19fbc87674636be31770baa8cd7873b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1ebefd5475e972825bea2703209db4a6c19fbc87674636be31770baa8cd7873b/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2a343ab165ef6275fd2082338584606fe4211638edf52ee8d11b7168b526ca52/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2a343ab165ef6275fd2082338584606fe4211638edf52ee8d11b7168b526ca52/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2ab45bc6351d4ec7baa95f91503a2501083a98d20ff063951989a4f266486d70/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2ab45bc6351d4ec7baa95f91503a2501083a98d20ff063951989a4f266486d70/userdata/shm major:0 minor:253 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/40ee9bfc2fa73ad9bbc5b48cb8e7af6a3e5d2c39fc5036821437c7ea979f7a69/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/40ee9bfc2fa73ad9bbc5b48cb8e7af6a3e5d2c39fc5036821437c7ea979f7a69/userdata/shm major:0 minor:142 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/480ecceaa13fbfede6f31bb888fba0e4599aa0266514be4fa32d258ea85189de/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/480ecceaa13fbfede6f31bb888fba0e4599aa0266514be4fa32d258ea85189de/userdata/shm major:0 minor:242 
fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4f36004c9ae01a89eb15126614217e75dcc8e3c3bf6df3d63d91e6a8a9b96517/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4f36004c9ae01a89eb15126614217e75dcc8e3c3bf6df3d63d91e6a8a9b96517/userdata/shm major:0 minor:100 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/565b353628a1ea63b479d26fa571cd76b79a30c51d66ca013ff8e18be2cee52e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/565b353628a1ea63b479d26fa571cd76b79a30c51d66ca013ff8e18be2cee52e/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/58853bb7c55e4f38a99ccf6eb1718fea0482d914d13a64cd68997b04600a597d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/58853bb7c55e4f38a99ccf6eb1718fea0482d914d13a64cd68997b04600a597d/userdata/shm major:0 minor:245 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/823ddb02eb52a72270afe5bcbabb63c3bf31ccf8ea0e97a1b51cf8b0885ea699/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/823ddb02eb52a72270afe5bcbabb63c3bf31ccf8ea0e97a1b51cf8b0885ea699/userdata/shm major:0 minor:258 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/82c567fab92f73cc652671757659cec0bf4fd8aeb8e6762d7ba85dd0fa1eb67e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/82c567fab92f73cc652671757659cec0bf4fd8aeb8e6762d7ba85dd0fa1eb67e/userdata/shm major:0 minor:240 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/97b35cbaeb5726da86bcc4b7893b21ef73fbc6ccdec24f0c3f1962ec85e18df4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/97b35cbaeb5726da86bcc4b7893b21ef73fbc6ccdec24f0c3f1962ec85e18df4/userdata/shm major:0 minor:288 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/a5615eeaf32fd2c079e657b23ae7216d539735aa3d68b4892382d2e003032d83/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a5615eeaf32fd2c079e657b23ae7216d539735aa3d68b4892382d2e003032d83/userdata/shm major:0 minor:235 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a980b97dcc609420950f26f74c5117d5a01a8f15aad34b4d8b39606d13541a42/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a980b97dcc609420950f26f74c5117d5a01a8f15aad34b4d8b39606d13541a42/userdata/shm major:0 minor:42 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ab3264a789b92ca41d23ea4b05704ed36eafff91e5d534902cad1c3bfa2f9b9e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ab3264a789b92ca41d23ea4b05704ed36eafff91e5d534902cad1c3bfa2f9b9e/userdata/shm major:0 minor:247 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b6f3e501ba06ed994745a6acdc066748befa97da97704898903460cb6ea2f103/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b6f3e501ba06ed994745a6acdc066748befa97da97704898903460cb6ea2f103/userdata/shm major:0 minor:266 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bc2a01a11374dd8c2befdb90180bc8b98e8fb814dfdade15e6058739f337ecd2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bc2a01a11374dd8c2befdb90180bc8b98e8fb814dfdade15e6058739f337ecd2/userdata/shm major:0 minor:129 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bcb1938b5b091e5043b0e5f8777ba9dca967bde96ecf2d35469ff9b727211cb7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bcb1938b5b091e5043b0e5f8777ba9dca967bde96ecf2d35469ff9b727211cb7/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/c4103685c4d0722261aeabd4bc116d1842263bbc5e10dfb2b17ca8f9a32f7e85/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c4103685c4d0722261aeabd4bc116d1842263bbc5e10dfb2b17ca8f9a32f7e85/userdata/shm major:0 minor:287 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c5a1c27c4b2c6ff820b190b8052ccd7411bb25c93bd0787d8acd418bb486bfe0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c5a1c27c4b2c6ff820b190b8052ccd7411bb25c93bd0787d8acd418bb486bfe0/userdata/shm major:0 minor:119 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dbdf068459da915aaa15b95a36d6ccf7790078f4c1daee68e40bbaf77ad0787e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dbdf068459da915aaa15b95a36d6ccf7790078f4c1daee68e40bbaf77ad0787e/userdata/shm major:0 minor:260 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dd04b8d751040cd7b439f04efd47f1ce311ca66ebabc5940831335b95351810c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dd04b8d751040cd7b439f04efd47f1ce311ca66ebabc5940831335b95351810c/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e75e7b353307791eba0dce2c76a1443a45ff7401d92e0d636bcfdc09677d8a67/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e75e7b353307791eba0dce2c76a1443a45ff7401d92e0d636bcfdc09677d8a67/userdata/shm major:0 minor:104 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f3a6366fc7a8173b37b93da658f97b0f0f73d75e238205a99ed16b96913fe11f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f3a6366fc7a8173b37b93da658f97b0f0f73d75e238205a99ed16b96913fe11f/userdata/shm major:0 minor:284 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/02649264-040a-41a6-9a41-8bf6416c68ff/volumes/kubernetes.io~projected/kube-api-access-k5v9f:{mountpoint:/var/lib/kubelet/pods/02649264-040a-41a6-9a41-8bf6416c68ff/volumes/kubernetes.io~projected/kube-api-access-k5v9f major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/07330030-487d-4fa6-b5c3-67607355bbba/volumes/kubernetes.io~projected/kube-api-access-bhcsd:{mountpoint:/var/lib/kubelet/pods/07330030-487d-4fa6-b5c3-67607355bbba/volumes/kubernetes.io~projected/kube-api-access-bhcsd major:0 minor:223 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/07542516-49c8-4e20-9b97-798fbff850a5/volumes/kubernetes.io~projected/kube-api-access-z9xld:{mountpoint:/var/lib/kubelet/pods/07542516-49c8-4e20-9b97-798fbff850a5/volumes/kubernetes.io~projected/kube-api-access-z9xld major:0 minor:227 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/07542516-49c8-4e20-9b97-798fbff850a5/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/07542516-49c8-4e20-9b97-798fbff850a5/volumes/kubernetes.io~secret/serving-cert major:0 minor:209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/15ebfbd8-0782-431a-88a3-83af328498d2/volumes/kubernetes.io~projected/kube-api-access-mbbc5:{mountpoint:/var/lib/kubelet/pods/15ebfbd8-0782-431a-88a3-83af328498d2/volumes/kubernetes.io~projected/kube-api-access-mbbc5 major:0 minor:222 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/15ebfbd8-0782-431a-88a3-83af328498d2/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/15ebfbd8-0782-431a-88a3-83af328498d2/volumes/kubernetes.io~secret/serving-cert major:0 minor:214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1a307172-f010-4bad-a3fc-31607574b069/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/1a307172-f010-4bad-a3fc-31607574b069/volumes/kubernetes.io~projected/kube-api-access major:0 minor:99 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/226cb3a1-984f-4410-96e6-c007131dc074/volumes/kubernetes.io~projected/kube-api-access-b9z6l:{mountpoint:/var/lib/kubelet/pods/226cb3a1-984f-4410-96e6-c007131dc074/volumes/kubernetes.io~projected/kube-api-access-b9z6l major:0 minor:217 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/226cb3a1-984f-4410-96e6-c007131dc074/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/226cb3a1-984f-4410-96e6-c007131dc074/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2604b035-853c-42b7-a562-07d46178868a/volumes/kubernetes.io~projected/kube-api-access-clp9l:{mountpoint:/var/lib/kubelet/pods/2604b035-853c-42b7-a562-07d46178868a/volumes/kubernetes.io~projected/kube-api-access-clp9l major:0 minor:225 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2b71f537-1cc2-4645-8e50-23941635457c/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/2b71f537-1cc2-4645-8e50-23941635457c/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:268 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2b71f537-1cc2-4645-8e50-23941635457c/volumes/kubernetes.io~projected/kube-api-access-8vvf6:{mountpoint:/var/lib/kubelet/pods/2b71f537-1cc2-4645-8e50-23941635457c/volumes/kubernetes.io~projected/kube-api-access-8vvf6 major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/426efd5c-69e1-43e5-835a-6e1c4ef85720/volumes/kubernetes.io~projected/kube-api-access-8rjm8:{mountpoint:/var/lib/kubelet/pods/426efd5c-69e1-43e5-835a-6e1c4ef85720/volumes/kubernetes.io~projected/kube-api-access-8rjm8 major:0 minor:139 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/426efd5c-69e1-43e5-835a-6e1c4ef85720/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/426efd5c-69e1-43e5-835a-6e1c4ef85720/volumes/kubernetes.io~secret/webhook-cert major:0 minor:138 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c/volumes/kubernetes.io~projected/kube-api-access major:0 minor:265 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c/volumes/kubernetes.io~secret/serving-cert major:0 minor:233 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/54184647-6e9a-43f7-90b1-5d8815f8b1ab/volumes/kubernetes.io~projected/kube-api-access-kzwrw:{mountpoint:/var/lib/kubelet/pods/54184647-6e9a-43f7-90b1-5d8815f8b1ab/volumes/kubernetes.io~projected/kube-api-access-kzwrw major:0 minor:221 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5471994f-769e-4124-b7d0-01f5358fc18f/volumes/kubernetes.io~projected/kube-api-access-f7rrv:{mountpoint:/var/lib/kubelet/pods/5471994f-769e-4124-b7d0-01f5358fc18f/volumes/kubernetes.io~projected/kube-api-access-f7rrv major:0 minor:238 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5471994f-769e-4124-b7d0-01f5358fc18f/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/5471994f-769e-4124-b7d0-01f5358fc18f/volumes/kubernetes.io~secret/etcd-client major:0 minor:231 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5471994f-769e-4124-b7d0-01f5358fc18f/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/5471994f-769e-4124-b7d0-01f5358fc18f/volumes/kubernetes.io~secret/serving-cert major:0 minor:236 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/617f0f9c-50d5-4214-b30f-5110fd4399ec/volumes/kubernetes.io~projected/kube-api-access-f2r2r:{mountpoint:/var/lib/kubelet/pods/617f0f9c-50d5-4214-b30f-5110fd4399ec/volumes/kubernetes.io~projected/kube-api-access-f2r2r major:0 minor:252 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/70e54b24-bf9d-42a8-b012-c7b073c6f6a6/volumes/kubernetes.io~projected/kube-api-access-mfsvw:{mountpoint:/var/lib/kubelet/pods/70e54b24-bf9d-42a8-b012-c7b073c6f6a6/volumes/kubernetes.io~projected/kube-api-access-mfsvw major:0 minor:94 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7623a5c6-47a9-4b75-bda8-c0a2d7c67272/volumes/kubernetes.io~projected/kube-api-access-q78vj:{mountpoint:/var/lib/kubelet/pods/7623a5c6-47a9-4b75-bda8-c0a2d7c67272/volumes/kubernetes.io~projected/kube-api-access-q78vj major:0 minor:250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7623a5c6-47a9-4b75-bda8-c0a2d7c67272/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/7623a5c6-47a9-4b75-bda8-c0a2d7c67272/volumes/kubernetes.io~secret/serving-cert major:0 minor:234 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/784599a3-a2ac-46ac-a4b7-9439704646cc/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/784599a3-a2ac-46ac-a4b7-9439704646cc/volumes/kubernetes.io~projected/kube-api-access major:0 minor:255 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/784599a3-a2ac-46ac-a4b7-9439704646cc/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/784599a3-a2ac-46ac-a4b7-9439704646cc/volumes/kubernetes.io~secret/serving-cert major:0 minor:229 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/855747e5-d9b4-4eef-8bc4-425d6a8e95c7/volumes/kubernetes.io~projected/kube-api-access-6j7lq:{mountpoint:/var/lib/kubelet/pods/855747e5-d9b4-4eef-8bc4-425d6a8e95c7/volumes/kubernetes.io~projected/kube-api-access-6j7lq major:0 minor:226 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/900228dd-2d21-4759-87da-b027b0134ad8/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/900228dd-2d21-4759-87da-b027b0134ad8/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:256 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/900228dd-2d21-4759-87da-b027b0134ad8/volumes/kubernetes.io~projected/kube-api-access-rvkp7:{mountpoint:/var/lib/kubelet/pods/900228dd-2d21-4759-87da-b027b0134ad8/volumes/kubernetes.io~projected/kube-api-access-rvkp7 major:0 minor:248 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/96bd86df-2101-47f5-844b-1332261c66f1/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/96bd86df-2101-47f5-844b-1332261c66f1/volumes/kubernetes.io~projected/kube-api-access major:0 minor:251 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/96bd86df-2101-47f5-844b-1332261c66f1/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/96bd86df-2101-47f5-844b-1332261c66f1/volumes/kubernetes.io~secret/serving-cert major:0 minor:230 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/980191fe-c62c-4b9e-879c-38fa8ce0a58b/volumes/kubernetes.io~projected/kube-api-access-2wt5q:{mountpoint:/var/lib/kubelet/pods/980191fe-c62c-4b9e-879c-38fa8ce0a58b/volumes/kubernetes.io~projected/kube-api-access-2wt5q major:0 minor:264 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/980191fe-c62c-4b9e-879c-38fa8ce0a58b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/980191fe-c62c-4b9e-879c-38fa8ce0a58b/volumes/kubernetes.io~secret/serving-cert major:0 minor:228 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9/volumes/kubernetes.io~projected/kube-api-access-2lltk:{mountpoint:/var/lib/kubelet/pods/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9/volumes/kubernetes.io~projected/kube-api-access-2lltk major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/98d99166-c42a-4169-87e8-4209570aec50/volumes/kubernetes.io~projected/kube-api-access-258hz:{mountpoint:/var/lib/kubelet/pods/98d99166-c42a-4169-87e8-4209570aec50/volumes/kubernetes.io~projected/kube-api-access-258hz major:0 minor:216 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d/volumes/kubernetes.io~projected/kube-api-access-577p4:{mountpoint:/var/lib/kubelet/pods/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d/volumes/kubernetes.io~projected/kube-api-access-577p4 major:0 minor:257 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d/volumes/kubernetes.io~secret/serving-cert major:0 minor:232 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8/volumes/kubernetes.io~projected/kube-api-access-7bk7q:{mountpoint:/var/lib/kubelet/pods/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8/volumes/kubernetes.io~projected/kube-api-access-7bk7q major:0 minor:118 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a3bebf49-1d92-4353-b84c-91ed86b7bb94/volumes/kubernetes.io~projected/kube-api-access-2w68c:{mountpoint:/var/lib/kubelet/pods/a3bebf49-1d92-4353-b84c-91ed86b7bb94/volumes/kubernetes.io~projected/kube-api-access-2w68c major:0 minor:218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a3bebf49-1d92-4353-b84c-91ed86b7bb94/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a3bebf49-1d92-4353-b84c-91ed86b7bb94/volumes/kubernetes.io~secret/serving-cert major:0 minor:213 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c3daeefa-7842-464c-a6c9-01b44ebea477/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/c3daeefa-7842-464c-a6c9-01b44ebea477/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c3daeefa-7842-464c-a6c9-01b44ebea477/volumes/kubernetes.io~projected/kube-api-access-jrk7w:{mountpoint:/var/lib/kubelet/pods/c3daeefa-7842-464c-a6c9-01b44ebea477/volumes/kubernetes.io~projected/kube-api-access-jrk7w major:0 minor:127 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/c3daeefa-7842-464c-a6c9-01b44ebea477/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/c3daeefa-7842-464c-a6c9-01b44ebea477/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:126 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c8660437-633f-4132-8a61-fe998abb493e/volumes/kubernetes.io~projected/kube-api-access-zlch7:{mountpoint:/var/lib/kubelet/pods/c8660437-633f-4132-8a61-fe998abb493e/volumes/kubernetes.io~projected/kube-api-access-zlch7 major:0 minor:123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d862a346-ec4d-46f6-a3e2-ea8759ea0111/volumes/kubernetes.io~projected/kube-api-access-jx64q:{mountpoint:/var/lib/kubelet/pods/d862a346-ec4d-46f6-a3e2-ea8759ea0111/volumes/kubernetes.io~projected/kube-api-access-jx64q major:0 minor:125 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d862a346-ec4d-46f6-a3e2-ea8759ea0111/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/d862a346-ec4d-46f6-a3e2-ea8759ea0111/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:124 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e624e623-6d59-444d-b548-165fa5fd2581/volumes/kubernetes.io~projected/kube-api-access-c5c6t:{mountpoint:/var/lib/kubelet/pods/e624e623-6d59-444d-b548-165fa5fd2581/volumes/kubernetes.io~projected/kube-api-access-c5c6t major:0 minor:224 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f8f4400c-474c-480f-b46c-cf7c80555004/volumes/kubernetes.io~projected/kube-api-access-vjh5f:{mountpoint:/var/lib/kubelet/pods/f8f4400c-474c-480f-b46c-cf7c80555004/volumes/kubernetes.io~projected/kube-api-access-vjh5f major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6/volumes/kubernetes.io~projected/kube-api-access-2kng9:{mountpoint:/var/lib/kubelet/pods/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6/volumes/kubernetes.io~projected/kube-api-access-2kng9 major:0 minor:98 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} overlay_0-102:{mountpoint:/var/lib/containers/storage/overlay/6a2f1369b57181f1cbf9998644dd74724c5b6a1130252684b5a482090c9ed593/merged major:0 minor:102 fsType:overlay blockSize:0} overlay_0-108:{mountpoint:/var/lib/containers/storage/overlay/e1511e8935b3698f6fd17b056c8c2b7f7aeb054fe406d1b2a58c1a96e5afe7df/merged major:0 minor:108 fsType:overlay blockSize:0} overlay_0-110:{mountpoint:/var/lib/containers/storage/overlay/b2ff3998b866109de7e3fc86acb1af07beb8e32c3630691045dfb6b10922cf4a/merged major:0 minor:110 fsType:overlay blockSize:0} overlay_0-116:{mountpoint:/var/lib/containers/storage/overlay/d5ff82bdec5a2ca10fb511fcf89c36920bb8089767880c83ea6b47d7d28f39f5/merged major:0 minor:116 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/ed7d513af8ecab5a616b65b487f51eeeabf9332d79742ccc06e55b557ad910e4/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-132:{mountpoint:/var/lib/containers/storage/overlay/16853c595027b8619ae53c140e3b9e784af26e21c2b9ca8fa290447d9e87a354/merged major:0 minor:132 fsType:overlay blockSize:0} overlay_0-134:{mountpoint:/var/lib/containers/storage/overlay/513e0f2371f040ee25685a410ca55c2a19aaf3bb420daafa8c17d089d34452ae/merged major:0 minor:134 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/462d83d2369d0fccb6af59deb1524cc92e7b50a03b195266273361d19ce1a85a/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-140:{mountpoint:/var/lib/containers/storage/overlay/a0cd6eee352320c76fc77cedb717fd1237e63101ef10ea1aed2d9715d1c2800b/merged major:0 minor:140 fsType:overlay blockSize:0} 
overlay_0-147:{mountpoint:/var/lib/containers/storage/overlay/eacbad8b93da7fa22082cca8eae055d06118e2a55514b1fffac2c38e0803f994/merged major:0 minor:147 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/a2b58b8f278f37f3fd08aba9023534896e1cf53939c895539d2f34c5c7bbbe99/merged major:0 minor:152 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/7a465ba32b56f0a98dd89816ca5cb50571f45932efe5c2e19c84c43cdb569a1a/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-156:{mountpoint:/var/lib/containers/storage/overlay/a582be4c77bd96d5cfe5afcd28cf7d626f27dccd8bb0c893b40d8add5bff9f94/merged major:0 minor:156 fsType:overlay blockSize:0} overlay_0-164:{mountpoint:/var/lib/containers/storage/overlay/db4d258f3de8c1387590563998ec0503049482de91579ce56a0c4b3d70aa78f7/merged major:0 minor:164 fsType:overlay blockSize:0} overlay_0-166:{mountpoint:/var/lib/containers/storage/overlay/51ff99408a3c4de10a60d75616037e37886e31be76e932af169978ecb59e3776/merged major:0 minor:166 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/f8b81d29e75bcebed9699b19533287efe21bf5d778092bbf8f9edd5f70961e86/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/28857a049862cef25c3e0859973b956b4cf7e285b027f7e200eb189c2cbfafc7/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/735df3671552a50941bbd4fbb2e15964f4ef625b0c5adcff60c5c3aa0b08703b/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/6e602963598cf529b6f6159d2bd89ab8036d6f5c529669517e74cae8b04de374/merged major:0 minor:189 fsType:overlay blockSize:0} overlay_0-194:{mountpoint:/var/lib/containers/storage/overlay/757eec120d44dcd5735e5df59dd36d1f6ea40dcc378cb507b8b38614a7dd1d6b/merged major:0 minor:194 fsType:overlay blockSize:0} 
overlay_0-195:{mountpoint:/var/lib/containers/storage/overlay/a89139161b9389850b29349524ce397c2ff057e71e5ca610a6995e559135bf92/merged major:0 minor:195 fsType:overlay blockSize:0} overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/f10f00811283dd5d5cdc1d96c72dfa042cd2f87bc8e322f518ade9fd0f8cd550/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-262:{mountpoint:/var/lib/containers/storage/overlay/6352b84e38d59386c15fa523f71d95f83a2c8ac87d20afca345c5db9ad9dac54/merged major:0 minor:262 fsType:overlay blockSize:0} overlay_0-269:{mountpoint:/var/lib/containers/storage/overlay/e28d754e1fdc37f55858ff407bfb1703651a3d88ed5342c724118968e7923961/merged major:0 minor:269 fsType:overlay blockSize:0} overlay_0-271:{mountpoint:/var/lib/containers/storage/overlay/43b322f171e7409d6f856ca488792929358e184e572b955367edbca7ccefca78/merged major:0 minor:271 fsType:overlay blockSize:0} overlay_0-273:{mountpoint:/var/lib/containers/storage/overlay/ae6a5430fef0ae036fab54e4b9777379346760e49756e2d30634a23d7b1dad5d/merged major:0 minor:273 fsType:overlay blockSize:0} overlay_0-275:{mountpoint:/var/lib/containers/storage/overlay/c3ccc73e2b4b1bcfba0f030f594bdfb9add625fd502ff554de1bd4055660b662/merged major:0 minor:275 fsType:overlay blockSize:0} overlay_0-277:{mountpoint:/var/lib/containers/storage/overlay/7a8ee2e82f3052ee05a40f58f96172cba12cae79bd4971a5558f1d75df2f2279/merged major:0 minor:277 fsType:overlay blockSize:0} overlay_0-280:{mountpoint:/var/lib/containers/storage/overlay/39ec76d835f155caae5658f5c33b0b7d480baac0c41d0daaa02db92c0928e59c/merged major:0 minor:280 fsType:overlay blockSize:0} overlay_0-282:{mountpoint:/var/lib/containers/storage/overlay/9cfc3b21a7831509e5d45199f9e6bfd07b79fdb39e7e71b560508dcbd1a86598/merged major:0 minor:282 fsType:overlay blockSize:0} overlay_0-291:{mountpoint:/var/lib/containers/storage/overlay/2daf220bfb239dfc9e7cd9fc71a1226ef6ba5e69b5e7eebf81b8c5553c13d73a/merged major:0 minor:291 fsType:overlay blockSize:0} 
overlay_0-293:{mountpoint:/var/lib/containers/storage/overlay/3257de3ede3f280cd8fdf666a96d70d0bea2dfeb4842d7f35f6ff25e207148ac/merged major:0 minor:293 fsType:overlay blockSize:0} overlay_0-295:{mountpoint:/var/lib/containers/storage/overlay/28432a0e3a66258472248f774d29198a74c1da28098d5f3cfb5154c4034352ab/merged major:0 minor:295 fsType:overlay blockSize:0} overlay_0-297:{mountpoint:/var/lib/containers/storage/overlay/7e976815ec2e6d0fb873abe1e8bec1b6264cc147af2eaaf4750fd4c69939225c/merged major:0 minor:297 fsType:overlay blockSize:0} overlay_0-299:{mountpoint:/var/lib/containers/storage/overlay/8ca3f755ed31cc6594cda27bdc371d7394e567b596bce40c93de1b3c769a9a34/merged major:0 minor:299 fsType:overlay blockSize:0} overlay_0-301:{mountpoint:/var/lib/containers/storage/overlay/207afecbf59754bae45fe95195fffe73ab3db4eafc28088883a778966974580d/merged major:0 minor:301 fsType:overlay blockSize:0} overlay_0-46:{mountpoint:/var/lib/containers/storage/overlay/7e471da4a256fdeae870f919e4646e047280b36d362bc7a8d20499c6e1fa168d/merged major:0 minor:46 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/00caad992e5ac34d0800b011cb9481bdb7c19563a108ec4893e5950636274b57/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/fa824e0f842578d53ed8523e4a5c8cafe112c52ae834522b9c01e3611fad9cb9/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/319dfbfaeab2ace5640de4d398909ea9f70264a7892ea9f261407c55527872ab/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/275932e6889f8690e23de43d54545917e929adccc315b3faf6fc2ce389cc2ef8/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/04f5fced49a7f43dbe9c27fff30f55c1fd1cbfbbde0a3eee98f6f8898d1a2139/merged major:0 minor:62 fsType:overlay blockSize:0} 
overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/5fd276bcf8d7735212f13ecf7ac72200508f1d30ad2cc96dd12e0ba03de04068/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/ff96a3e1fd3e64b22c66711cd7e37a5deb32b0d06f6d342f5ef81db713d9205b/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-74:{mountpoint:/var/lib/containers/storage/overlay/33b07772c864851b30195feb4ca9daba4bfc722d27dcdfecff74245f76cb2892/merged major:0 minor:74 fsType:overlay blockSize:0} overlay_0-76:{mountpoint:/var/lib/containers/storage/overlay/531419217ceb4dd3e4ab6f86b9083ad6656c5b545af771e06bfa1276390575f9/merged major:0 minor:76 fsType:overlay blockSize:0} overlay_0-81:{mountpoint:/var/lib/containers/storage/overlay/9ffe80d92c6b630a8ec810731948fc9c8981b5921f4e4902479b4aa7a00da56e/merged major:0 minor:81 fsType:overlay blockSize:0} overlay_0-83:{mountpoint:/var/lib/containers/storage/overlay/9a1e38d1e244c6e9c48b9eb26462940730056dd8c91d640e9ecabcca2318e6b2/merged major:0 minor:83 fsType:overlay blockSize:0}] Mar 12 20:49:47.644540 master-0 kubenswrapper[7484]: I0312 20:49:47.643744 7484 manager.go:217] Machine: {Timestamp:2026-03-12 20:49:47.642559417 +0000 UTC m=+0.127828249 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ab6ae3a9e07f4bbcb7f4f9a490c6dc9c SystemUUID:ab6ae3a9-e07f-4bbc-b7f4-f9a490c6dc9c BootID:a78965b5-30ee-4294-b02c-530634422611 Filesystems:[{Device:overlay_0-293 DeviceMajor:0 DeviceMinor:293 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-74 DeviceMajor:0 DeviceMinor:74 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/70e54b24-bf9d-42a8-b012-c7b073c6f6a6/volumes/kubernetes.io~projected/kube-api-access-mfsvw DeviceMajor:0 DeviceMinor:94 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-147 DeviceMajor:0 DeviceMinor:147 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9/volumes/kubernetes.io~projected/kube-api-access-2lltk DeviceMajor:0 DeviceMinor:220 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/900228dd-2d21-4759-87da-b027b0134ad8/volumes/kubernetes.io~projected/kube-api-access-rvkp7 DeviceMajor:0 DeviceMinor:248 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a3bebf49-1d92-4353-b84c-91ed86b7bb94/volumes/kubernetes.io~projected/kube-api-access-2w68c DeviceMajor:0 DeviceMinor:218 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/2604b035-853c-42b7-a562-07d46178868a/volumes/kubernetes.io~projected/kube-api-access-clp9l DeviceMajor:0 DeviceMinor:225 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-102 DeviceMajor:0 DeviceMinor:102 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:233 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-297 DeviceMajor:0 DeviceMinor:297 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/7623a5c6-47a9-4b75-bda8-c0a2d7c67272/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:234 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/617f0f9c-50d5-4214-b30f-5110fd4399ec/volumes/kubernetes.io~projected/kube-api-access-f2r2r DeviceMajor:0 DeviceMinor:252 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/98d99166-c42a-4169-87e8-4209570aec50/volumes/kubernetes.io~projected/kube-api-access-258hz DeviceMajor:0 DeviceMinor:216 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/980191fe-c62c-4b9e-879c-38fa8ce0a58b/volumes/kubernetes.io~projected/kube-api-access-2wt5q DeviceMajor:0 DeviceMinor:264 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-271 DeviceMajor:0 DeviceMinor:271 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5471994f-769e-4124-b7d0-01f5358fc18f/volumes/kubernetes.io~projected/kube-api-access-f7rrv DeviceMajor:0 DeviceMinor:238 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:overlay_0-76 DeviceMajor:0 DeviceMinor:76 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-194 DeviceMajor:0 DeviceMinor:194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e624e623-6d59-444d-b548-165fa5fd2581/volumes/kubernetes.io~projected/kube-api-access-c5c6t DeviceMajor:0 DeviceMinor:224 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-134 DeviceMajor:0 DeviceMinor:134 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/96bd86df-2101-47f5-844b-1332261c66f1/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:251 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:265 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6/volumes/kubernetes.io~projected/kube-api-access-2kng9 DeviceMajor:0 DeviceMinor:98 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/d862a346-ec4d-46f6-a3e2-ea8759ea0111/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:124 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/a3bebf49-1d92-4353-b84c-91ed86b7bb94/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:213 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/07542516-49c8-4e20-9b97-798fbff850a5/volumes/kubernetes.io~projected/kube-api-access-z9xld DeviceMajor:0 DeviceMinor:227 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/980191fe-c62c-4b9e-879c-38fa8ce0a58b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:228 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-262 DeviceMajor:0 DeviceMinor:262 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4f36004c9ae01a89eb15126614217e75dcc8e3c3bf6df3d63d91e6a8a9b96517/userdata/shm DeviceMajor:0 DeviceMinor:100 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-156 DeviceMajor:0 DeviceMinor:156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/07542516-49c8-4e20-9b97-798fbff850a5/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:209 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/5471994f-769e-4124-b7d0-01f5358fc18f/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:231 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/15ebfbd8-0782-431a-88a3-83af328498d2/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:214 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/58853bb7c55e4f38a99ccf6eb1718fea0482d914d13a64cd68997b04600a597d/userdata/shm DeviceMajor:0 DeviceMinor:245 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1a307172-f010-4bad-a3fc-31607574b069/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:99 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-108 DeviceMajor:0 DeviceMinor:108 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8/volumes/kubernetes.io~projected/kube-api-access-7bk7q DeviceMajor:0 DeviceMinor:118 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dd04b8d751040cd7b439f04efd47f1ce311ca66ebabc5940831335b95351810c/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/c3daeefa-7842-464c-a6c9-01b44ebea477/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:126 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/dbdf068459da915aaa15b95a36d6ccf7790078f4c1daee68e40bbaf77ad0787e/userdata/shm DeviceMajor:0 DeviceMinor:260 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/97b35cbaeb5726da86bcc4b7893b21ef73fbc6ccdec24f0c3f1962ec85e18df4/userdata/shm DeviceMajor:0 DeviceMinor:288 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-110 DeviceMajor:0 DeviceMinor:110 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c5a1c27c4b2c6ff820b190b8052ccd7411bb25c93bd0787d8acd418bb486bfe0/userdata/shm DeviceMajor:0 DeviceMinor:119 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/480ecceaa13fbfede6f31bb888fba0e4599aa0266514be4fa32d258ea85189de/userdata/shm DeviceMajor:0 DeviceMinor:242 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/565b353628a1ea63b479d26fa571cd76b79a30c51d66ca013ff8e18be2cee52e/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-269 DeviceMajor:0 DeviceMinor:269 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/15ebfbd8-0782-431a-88a3-83af328498d2/volumes/kubernetes.io~projected/kube-api-access-mbbc5 DeviceMajor:0 DeviceMinor:222 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-195 DeviceMajor:0 DeviceMinor:195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/c8660437-633f-4132-8a61-fe998abb493e/volumes/kubernetes.io~projected/kube-api-access-zlch7 DeviceMajor:0 DeviceMinor:123 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/96bd86df-2101-47f5-844b-1332261c66f1/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:230 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:232 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2ab45bc6351d4ec7baa95f91503a2501083a98d20ff063951989a4f266486d70/userdata/shm DeviceMajor:0 DeviceMinor:253 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/855747e5-d9b4-4eef-8bc4-425d6a8e95c7/volumes/kubernetes.io~projected/kube-api-access-6j7lq DeviceMajor:0 DeviceMinor:226 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/2b71f537-1cc2-4645-8e50-23941635457c/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:268 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-277 DeviceMajor:0 DeviceMinor:277 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-280 DeviceMajor:0 DeviceMinor:280 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-46 DeviceMajor:0 DeviceMinor:46 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d862a346-ec4d-46f6-a3e2-ea8759ea0111/volumes/kubernetes.io~projected/kube-api-access-jx64q DeviceMajor:0 DeviceMinor:125 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/var/lib/kubelet/pods/426efd5c-69e1-43e5-835a-6e1c4ef85720/volumes/kubernetes.io~projected/kube-api-access-8rjm8 DeviceMajor:0 DeviceMinor:139 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/c3daeefa-7842-464c-a6c9-01b44ebea477/volumes/kubernetes.io~projected/kube-api-access-jrk7w DeviceMajor:0 DeviceMinor:127 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/823ddb02eb52a72270afe5bcbabb63c3bf31ccf8ea0e97a1b51cf8b0885ea699/userdata/shm DeviceMajor:0 DeviceMinor:258 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bcb1938b5b091e5043b0e5f8777ba9dca967bde96ecf2d35469ff9b727211cb7/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-273 DeviceMajor:0 DeviceMinor:273 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f3a6366fc7a8173b37b93da658f97b0f0f73d75e238205a99ed16b96913fe11f/userdata/shm DeviceMajor:0 DeviceMinor:284 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-301 DeviceMajor:0 DeviceMinor:301 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c3daeefa-7842-464c-a6c9-01b44ebea477/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 
Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2a343ab165ef6275fd2082338584606fe4211638edf52ee8d11b7168b526ca52/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c4103685c4d0722261aeabd4bc116d1842263bbc5e10dfb2b17ca8f9a32f7e85/userdata/shm DeviceMajor:0 DeviceMinor:287 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-291 DeviceMajor:0 DeviceMinor:291 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2b71f537-1cc2-4645-8e50-23941635457c/volumes/kubernetes.io~projected/kube-api-access-8vvf6 DeviceMajor:0 DeviceMinor:243 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e75e7b353307791eba0dce2c76a1443a45ff7401d92e0d636bcfdc09677d8a67/userdata/shm DeviceMajor:0 DeviceMinor:104 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/02649264-040a-41a6-9a41-8bf6416c68ff/volumes/kubernetes.io~projected/kube-api-access-k5v9f DeviceMajor:0 DeviceMinor:219 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/784599a3-a2ac-46ac-a4b7-9439704646cc/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:229 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1390b30c39ad63783734786156383bb52543e66dbc0baed3a61e8662ecc9eb73/userdata/shm DeviceMajor:0 DeviceMinor:279 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-83 DeviceMajor:0 DeviceMinor:83 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:overlay_0-132 DeviceMajor:0 DeviceMinor:132 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-166 DeviceMajor:0 DeviceMinor:166 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7623a5c6-47a9-4b75-bda8-c0a2d7c67272/volumes/kubernetes.io~projected/kube-api-access-q78vj DeviceMajor:0 DeviceMinor:250 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-81 DeviceMajor:0 DeviceMinor:81 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b6f3e501ba06ed994745a6acdc066748befa97da97704898903460cb6ea2f103/userdata/shm DeviceMajor:0 DeviceMinor:266 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1ebefd5475e972825bea2703209db4a6c19fbc87674636be31770baa8cd7873b/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bc2a01a11374dd8c2befdb90180bc8b98e8fb814dfdade15e6058739f337ecd2/userdata/shm DeviceMajor:0 DeviceMinor:129 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/226cb3a1-984f-4410-96e6-c007131dc074/volumes/kubernetes.io~projected/kube-api-access-b9z6l DeviceMajor:0 DeviceMinor:217 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a5615eeaf32fd2c079e657b23ae7216d539735aa3d68b4892382d2e003032d83/userdata/shm DeviceMajor:0 DeviceMinor:235 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/784599a3-a2ac-46ac-a4b7-9439704646cc/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:255 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-136 
DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/40ee9bfc2fa73ad9bbc5b48cb8e7af6a3e5d2c39fc5036821437c7ea979f7a69/userdata/shm DeviceMajor:0 DeviceMinor:142 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/900228dd-2d21-4759-87da-b027b0134ad8/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:256 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d/volumes/kubernetes.io~projected/kube-api-access-577p4 DeviceMajor:0 DeviceMinor:257 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-299 DeviceMajor:0 DeviceMinor:299 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5471994f-769e-4124-b7d0-01f5358fc18f/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:236 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a980b97dcc609420950f26f74c5117d5a01a8f15aad34b4d8b39606d13541a42/userdata/shm DeviceMajor:0 DeviceMinor:42 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/426efd5c-69e1-43e5-835a-6e1c4ef85720/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:138 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-164 DeviceMajor:0 DeviceMinor:164 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/54184647-6e9a-43f7-90b1-5d8815f8b1ab/volumes/kubernetes.io~projected/kube-api-access-kzwrw DeviceMajor:0 DeviceMinor:221 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-116 DeviceMajor:0 DeviceMinor:116 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/226cb3a1-984f-4410-96e6-c007131dc074/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:215 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/f8f4400c-474c-480f-b46c-cf7c80555004/volumes/kubernetes.io~projected/kube-api-access-vjh5f DeviceMajor:0 DeviceMinor:239 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/82c567fab92f73cc652671757659cec0bf4fd8aeb8e6762d7ba85dd0fa1eb67e/userdata/shm DeviceMajor:0 DeviceMinor:240 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-282 DeviceMajor:0 DeviceMinor:282 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-140 DeviceMajor:0 DeviceMinor:140 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ab3264a789b92ca41d23ea4b05704ed36eafff91e5d534902cad1c3bfa2f9b9e/userdata/shm DeviceMajor:0 DeviceMinor:247 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-275 DeviceMajor:0 DeviceMinor:275 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/07330030-487d-4fa6-b5c3-67607355bbba/volumes/kubernetes.io~projected/kube-api-access-bhcsd DeviceMajor:0 DeviceMinor:223 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-295 DeviceMajor:0 DeviceMinor:295 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 
Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:1390b30c39ad637 MacAddress:da:52:5e:76:9a:8e Speed:10000 Mtu:8900} {Name:2ab45bc6351d4ec MacAddress:be:9d:58:25:25:da Speed:10000 Mtu:8900} {Name:480ecceaa13fbfe MacAddress:de:75:a3:66:7b:75 Speed:10000 Mtu:8900} {Name:58853bb7c55e4f3 MacAddress:ca:70:70:0a:a1:c4 Speed:10000 Mtu:8900} {Name:823ddb02eb52a72 MacAddress:6e:e2:5e:ac:95:7d Speed:10000 Mtu:8900} {Name:82c567fab92f73c MacAddress:a2:52:1d:d1:e2:e4 Speed:10000 Mtu:8900} {Name:97b35cbaeb5726d MacAddress:f6:0a:4b:e5:f8:15 Speed:10000 Mtu:8900} {Name:a5615eeaf32fd2c MacAddress:76:cd:ed:9b:fb:c6 Speed:10000 Mtu:8900} {Name:ab3264a789b92ca MacAddress:76:80:f1:c8:96:64 Speed:10000 Mtu:8900} {Name:b6f3e501ba06ed9 MacAddress:9e:de:c2:35:2f:68 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:22:ba:f5:f1:59:96 Speed:0 Mtu:8900} {Name:c4103685c4d0722 MacAddress:fa:56:a0:6e:22:8c Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:f6:7e:a8 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:36:1f:bb Speed:-1 Mtu:9000} {Name:f3a6366fc7a8173 MacAddress:0e:87:56:2a:26:eb Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:c6:09:84:5c:c2:5e Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction 
Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 
BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 12 20:49:47.644540 master-0 kubenswrapper[7484]: I0312 20:49:47.644507 7484 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Mar 12 20:49:47.645111 master-0 kubenswrapper[7484]: I0312 20:49:47.644750 7484 manager.go:233] Version: {KernelVersion:5.14.0-427.111.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602172219-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 12 20:49:47.645375 master-0 kubenswrapper[7484]: I0312 20:49:47.645319 7484 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 12 20:49:47.645703 master-0 kubenswrapper[7484]: I0312 20:49:47.645630 7484 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 20:49:47.646058 master-0 kubenswrapper[7484]: I0312 20:49:47.645700 7484 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 12 20:49:47.646232 master-0 kubenswrapper[7484]: I0312 20:49:47.646097 7484 topology_manager.go:138] "Creating topology manager with none policy" Mar 12 20:49:47.646232 master-0 kubenswrapper[7484]: I0312 20:49:47.646115 7484 container_manager_linux.go:303] "Creating device plugin manager" Mar 12 20:49:47.646232 master-0 kubenswrapper[7484]: I0312 20:49:47.646130 7484 manager.go:142] 
"Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 12 20:49:47.646232 master-0 kubenswrapper[7484]: I0312 20:49:47.646170 7484 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 12 20:49:47.646439 master-0 kubenswrapper[7484]: I0312 20:49:47.646407 7484 state_mem.go:36] "Initialized new in-memory state store" Mar 12 20:49:47.646600 master-0 kubenswrapper[7484]: I0312 20:49:47.646568 7484 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 12 20:49:47.646696 master-0 kubenswrapper[7484]: I0312 20:49:47.646667 7484 kubelet.go:418] "Attempting to sync node with API server" Mar 12 20:49:47.646745 master-0 kubenswrapper[7484]: I0312 20:49:47.646696 7484 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 20:49:47.646745 master-0 kubenswrapper[7484]: I0312 20:49:47.646721 7484 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 12 20:49:47.646745 master-0 kubenswrapper[7484]: I0312 20:49:47.646742 7484 kubelet.go:324] "Adding apiserver pod source" Mar 12 20:49:47.646906 master-0 kubenswrapper[7484]: I0312 20:49:47.646760 7484 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 20:49:47.651759 master-0 kubenswrapper[7484]: I0312 20:49:47.651692 7484 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 12 20:49:47.652091 master-0 kubenswrapper[7484]: I0312 20:49:47.652051 7484 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Mar 12 20:49:47.652647 master-0 kubenswrapper[7484]: I0312 20:49:47.652610 7484 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 12 20:49:47.652873 master-0 kubenswrapper[7484]: I0312 20:49:47.652838 7484 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 12 20:49:47.652922 master-0 kubenswrapper[7484]: I0312 20:49:47.652880 7484 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 12 20:49:47.652922 master-0 kubenswrapper[7484]: I0312 20:49:47.652896 7484 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 12 20:49:47.652922 master-0 kubenswrapper[7484]: I0312 20:49:47.652909 7484 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 12 20:49:47.652922 master-0 kubenswrapper[7484]: I0312 20:49:47.652921 7484 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 12 20:49:47.653103 master-0 kubenswrapper[7484]: I0312 20:49:47.652934 7484 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 12 20:49:47.653103 master-0 kubenswrapper[7484]: I0312 20:49:47.652948 7484 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 12 20:49:47.653103 master-0 kubenswrapper[7484]: I0312 20:49:47.652960 7484 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 12 20:49:47.653103 master-0 kubenswrapper[7484]: I0312 20:49:47.652976 7484 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 12 20:49:47.653103 master-0 kubenswrapper[7484]: I0312 20:49:47.652990 7484 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 12 20:49:47.653103 master-0 kubenswrapper[7484]: I0312 20:49:47.653007 7484 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 12 20:49:47.653103 master-0 kubenswrapper[7484]: I0312 20:49:47.653030 7484 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/local-volume" Mar 12 20:49:47.653103 master-0 kubenswrapper[7484]: I0312 20:49:47.653072 7484 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 12 20:49:47.653757 master-0 kubenswrapper[7484]: I0312 20:49:47.653717 7484 server.go:1280] "Started kubelet" Mar 12 20:49:47.656727 master-0 kubenswrapper[7484]: I0312 20:49:47.656274 7484 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 20:49:47.656727 master-0 kubenswrapper[7484]: I0312 20:49:47.656486 7484 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 12 20:49:47.656991 master-0 kubenswrapper[7484]: I0312 20:49:47.656757 7484 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 20:49:47.657201 master-0 kubenswrapper[7484]: I0312 20:49:47.657158 7484 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 20:49:47.657964 master-0 kubenswrapper[7484]: I0312 20:49:47.657912 7484 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 12 20:49:47.662966 master-0 kubenswrapper[7484]: I0312 20:49:47.660165 7484 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 12 20:49:47.662966 master-0 kubenswrapper[7484]: I0312 20:49:47.660285 7484 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 12 20:49:47.662966 master-0 kubenswrapper[7484]: I0312 20:49:47.660322 7484 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-13 20:40:02 +0000 UTC, rotation deadline is 2026-03-13 14:54:34.447741816 +0000 UTC Mar 12 20:49:47.662966 master-0 kubenswrapper[7484]: I0312 20:49:47.660400 7484 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 18h4m46.787343795s for next certificate rotation Mar 12 20:49:47.662966 master-0 kubenswrapper[7484]: I0312 20:49:47.660643 7484 
volume_manager.go:287] "The desired_state_of_world populator starts" Mar 12 20:49:47.662966 master-0 kubenswrapper[7484]: I0312 20:49:47.660681 7484 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 12 20:49:47.662966 master-0 kubenswrapper[7484]: I0312 20:49:47.661017 7484 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Mar 12 20:49:47.662966 master-0 kubenswrapper[7484]: I0312 20:49:47.662017 7484 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 12 20:49:47.662966 master-0 kubenswrapper[7484]: I0312 20:49:47.662201 7484 factory.go:153] Registering CRI-O factory Mar 12 20:49:47.662966 master-0 kubenswrapper[7484]: I0312 20:49:47.662235 7484 factory.go:221] Registration of the crio container factory successfully Mar 12 20:49:47.662966 master-0 kubenswrapper[7484]: I0312 20:49:47.662331 7484 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 12 20:49:47.662966 master-0 kubenswrapper[7484]: I0312 20:49:47.662342 7484 factory.go:55] Registering systemd factory Mar 12 20:49:47.662966 master-0 kubenswrapper[7484]: I0312 20:49:47.662351 7484 factory.go:221] Registration of the systemd container factory successfully Mar 12 20:49:47.662966 master-0 kubenswrapper[7484]: I0312 20:49:47.662384 7484 factory.go:103] Registering Raw factory Mar 12 20:49:47.662966 master-0 kubenswrapper[7484]: I0312 20:49:47.662412 7484 manager.go:1196] Started watching for new ooms in manager Mar 12 20:49:47.662966 master-0 kubenswrapper[7484]: I0312 20:49:47.662786 7484 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 12 20:49:47.660710 master-0 systemd[1]: Started Kubernetes Kubelet. 
Mar 12 20:49:47.663921 master-0 kubenswrapper[7484]: I0312 20:49:47.663798 7484 manager.go:319] Starting recovery of all containers Mar 12 20:49:47.664337 master-0 kubenswrapper[7484]: I0312 20:49:47.664307 7484 server.go:449] "Adding debug handlers to kubelet server" Mar 12 20:49:47.678855 master-0 kubenswrapper[7484]: I0312 20:49:47.678738 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="02649264-040a-41a6-9a41-8bf6416c68ff" volumeName="kubernetes.io/projected/02649264-040a-41a6-9a41-8bf6416c68ff-kube-api-access-k5v9f" seLinuxMountContext="" Mar 12 20:49:47.679023 master-0 kubenswrapper[7484]: I0312 20:49:47.679001 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15ebfbd8-0782-431a-88a3-83af328498d2" volumeName="kubernetes.io/secret/15ebfbd8-0782-431a-88a3-83af328498d2-serving-cert" seLinuxMountContext="" Mar 12 20:49:47.679120 master-0 kubenswrapper[7484]: I0312 20:49:47.679103 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b71f537-1cc2-4645-8e50-23941635457c" volumeName="kubernetes.io/projected/2b71f537-1cc2-4645-8e50-23941635457c-bound-sa-token" seLinuxMountContext="" Mar 12 20:49:47.679799 master-0 kubenswrapper[7484]: I0312 20:49:47.679771 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3daeefa-7842-464c-a6c9-01b44ebea477" volumeName="kubernetes.io/configmap/c3daeefa-7842-464c-a6c9-01b44ebea477-env-overrides" seLinuxMountContext="" Mar 12 20:49:47.679959 master-0 kubenswrapper[7484]: I0312 20:49:47.679941 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c8660437-633f-4132-8a61-fe998abb493e" volumeName="kubernetes.io/projected/c8660437-633f-4132-8a61-fe998abb493e-kube-api-access-zlch7" seLinuxMountContext="" Mar 12 20:49:47.680050 master-0 kubenswrapper[7484]: I0312 
20:49:47.680035 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96bd86df-2101-47f5-844b-1332261c66f1" volumeName="kubernetes.io/projected/96bd86df-2101-47f5-844b-1332261c66f1-kube-api-access" seLinuxMountContext="" Mar 12 20:49:47.680156 master-0 kubenswrapper[7484]: I0312 20:49:47.680136 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="980191fe-c62c-4b9e-879c-38fa8ce0a58b" volumeName="kubernetes.io/secret/980191fe-c62c-4b9e-879c-38fa8ce0a58b-serving-cert" seLinuxMountContext="" Mar 12 20:49:47.680253 master-0 kubenswrapper[7484]: I0312 20:49:47.680236 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a3bebf49-1d92-4353-b84c-91ed86b7bb94" volumeName="kubernetes.io/configmap/a3bebf49-1d92-4353-b84c-91ed86b7bb94-service-ca-bundle" seLinuxMountContext="" Mar 12 20:49:47.680343 master-0 kubenswrapper[7484]: I0312 20:49:47.680328 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6" volumeName="kubernetes.io/projected/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6-kube-api-access-2kng9" seLinuxMountContext="" Mar 12 20:49:47.680431 master-0 kubenswrapper[7484]: I0312 20:49:47.680416 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="426efd5c-69e1-43e5-835a-6e1c4ef85720" volumeName="kubernetes.io/configmap/426efd5c-69e1-43e5-835a-6e1c4ef85720-env-overrides" seLinuxMountContext="" Mar 12 20:49:47.680508 master-0 kubenswrapper[7484]: I0312 20:49:47.680492 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5471994f-769e-4124-b7d0-01f5358fc18f" volumeName="kubernetes.io/configmap/5471994f-769e-4124-b7d0-01f5358fc18f-etcd-service-ca" seLinuxMountContext="" Mar 12 20:49:47.680620 master-0 kubenswrapper[7484]: I0312 
20:49:47.680597 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70e54b24-bf9d-42a8-b012-c7b073c6f6a6" volumeName="kubernetes.io/projected/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-kube-api-access-mfsvw" seLinuxMountContext="" Mar 12 20:49:47.680734 master-0 kubenswrapper[7484]: I0312 20:49:47.680711 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="226cb3a1-984f-4410-96e6-c007131dc074" volumeName="kubernetes.io/projected/226cb3a1-984f-4410-96e6-c007131dc074-kube-api-access-b9z6l" seLinuxMountContext="" Mar 12 20:49:47.680880 master-0 kubenswrapper[7484]: I0312 20:49:47.680861 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b71f537-1cc2-4645-8e50-23941635457c" volumeName="kubernetes.io/configmap/2b71f537-1cc2-4645-8e50-23941635457c-trusted-ca" seLinuxMountContext="" Mar 12 20:49:47.681002 master-0 kubenswrapper[7484]: I0312 20:49:47.680960 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="426efd5c-69e1-43e5-835a-6e1c4ef85720" volumeName="kubernetes.io/projected/426efd5c-69e1-43e5-835a-6e1c4ef85720-kube-api-access-8rjm8" seLinuxMountContext="" Mar 12 20:49:47.681089 master-0 kubenswrapper[7484]: I0312 20:49:47.681073 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="54184647-6e9a-43f7-90b1-5d8815f8b1ab" volumeName="kubernetes.io/projected/54184647-6e9a-43f7-90b1-5d8815f8b1ab-kube-api-access-kzwrw" seLinuxMountContext="" Mar 12 20:49:47.681175 master-0 kubenswrapper[7484]: I0312 20:49:47.681158 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a2545a80-0f00-4b19-ab3b-a9aa4bff98e8" volumeName="kubernetes.io/projected/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-kube-api-access-7bk7q" seLinuxMountContext="" Mar 12 20:49:47.681259 master-0 
kubenswrapper[7484]: I0312 20:49:47.681243 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a3bebf49-1d92-4353-b84c-91ed86b7bb94" volumeName="kubernetes.io/secret/a3bebf49-1d92-4353-b84c-91ed86b7bb94-serving-cert" seLinuxMountContext="" Mar 12 20:49:47.681335 master-0 kubenswrapper[7484]: I0312 20:49:47.681320 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3daeefa-7842-464c-a6c9-01b44ebea477" volumeName="kubernetes.io/secret/c3daeefa-7842-464c-a6c9-01b44ebea477-ovn-node-metrics-cert" seLinuxMountContext="" Mar 12 20:49:47.681415 master-0 kubenswrapper[7484]: I0312 20:49:47.681400 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d862a346-ec4d-46f6-a3e2-ea8759ea0111" volumeName="kubernetes.io/secret/d862a346-ec4d-46f6-a3e2-ea8759ea0111-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 12 20:49:47.681501 master-0 kubenswrapper[7484]: I0312 20:49:47.681486 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e624e623-6d59-444d-b548-165fa5fd2581" volumeName="kubernetes.io/configmap/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-trusted-ca" seLinuxMountContext="" Mar 12 20:49:47.681582 master-0 kubenswrapper[7484]: I0312 20:49:47.681567 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="02649264-040a-41a6-9a41-8bf6416c68ff" volumeName="kubernetes.io/configmap/02649264-040a-41a6-9a41-8bf6416c68ff-telemetry-config" seLinuxMountContext="" Mar 12 20:49:47.681667 master-0 kubenswrapper[7484]: I0312 20:49:47.681652 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1a307172-f010-4bad-a3fc-31607574b069" volumeName="kubernetes.io/configmap/1a307172-f010-4bad-a3fc-31607574b069-service-ca" seLinuxMountContext="" Mar 12 20:49:47.681765 master-0 
kubenswrapper[7484]: I0312 20:49:47.681743 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5471994f-769e-4124-b7d0-01f5358fc18f" volumeName="kubernetes.io/configmap/5471994f-769e-4124-b7d0-01f5358fc18f-config" seLinuxMountContext="" Mar 12 20:49:47.681906 master-0 kubenswrapper[7484]: I0312 20:49:47.681885 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96bd86df-2101-47f5-844b-1332261c66f1" volumeName="kubernetes.io/secret/96bd86df-2101-47f5-844b-1332261c66f1-serving-cert" seLinuxMountContext="" Mar 12 20:49:47.682987 master-0 kubenswrapper[7484]: I0312 20:49:47.682964 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="98d99166-c42a-4169-87e8-4209570aec50" volumeName="kubernetes.io/projected/98d99166-c42a-4169-87e8-4209570aec50-kube-api-access-258hz" seLinuxMountContext="" Mar 12 20:49:47.683105 master-0 kubenswrapper[7484]: I0312 20:49:47.683089 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="617f0f9c-50d5-4214-b30f-5110fd4399ec" volumeName="kubernetes.io/projected/617f0f9c-50d5-4214-b30f-5110fd4399ec-kube-api-access-f2r2r" seLinuxMountContext="" Mar 12 20:49:47.683194 master-0 kubenswrapper[7484]: I0312 20:49:47.683178 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7623a5c6-47a9-4b75-bda8-c0a2d7c67272" volumeName="kubernetes.io/secret/7623a5c6-47a9-4b75-bda8-c0a2d7c67272-serving-cert" seLinuxMountContext="" Mar 12 20:49:47.683275 master-0 kubenswrapper[7484]: I0312 20:49:47.683260 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a3bebf49-1d92-4353-b84c-91ed86b7bb94" volumeName="kubernetes.io/configmap/a3bebf49-1d92-4353-b84c-91ed86b7bb94-config" seLinuxMountContext="" Mar 12 20:49:47.683353 master-0 kubenswrapper[7484]: I0312 
20:49:47.683339 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="07542516-49c8-4e20-9b97-798fbff850a5" volumeName="kubernetes.io/secret/07542516-49c8-4e20-9b97-798fbff850a5-serving-cert" seLinuxMountContext="" Mar 12 20:49:47.683434 master-0 kubenswrapper[7484]: I0312 20:49:47.683420 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="426efd5c-69e1-43e5-835a-6e1c4ef85720" volumeName="kubernetes.io/configmap/426efd5c-69e1-43e5-835a-6e1c4ef85720-ovnkube-identity-cm" seLinuxMountContext="" Mar 12 20:49:47.683549 master-0 kubenswrapper[7484]: I0312 20:49:47.683505 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="784599a3-a2ac-46ac-a4b7-9439704646cc" volumeName="kubernetes.io/secret/784599a3-a2ac-46ac-a4b7-9439704646cc-serving-cert" seLinuxMountContext="" Mar 12 20:49:47.683637 master-0 kubenswrapper[7484]: I0312 20:49:47.683622 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="900228dd-2d21-4759-87da-b027b0134ad8" volumeName="kubernetes.io/configmap/900228dd-2d21-4759-87da-b027b0134ad8-trusted-ca" seLinuxMountContext="" Mar 12 20:49:47.683722 master-0 kubenswrapper[7484]: I0312 20:49:47.683702 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d" volumeName="kubernetes.io/secret/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d-serving-cert" seLinuxMountContext="" Mar 12 20:49:47.684857 master-0 kubenswrapper[7484]: I0312 20:49:47.684834 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a3bebf49-1d92-4353-b84c-91ed86b7bb94" volumeName="kubernetes.io/configmap/a3bebf49-1d92-4353-b84c-91ed86b7bb94-trusted-ca-bundle" seLinuxMountContext="" Mar 12 20:49:47.685139 master-0 kubenswrapper[7484]: I0312 20:49:47.685120 7484 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="900228dd-2d21-4759-87da-b027b0134ad8" volumeName="kubernetes.io/projected/900228dd-2d21-4759-87da-b027b0134ad8-bound-sa-token" seLinuxMountContext="" Mar 12 20:49:47.685252 master-0 kubenswrapper[7484]: I0312 20:49:47.685235 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="980191fe-c62c-4b9e-879c-38fa8ce0a58b" volumeName="kubernetes.io/empty-dir/980191fe-c62c-4b9e-879c-38fa8ce0a58b-available-featuregates" seLinuxMountContext="" Mar 12 20:49:47.685345 master-0 kubenswrapper[7484]: I0312 20:49:47.685329 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9" volumeName="kubernetes.io/projected/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-kube-api-access-2lltk" seLinuxMountContext="" Mar 12 20:49:47.685423 master-0 kubenswrapper[7484]: I0312 20:49:47.685407 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a2545a80-0f00-4b19-ab3b-a9aa4bff98e8" volumeName="kubernetes.io/configmap/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-cni-binary-copy" seLinuxMountContext="" Mar 12 20:49:47.685506 master-0 kubenswrapper[7484]: I0312 20:49:47.685491 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e624e623-6d59-444d-b548-165fa5fd2581" volumeName="kubernetes.io/projected/e624e623-6d59-444d-b548-165fa5fd2581-kube-api-access-c5c6t" seLinuxMountContext="" Mar 12 20:49:47.685591 master-0 kubenswrapper[7484]: I0312 20:49:47.685576 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15ebfbd8-0782-431a-88a3-83af328498d2" volumeName="kubernetes.io/configmap/15ebfbd8-0782-431a-88a3-83af328498d2-config" seLinuxMountContext="" Mar 12 20:49:47.685675 master-0 kubenswrapper[7484]: I0312 20:49:47.685659 7484 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70e54b24-bf9d-42a8-b012-c7b073c6f6a6" volumeName="kubernetes.io/configmap/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-cni-binary-copy" seLinuxMountContext="" Mar 12 20:49:47.685756 master-0 kubenswrapper[7484]: I0312 20:49:47.685741 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9" volumeName="kubernetes.io/configmap/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-trusted-ca" seLinuxMountContext="" Mar 12 20:49:47.685868 master-0 kubenswrapper[7484]: I0312 20:49:47.685849 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d" volumeName="kubernetes.io/configmap/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d-config" seLinuxMountContext="" Mar 12 20:49:47.686031 master-0 kubenswrapper[7484]: I0312 20:49:47.685967 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3daeefa-7842-464c-a6c9-01b44ebea477" volumeName="kubernetes.io/configmap/c3daeefa-7842-464c-a6c9-01b44ebea477-ovnkube-script-lib" seLinuxMountContext="" Mar 12 20:49:47.686082 master-0 kubenswrapper[7484]: I0312 20:49:47.686041 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6" volumeName="kubernetes.io/secret/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6-metrics-tls" seLinuxMountContext="" Mar 12 20:49:47.686082 master-0 kubenswrapper[7484]: I0312 20:49:47.686071 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="784599a3-a2ac-46ac-a4b7-9439704646cc" volumeName="kubernetes.io/projected/784599a3-a2ac-46ac-a4b7-9439704646cc-kube-api-access" seLinuxMountContext="" Mar 12 20:49:47.686163 master-0 kubenswrapper[7484]: I0312 20:49:47.686102 7484 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="855747e5-d9b4-4eef-8bc4-425d6a8e95c7" volumeName="kubernetes.io/projected/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-kube-api-access-6j7lq" seLinuxMountContext="" Mar 12 20:49:47.686163 master-0 kubenswrapper[7484]: I0312 20:49:47.686129 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="980191fe-c62c-4b9e-879c-38fa8ce0a58b" volumeName="kubernetes.io/projected/980191fe-c62c-4b9e-879c-38fa8ce0a58b-kube-api-access-2wt5q" seLinuxMountContext="" Mar 12 20:49:47.686163 master-0 kubenswrapper[7484]: I0312 20:49:47.686152 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="07330030-487d-4fa6-b5c3-67607355bbba" volumeName="kubernetes.io/projected/07330030-487d-4fa6-b5c3-67607355bbba-kube-api-access-bhcsd" seLinuxMountContext="" Mar 12 20:49:47.686277 master-0 kubenswrapper[7484]: I0312 20:49:47.686174 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15ebfbd8-0782-431a-88a3-83af328498d2" volumeName="kubernetes.io/projected/15ebfbd8-0782-431a-88a3-83af328498d2-kube-api-access-mbbc5" seLinuxMountContext="" Mar 12 20:49:47.686277 master-0 kubenswrapper[7484]: I0312 20:49:47.686197 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="617f0f9c-50d5-4214-b30f-5110fd4399ec" volumeName="kubernetes.io/configmap/617f0f9c-50d5-4214-b30f-5110fd4399ec-iptables-alerter-script" seLinuxMountContext="" Mar 12 20:49:47.686277 master-0 kubenswrapper[7484]: I0312 20:49:47.686235 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70e54b24-bf9d-42a8-b012-c7b073c6f6a6" volumeName="kubernetes.io/configmap/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-multus-daemon-config" seLinuxMountContext="" Mar 12 20:49:47.686277 master-0 kubenswrapper[7484]: I0312 20:49:47.686264 7484 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96bd86df-2101-47f5-844b-1332261c66f1" volumeName="kubernetes.io/configmap/96bd86df-2101-47f5-844b-1332261c66f1-config" seLinuxMountContext="" Mar 12 20:49:47.686432 master-0 kubenswrapper[7484]: I0312 20:49:47.686294 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d862a346-ec4d-46f6-a3e2-ea8759ea0111" volumeName="kubernetes.io/configmap/d862a346-ec4d-46f6-a3e2-ea8759ea0111-ovnkube-config" seLinuxMountContext="" Mar 12 20:49:47.686432 master-0 kubenswrapper[7484]: I0312 20:49:47.686320 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d862a346-ec4d-46f6-a3e2-ea8759ea0111" volumeName="kubernetes.io/projected/d862a346-ec4d-46f6-a3e2-ea8759ea0111-kube-api-access-jx64q" seLinuxMountContext="" Mar 12 20:49:47.686432 master-0 kubenswrapper[7484]: I0312 20:49:47.686344 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1a307172-f010-4bad-a3fc-31607574b069" volumeName="kubernetes.io/projected/1a307172-f010-4bad-a3fc-31607574b069-kube-api-access" seLinuxMountContext="" Mar 12 20:49:47.686432 master-0 kubenswrapper[7484]: I0312 20:49:47.686366 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="226cb3a1-984f-4410-96e6-c007131dc074" volumeName="kubernetes.io/secret/226cb3a1-984f-4410-96e6-c007131dc074-cluster-olm-operator-serving-cert" seLinuxMountContext="" Mar 12 20:49:47.686432 master-0 kubenswrapper[7484]: I0312 20:49:47.686392 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5471994f-769e-4124-b7d0-01f5358fc18f" volumeName="kubernetes.io/projected/5471994f-769e-4124-b7d0-01f5358fc18f-kube-api-access-f7rrv" seLinuxMountContext="" Mar 12 20:49:47.686432 master-0 kubenswrapper[7484]: I0312 20:49:47.686420 
7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="900228dd-2d21-4759-87da-b027b0134ad8" volumeName="kubernetes.io/projected/900228dd-2d21-4759-87da-b027b0134ad8-kube-api-access-rvkp7" seLinuxMountContext="" Mar 12 20:49:47.686641 master-0 kubenswrapper[7484]: I0312 20:49:47.686448 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3daeefa-7842-464c-a6c9-01b44ebea477" volumeName="kubernetes.io/configmap/c3daeefa-7842-464c-a6c9-01b44ebea477-ovnkube-config" seLinuxMountContext="" Mar 12 20:49:47.686641 master-0 kubenswrapper[7484]: I0312 20:49:47.686476 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2604b035-853c-42b7-a562-07d46178868a" volumeName="kubernetes.io/projected/2604b035-853c-42b7-a562-07d46178868a-kube-api-access-clp9l" seLinuxMountContext="" Mar 12 20:49:47.686641 master-0 kubenswrapper[7484]: I0312 20:49:47.686503 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a67ecf3-823d-4948-a5cb-8bd1eb9f259c" volumeName="kubernetes.io/configmap/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c-config" seLinuxMountContext="" Mar 12 20:49:47.686641 master-0 kubenswrapper[7484]: I0312 20:49:47.686531 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a67ecf3-823d-4948-a5cb-8bd1eb9f259c" volumeName="kubernetes.io/projected/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c-kube-api-access" seLinuxMountContext="" Mar 12 20:49:47.686641 master-0 kubenswrapper[7484]: I0312 20:49:47.686564 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5471994f-769e-4124-b7d0-01f5358fc18f" volumeName="kubernetes.io/secret/5471994f-769e-4124-b7d0-01f5358fc18f-serving-cert" seLinuxMountContext="" Mar 12 20:49:47.686641 master-0 kubenswrapper[7484]: I0312 20:49:47.686591 7484 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7623a5c6-47a9-4b75-bda8-c0a2d7c67272" volumeName="kubernetes.io/configmap/7623a5c6-47a9-4b75-bda8-c0a2d7c67272-config" seLinuxMountContext="" Mar 12 20:49:47.686641 master-0 kubenswrapper[7484]: I0312 20:49:47.686631 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="07542516-49c8-4e20-9b97-798fbff850a5" volumeName="kubernetes.io/projected/07542516-49c8-4e20-9b97-798fbff850a5-kube-api-access-z9xld" seLinuxMountContext="" Mar 12 20:49:47.686966 master-0 kubenswrapper[7484]: I0312 20:49:47.686657 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="226cb3a1-984f-4410-96e6-c007131dc074" volumeName="kubernetes.io/empty-dir/226cb3a1-984f-4410-96e6-c007131dc074-operand-assets" seLinuxMountContext="" Mar 12 20:49:47.686966 master-0 kubenswrapper[7484]: I0312 20:49:47.686680 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d" volumeName="kubernetes.io/projected/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d-kube-api-access-577p4" seLinuxMountContext="" Mar 12 20:49:47.686966 master-0 kubenswrapper[7484]: I0312 20:49:47.686703 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a2545a80-0f00-4b19-ab3b-a9aa4bff98e8" volumeName="kubernetes.io/configmap/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-cni-sysctl-allowlist" seLinuxMountContext="" Mar 12 20:49:47.686966 master-0 kubenswrapper[7484]: I0312 20:49:47.686725 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a3bebf49-1d92-4353-b84c-91ed86b7bb94" volumeName="kubernetes.io/projected/a3bebf49-1d92-4353-b84c-91ed86b7bb94-kube-api-access-2w68c" seLinuxMountContext="" Mar 12 20:49:47.686966 master-0 kubenswrapper[7484]: I0312 20:49:47.686781 7484 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="07542516-49c8-4e20-9b97-798fbff850a5" volumeName="kubernetes.io/configmap/07542516-49c8-4e20-9b97-798fbff850a5-config" seLinuxMountContext="" Mar 12 20:49:47.686966 master-0 kubenswrapper[7484]: I0312 20:49:47.686837 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5471994f-769e-4124-b7d0-01f5358fc18f" volumeName="kubernetes.io/configmap/5471994f-769e-4124-b7d0-01f5358fc18f-etcd-ca" seLinuxMountContext="" Mar 12 20:49:47.686966 master-0 kubenswrapper[7484]: I0312 20:49:47.686860 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5471994f-769e-4124-b7d0-01f5358fc18f" volumeName="kubernetes.io/secret/5471994f-769e-4124-b7d0-01f5358fc18f-etcd-client" seLinuxMountContext="" Mar 12 20:49:47.686966 master-0 kubenswrapper[7484]: I0312 20:49:47.686883 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7623a5c6-47a9-4b75-bda8-c0a2d7c67272" volumeName="kubernetes.io/projected/7623a5c6-47a9-4b75-bda8-c0a2d7c67272-kube-api-access-q78vj" seLinuxMountContext="" Mar 12 20:49:47.686966 master-0 kubenswrapper[7484]: I0312 20:49:47.686911 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a2545a80-0f00-4b19-ab3b-a9aa4bff98e8" volumeName="kubernetes.io/configmap/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-whereabouts-configmap" seLinuxMountContext="" Mar 12 20:49:47.686966 master-0 kubenswrapper[7484]: I0312 20:49:47.686939 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3daeefa-7842-464c-a6c9-01b44ebea477" volumeName="kubernetes.io/projected/c3daeefa-7842-464c-a6c9-01b44ebea477-kube-api-access-jrk7w" seLinuxMountContext="" Mar 12 20:49:47.686966 master-0 kubenswrapper[7484]: I0312 20:49:47.686963 7484 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="d862a346-ec4d-46f6-a3e2-ea8759ea0111" volumeName="kubernetes.io/configmap/d862a346-ec4d-46f6-a3e2-ea8759ea0111-env-overrides" seLinuxMountContext="" Mar 12 20:49:47.687436 master-0 kubenswrapper[7484]: I0312 20:49:47.686994 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f8f4400c-474c-480f-b46c-cf7c80555004" volumeName="kubernetes.io/projected/f8f4400c-474c-480f-b46c-cf7c80555004-kube-api-access-vjh5f" seLinuxMountContext="" Mar 12 20:49:47.687436 master-0 kubenswrapper[7484]: I0312 20:49:47.687026 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b71f537-1cc2-4645-8e50-23941635457c" volumeName="kubernetes.io/projected/2b71f537-1cc2-4645-8e50-23941635457c-kube-api-access-8vvf6" seLinuxMountContext="" Mar 12 20:49:47.687436 master-0 kubenswrapper[7484]: I0312 20:49:47.687052 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="426efd5c-69e1-43e5-835a-6e1c4ef85720" volumeName="kubernetes.io/secret/426efd5c-69e1-43e5-835a-6e1c4ef85720-webhook-cert" seLinuxMountContext="" Mar 12 20:49:47.687436 master-0 kubenswrapper[7484]: I0312 20:49:47.687077 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a67ecf3-823d-4948-a5cb-8bd1eb9f259c" volumeName="kubernetes.io/secret/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c-serving-cert" seLinuxMountContext="" Mar 12 20:49:47.687436 master-0 kubenswrapper[7484]: I0312 20:49:47.687105 7484 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="784599a3-a2ac-46ac-a4b7-9439704646cc" volumeName="kubernetes.io/configmap/784599a3-a2ac-46ac-a4b7-9439704646cc-config" seLinuxMountContext="" Mar 12 20:49:47.687436 master-0 kubenswrapper[7484]: I0312 20:49:47.687135 7484 reconstruct.go:97] "Volume reconstruction 
finished" Mar 12 20:49:47.687436 master-0 kubenswrapper[7484]: I0312 20:49:47.687155 7484 reconciler.go:26] "Reconciler: start to sync state" Mar 12 20:49:47.690639 master-0 kubenswrapper[7484]: I0312 20:49:47.690613 7484 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 12 20:49:47.729386 master-0 kubenswrapper[7484]: I0312 20:49:47.729277 7484 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 12 20:49:47.732095 master-0 kubenswrapper[7484]: I0312 20:49:47.732049 7484 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 12 20:49:47.732163 master-0 kubenswrapper[7484]: I0312 20:49:47.732126 7484 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 12 20:49:47.732227 master-0 kubenswrapper[7484]: I0312 20:49:47.732163 7484 kubelet.go:2335] "Starting kubelet main sync loop" Mar 12 20:49:47.732268 master-0 kubenswrapper[7484]: E0312 20:49:47.732241 7484 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 20:49:47.734818 master-0 kubenswrapper[7484]: I0312 20:49:47.734769 7484 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 12 20:49:47.761128 master-0 kubenswrapper[7484]: I0312 20:49:47.760849 7484 generic.go:334] "Generic (PLEG): container finished" podID="4730d5f8-ab17-4ba2-ae27-d2de62821372" containerID="53c0edcd8673398e4384f928bbaa2737b8e228fa73c0aad115798fc1550e14b6" exitCode=0 Mar 12 20:49:47.781011 master-0 kubenswrapper[7484]: I0312 20:49:47.780938 7484 generic.go:334] "Generic (PLEG): container finished" podID="d87b7a20-047e-4521-996c-9b11d81e9bd0" containerID="2782822a08b1aa7b74a8813bdda6c24b76842bfecde841229b05dc04dcc388f3" exitCode=0 Mar 12 20:49:47.795206 master-0 kubenswrapper[7484]: I0312 20:49:47.795152 7484 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log" Mar 12 20:49:47.795662 master-0 kubenswrapper[7484]: I0312 20:49:47.795610 7484 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="faa71480f217fad716866bc98bd8270b2f07bd2a29f5aa069d90b575671a024e" exitCode=1 Mar 12 20:49:47.795662 master-0 kubenswrapper[7484]: I0312 20:49:47.795659 7484 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="5aa72aa1d101c59af48adafd81202e715494ce655baaeb5ca917a23de1012db8" exitCode=0 Mar 12 20:49:47.806612 master-0 kubenswrapper[7484]: I0312 20:49:47.805198 7484 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="30bcb0d2fdcb56e224f2a443567cf3f56d89a253adb3d5c2682e4fce2aac1458" exitCode=0 Mar 12 20:49:47.812991 master-0 kubenswrapper[7484]: I0312 20:49:47.812935 7484 generic.go:334] "Generic (PLEG): container finished" podID="c3daeefa-7842-464c-a6c9-01b44ebea477" containerID="29a66354284f4876d7830823c349cadde817f41becb6c2b46ab19ae09fa84f0c" exitCode=0 Mar 12 20:49:47.817501 master-0 kubenswrapper[7484]: I0312 20:49:47.817455 7484 generic.go:334] "Generic (PLEG): container finished" podID="a2545a80-0f00-4b19-ab3b-a9aa4bff98e8" containerID="dff388636097d32c6363bd0b2483f1d9c5210a858615e76eaa57853e4405a2b0" exitCode=0 Mar 12 20:49:47.817501 master-0 kubenswrapper[7484]: I0312 20:49:47.817496 7484 generic.go:334] "Generic (PLEG): container finished" podID="a2545a80-0f00-4b19-ab3b-a9aa4bff98e8" containerID="583c873e3d835c6e05c94172cd7043791e47625e0cc941a8a498c15d7dcde4e3" exitCode=0 Mar 12 20:49:47.817653 master-0 kubenswrapper[7484]: I0312 20:49:47.817506 7484 generic.go:334] "Generic (PLEG): container finished" podID="a2545a80-0f00-4b19-ab3b-a9aa4bff98e8" containerID="ba582835d70280ab686cd92c06c36d3f8c1b51d4a50b6f6d872889ebb52af604" 
exitCode=0 Mar 12 20:49:47.817653 master-0 kubenswrapper[7484]: I0312 20:49:47.817515 7484 generic.go:334] "Generic (PLEG): container finished" podID="a2545a80-0f00-4b19-ab3b-a9aa4bff98e8" containerID="f5be33e5e1cb19154b4137bf5e307d01b21c816569a4f493dfb02ba284a02c43" exitCode=0 Mar 12 20:49:47.817653 master-0 kubenswrapper[7484]: I0312 20:49:47.817524 7484 generic.go:334] "Generic (PLEG): container finished" podID="a2545a80-0f00-4b19-ab3b-a9aa4bff98e8" containerID="4ffd6f14ac61ffabe5bcfc6578f791f07638af2dede3fe79398a339525e37d25" exitCode=0 Mar 12 20:49:47.817653 master-0 kubenswrapper[7484]: I0312 20:49:47.817531 7484 generic.go:334] "Generic (PLEG): container finished" podID="a2545a80-0f00-4b19-ab3b-a9aa4bff98e8" containerID="f1489aa28f1df9edd0eec54c9b66a8a7e1d73e8d6be27d02b6cab3f145aeea26" exitCode=0 Mar 12 20:49:47.827027 master-0 kubenswrapper[7484]: I0312 20:49:47.826985 7484 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="75f2edc443b69729f543241a91ed5a8e5413482100b656bdfab3d5233a2312c3" exitCode=1 Mar 12 20:49:47.832565 master-0 kubenswrapper[7484]: E0312 20:49:47.832336 7484 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 12 20:49:47.854830 master-0 kubenswrapper[7484]: I0312 20:49:47.854769 7484 manager.go:324] Recovery completed Mar 12 20:49:47.892423 master-0 kubenswrapper[7484]: I0312 20:49:47.892350 7484 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 12 20:49:47.892423 master-0 kubenswrapper[7484]: I0312 20:49:47.892402 7484 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 12 20:49:47.892658 master-0 kubenswrapper[7484]: I0312 20:49:47.892473 7484 state_mem.go:36] "Initialized new in-memory state store" Mar 12 20:49:47.892853 master-0 kubenswrapper[7484]: I0312 20:49:47.892795 7484 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 12 20:49:47.892889 master-0 kubenswrapper[7484]: I0312 
20:49:47.892843 7484 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 12 20:49:47.892889 master-0 kubenswrapper[7484]: I0312 20:49:47.892872 7484 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint" Mar 12 20:49:47.892889 master-0 kubenswrapper[7484]: I0312 20:49:47.892879 7484 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Mar 12 20:49:47.892889 master-0 kubenswrapper[7484]: I0312 20:49:47.892886 7484 policy_none.go:49] "None policy: Start" Mar 12 20:49:47.898461 master-0 kubenswrapper[7484]: I0312 20:49:47.898425 7484 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 12 20:49:47.898522 master-0 kubenswrapper[7484]: I0312 20:49:47.898481 7484 state_mem.go:35] "Initializing new in-memory state store" Mar 12 20:49:47.898951 master-0 kubenswrapper[7484]: I0312 20:49:47.898932 7484 state_mem.go:75] "Updated machine memory state" Mar 12 20:49:47.898951 master-0 kubenswrapper[7484]: I0312 20:49:47.898947 7484 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Mar 12 20:49:47.923051 master-0 kubenswrapper[7484]: I0312 20:49:47.922908 7484 manager.go:334] "Starting Device Plugin manager" Mar 12 20:49:47.923051 master-0 kubenswrapper[7484]: I0312 20:49:47.922982 7484 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 12 20:49:47.923051 master-0 kubenswrapper[7484]: I0312 20:49:47.923007 7484 server.go:79] "Starting device plugin registration server" Mar 12 20:49:47.923660 master-0 kubenswrapper[7484]: I0312 20:49:47.923629 7484 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 12 20:49:47.923903 master-0 kubenswrapper[7484]: I0312 20:49:47.923666 7484 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 20:49:47.924070 master-0 kubenswrapper[7484]: I0312 20:49:47.924037 7484 plugin_watcher.go:51] "Plugin Watcher 
Start" path="/var/lib/kubelet/plugins_registry" Mar 12 20:49:47.924204 master-0 kubenswrapper[7484]: I0312 20:49:47.924177 7484 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 12 20:49:47.924204 master-0 kubenswrapper[7484]: I0312 20:49:47.924200 7484 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 12 20:49:48.024887 master-0 kubenswrapper[7484]: I0312 20:49:48.024798 7484 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 20:49:48.027878 master-0 kubenswrapper[7484]: I0312 20:49:48.027847 7484 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 20:49:48.027964 master-0 kubenswrapper[7484]: I0312 20:49:48.027953 7484 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 20:49:48.028033 master-0 kubenswrapper[7484]: I0312 20:49:48.028024 7484 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 20:49:48.028190 master-0 kubenswrapper[7484]: I0312 20:49:48.028180 7484 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 12 20:49:48.033669 master-0 kubenswrapper[7484]: I0312 20:49:48.033577 7484 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0"] Mar 12 20:49:48.035560 master-0 kubenswrapper[7484]: I0312 20:49:48.035123 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"dc7d8b29ebb567785e771d22b9996a6a97141570cdafc6702bfef40b35ac45e8"} Mar 12 
20:49:48.035616 master-0 kubenswrapper[7484]: I0312 20:49:48.035560 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"a980b97dcc609420950f26f74c5117d5a01a8f15aad34b4d8b39606d13541a42"} Mar 12 20:49:48.035616 master-0 kubenswrapper[7484]: I0312 20:49:48.035584 7484 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4efb65dddad13be04b474d4d401ef6dac8f4008861ce066cadd23656ae7ded22" Mar 12 20:49:48.035616 master-0 kubenswrapper[7484]: I0312 20:49:48.035604 7484 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbde71f4d6a08e6432aff49678942efe1e239e2a38fc8d45e30b413ea5aea68e" Mar 12 20:49:48.035697 master-0 kubenswrapper[7484]: I0312 20:49:48.035646 7484 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f50107dedd1c9152a5e5a3ba57f0fbbfdfa748f7e7733cd6fddf45dabf0eb60d" Mar 12 20:49:48.035697 master-0 kubenswrapper[7484]: I0312 20:49:48.035660 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"32b57ce4e66fc70ca937a57ebca0915b26069ef8bb25e1ae1b25bda655e0ef63"} Mar 12 20:49:48.035697 master-0 kubenswrapper[7484]: I0312 20:49:48.035675 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"2345e4b4a496bb5d1af4b4d3dcfdac80e0d3cab03968a70bb1a28a27cbc4f272"} Mar 12 20:49:48.035697 master-0 kubenswrapper[7484]: I0312 20:49:48.035686 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"bcb1938b5b091e5043b0e5f8777ba9dca967bde96ecf2d35469ff9b727211cb7"} Mar 12 
20:49:48.035795 master-0 kubenswrapper[7484]: I0312 20:49:48.035701 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"6f5c19a3178e0ac81f6a0a19cf655238a7d3c02526a49af4ee450188873df923"} Mar 12 20:49:48.035795 master-0 kubenswrapper[7484]: I0312 20:49:48.035713 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"faa71480f217fad716866bc98bd8270b2f07bd2a29f5aa069d90b575671a024e"} Mar 12 20:49:48.035795 master-0 kubenswrapper[7484]: I0312 20:49:48.035724 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"5aa72aa1d101c59af48adafd81202e715494ce655baaeb5ca917a23de1012db8"} Mar 12 20:49:48.035795 master-0 kubenswrapper[7484]: I0312 20:49:48.035735 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"565b353628a1ea63b479d26fa571cd76b79a30c51d66ca013ff8e18be2cee52e"} Mar 12 20:49:48.035795 master-0 kubenswrapper[7484]: I0312 20:49:48.035748 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"293b592a6aebbbbed58da86d9dee8f9df9bbf7c626aca82c95e65d3a571789d2"} Mar 12 20:49:48.035795 master-0 kubenswrapper[7484]: I0312 20:49:48.035764 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" 
event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"0c4f41c6272feddd07ae16e6e9ba5929d190e5949f49ce16a888e464f3277bb3"} Mar 12 20:49:48.035795 master-0 kubenswrapper[7484]: I0312 20:49:48.035775 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerDied","Data":"30bcb0d2fdcb56e224f2a443567cf3f56d89a253adb3d5c2682e4fce2aac1458"} Mar 12 20:49:48.035795 master-0 kubenswrapper[7484]: I0312 20:49:48.035786 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"1ebefd5475e972825bea2703209db4a6c19fbc87674636be31770baa8cd7873b"} Mar 12 20:49:48.036092 master-0 kubenswrapper[7484]: I0312 20:49:48.035847 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"803468c92847be9ff6518c968fe3dd17c7c93344e029130c1bfa6744bc5862bf"} Mar 12 20:49:48.036092 master-0 kubenswrapper[7484]: I0312 20:49:48.035864 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"84660712d67f9b33ee49163616de93d3b9e986937307a8e3781dd5d5f489844c"} Mar 12 20:49:48.036092 master-0 kubenswrapper[7484]: I0312 20:49:48.035880 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"75f2edc443b69729f543241a91ed5a8e5413482100b656bdfab3d5233a2312c3"} Mar 12 20:49:48.036092 master-0 kubenswrapper[7484]: I0312 20:49:48.035891 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"2a343ab165ef6275fd2082338584606fe4211638edf52ee8d11b7168b526ca52"} Mar 12 20:49:48.051419 master-0 kubenswrapper[7484]: E0312 20:49:48.050316 7484 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 20:49:48.051419 master-0 kubenswrapper[7484]: I0312 20:49:48.050441 7484 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Mar 12 20:49:48.051419 master-0 kubenswrapper[7484]: I0312 20:49:48.050568 7484 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Mar 12 20:49:48.051419 master-0 kubenswrapper[7484]: W0312 20:49:48.050693 7484 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Mar 12 20:49:48.051419 master-0 kubenswrapper[7484]: E0312 20:49:48.050750 7484 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0" Mar 12 20:49:48.051419 master-0 kubenswrapper[7484]: E0312 20:49:48.050918 7484 kubelet.go:1929] "Failed 
creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:49:48.051419 master-0 kubenswrapper[7484]: E0312 20:49:48.051066 7484 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 12 20:49:48.093313 master-0 kubenswrapper[7484]: I0312 20:49:48.093245 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 20:49:48.093493 master-0 kubenswrapper[7484]: I0312 20:49:48.093323 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 20:49:48.093493 master-0 kubenswrapper[7484]: I0312 20:49:48.093431 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 12 20:49:48.093493 master-0 kubenswrapper[7484]: I0312 20:49:48.093466 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: 
\"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 12 20:49:48.093615 master-0 kubenswrapper[7484]: I0312 20:49:48.093501 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 20:49:48.093615 master-0 kubenswrapper[7484]: I0312 20:49:48.093537 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 20:49:48.093615 master-0 kubenswrapper[7484]: I0312 20:49:48.093571 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:49:48.093615 master-0 kubenswrapper[7484]: I0312 20:49:48.093599 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 12 20:49:48.093726 master-0 kubenswrapper[7484]: I0312 20:49:48.093630 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 20:49:48.093726 master-0 kubenswrapper[7484]: I0312 20:49:48.093661 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:49:48.093726 master-0 kubenswrapper[7484]: I0312 20:49:48.093690 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:49:48.093833 master-0 kubenswrapper[7484]: I0312 20:49:48.093725 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 12 20:49:48.093905 master-0 kubenswrapper[7484]: I0312 20:49:48.093754 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 20:49:48.093947 
master-0 kubenswrapper[7484]: I0312 20:49:48.093928 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 20:49:48.093980 master-0 kubenswrapper[7484]: I0312 20:49:48.093962 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 20:49:48.094011 master-0 kubenswrapper[7484]: I0312 20:49:48.093997 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 12 20:49:48.094135 master-0 kubenswrapper[7484]: I0312 20:49:48.094081 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 12 20:49:48.153079 master-0 kubenswrapper[7484]: E0312 20:49:48.152866 7484 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 20:49:48.194782 master-0 kubenswrapper[7484]: I0312 20:49:48.194485 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 20:49:48.194782 master-0 kubenswrapper[7484]: I0312 20:49:48.194535 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 20:49:48.194782 master-0 kubenswrapper[7484]: I0312 20:49:48.194554 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 20:49:48.194782 master-0 kubenswrapper[7484]: I0312 20:49:48.194568 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 12 20:49:48.194782 master-0 kubenswrapper[7484]: I0312 20:49:48.194585 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 12 20:49:48.194782 master-0 kubenswrapper[7484]: I0312 20:49:48.194605 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 20:49:48.194782 master-0 kubenswrapper[7484]: I0312 20:49:48.194641 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 20:49:48.194782 master-0 kubenswrapper[7484]: I0312 20:49:48.194655 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 12 20:49:48.194782 master-0 kubenswrapper[7484]: I0312 20:49:48.194670 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 20:49:48.195168 master-0 kubenswrapper[7484]: I0312 20:49:48.194880 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 20:49:48.195246 master-0 kubenswrapper[7484]: I0312 20:49:48.195188 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 20:49:48.195297 master-0 kubenswrapper[7484]: I0312 20:49:48.195100 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 20:49:48.195332 master-0 kubenswrapper[7484]: I0312 20:49:48.195296 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 20:49:48.195362 master-0 kubenswrapper[7484]: I0312 20:49:48.195331 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 20:49:48.195398 master-0 kubenswrapper[7484]: I0312 20:49:48.195365 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 20:49:48.195691 master-0 kubenswrapper[7484]: I0312 20:49:48.195399 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 20:49:48.195691 master-0 kubenswrapper[7484]: I0312 20:49:48.195567 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 20:49:48.195691 master-0 kubenswrapper[7484]: I0312 20:49:48.195577 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 20:49:48.195691 master-0 kubenswrapper[7484]: I0312 20:49:48.195502 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 12 20:49:48.195691 master-0 kubenswrapper[7484]: I0312 20:49:48.195485 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 20:49:48.195691 master-0 kubenswrapper[7484]: I0312 20:49:48.195545 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 20:49:48.195691 master-0 kubenswrapper[7484]: I0312 20:49:48.195581 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 20:49:48.195691 master-0 kubenswrapper[7484]: I0312 20:49:48.195642 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 20:49:48.195691 master-0 kubenswrapper[7484]: I0312 20:49:48.195653 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 20:49:48.195691 master-0 kubenswrapper[7484]: I0312 20:49:48.195689 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 20:49:48.196011 master-0 kubenswrapper[7484]: I0312 20:49:48.195518 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 20:49:48.196011 master-0 kubenswrapper[7484]: I0312 20:49:48.195658 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 12 20:49:48.196011 master-0 kubenswrapper[7484]: I0312 20:49:48.195654 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 20:49:48.196011 master-0 kubenswrapper[7484]: I0312 20:49:48.195736 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 12 20:49:48.196011 master-0 kubenswrapper[7484]: I0312 20:49:48.195890 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 12 20:49:48.196011 master-0 kubenswrapper[7484]: I0312 20:49:48.195942 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 20:49:48.196011 master-0 kubenswrapper[7484]: I0312 20:49:48.195983 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 20:49:48.196189 master-0 kubenswrapper[7484]: I0312 20:49:48.196031 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 20:49:48.196189 master-0 kubenswrapper[7484]: I0312 20:49:48.196079 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 12 20:49:48.647974 master-0 kubenswrapper[7484]: I0312 20:49:48.647899 7484 apiserver.go:52] "Watching apiserver"
Mar 12 20:49:48.661883 master-0 kubenswrapper[7484]: I0312 20:49:48.661792 7484 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 12 20:49:48.663005 master-0 kubenswrapper[7484]: I0312 20:49:48.662949 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-64488f9d78-zsd76","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-269gt","openshift-network-operator/iptables-alerter-krpjj","openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4","openshift-service-ca-operator/service-ca-operator-69b6fc6b88-f62j6","openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-jwthf","openshift-dns-operator/dns-operator-589895fbb7-tvrxp","openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw","openshift-network-operator/network-operator-7c649bf6d4-62t2f","kube-system/bootstrap-kube-scheduler-master-0","openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-56nzk","openshift-multus/multus-admission-controller-8d675b596-98j9w","openshift-multus/network-metrics-daemon-brdcd","openshift-network-diagnostics/network-check-target-h26wj","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-qfbrj","openshift-network-node-identity/network-node-identity-48hk7","openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk","openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9","openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-f2kg4","openshift-ovn-kubernetes/ovnkube-node-nhrpd","assisted-installer/assisted-installer-controller-jffs8","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9","openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl","openshift-etcd/etcd-master-0-master-0","openshift-ingress-operator/ingress-operator-677db989d6-qpf68","openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt","openshift-multus/multus-additional-cni-plugins-trlxw","openshift-multus/multus-gnmmm","openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8","openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t","kube-system/bootstrap-kube-controller-manager-master-0","openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx","openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-kf949","openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-vp2hs"]
Mar 12 20:49:48.663443 master-0 kubenswrapper[7484]: I0312 20:49:48.663330 7484 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-jffs8"
Mar 12 20:49:48.663443 master-0 kubenswrapper[7484]: I0312 20:49:48.663390 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw"
Mar 12 20:49:48.664587 master-0 kubenswrapper[7484]: I0312 20:49:48.663483 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt"
Mar 12 20:49:48.664587 master-0 kubenswrapper[7484]: I0312 20:49:48.663573 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4"
Mar 12 20:49:48.664850 master-0 kubenswrapper[7484]: I0312 20:49:48.664720 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-589895fbb7-tvrxp"
Mar 12 20:49:48.665213 master-0 kubenswrapper[7484]: I0312 20:49:48.665184 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9"
Mar 12 20:49:48.665619 master-0 kubenswrapper[7484]: I0312 20:49:48.665288 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8"
Mar 12 20:49:48.665619 master-0 kubenswrapper[7484]: I0312 20:49:48.665343 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk"
Mar 12 20:49:48.666363 master-0 kubenswrapper[7484]: I0312 20:49:48.666008 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Mar 12 20:49:48.666720 master-0 kubenswrapper[7484]: I0312 20:49:48.666658 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5"
Mar 12 20:49:48.667303 master-0 kubenswrapper[7484]: I0312 20:49:48.667267 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl"
Mar 12 20:49:48.667972 master-0 kubenswrapper[7484]: I0312 20:49:48.667572 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68"
Mar 12 20:49:48.667972 master-0 kubenswrapper[7484]: I0312 20:49:48.667760 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Mar 12 20:49:48.668837 master-0 kubenswrapper[7484]: I0312 20:49:48.668567 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 12 20:49:48.688509 master-0 kubenswrapper[7484]: I0312 20:49:48.688437 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-brdcd"
Mar 12 20:49:48.689432 master-0 kubenswrapper[7484]: I0312 20:49:48.689381 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Mar 12 20:49:48.689635 master-0 kubenswrapper[7484]: I0312 20:49:48.689606 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 12 20:49:48.689717 master-0 kubenswrapper[7484]: I0312 20:49:48.689678 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h26wj"
Mar 12 20:49:48.690034 master-0 kubenswrapper[7484]: I0312 20:49:48.689996 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-98j9w"
Mar 12 20:49:48.690336 master-0 kubenswrapper[7484]: I0312 20:49:48.690295 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 12 20:49:48.701713 master-0 kubenswrapper[7484]: I0312 20:49:48.701399 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 12 20:49:48.701713 master-0 kubenswrapper[7484]: I0312 20:49:48.701578 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Mar 12 20:49:48.701713 master-0 kubenswrapper[7484]: I0312 20:49:48.701408 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 12 20:49:48.701713 master-0 kubenswrapper[7484]: I0312 20:49:48.701644 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Mar 12 20:49:48.701713 master-0 kubenswrapper[7484]: I0312 20:49:48.701626 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Mar 12 20:49:48.701936 master-0 kubenswrapper[7484]: I0312 20:49:48.701911 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 12 20:49:48.702911 master-0 kubenswrapper[7484]: I0312 20:49:48.702697 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 20:49:48.702911 master-0 kubenswrapper[7484]: I0312 20:49:48.702930 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 12 20:49:48.703266 master-0 kubenswrapper[7484]: I0312 20:49:48.702951 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Mar 12 20:49:48.703266 master-0 kubenswrapper[7484]: I0312 20:49:48.703061 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Mar 12 20:49:48.703266 master-0 kubenswrapper[7484]: I0312 20:49:48.703072 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 12 20:49:48.703602 master-0 kubenswrapper[7484]: I0312 20:49:48.703571 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 12 20:49:48.703703 master-0 kubenswrapper[7484]: I0312 20:49:48.703659 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 12 20:49:48.703756 master-0 kubenswrapper[7484]: I0312 20:49:48.703685 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Mar 12 20:49:48.703891 master-0 kubenswrapper[7484]: I0312 20:49:48.703828 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 12 20:49:48.703891 master-0 kubenswrapper[7484]: I0312 20:49:48.703851 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Mar 12 20:49:48.703891 master-0 kubenswrapper[7484]: I0312 20:49:48.703866 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 12 20:49:48.704018 master-0 kubenswrapper[7484]: I0312 20:49:48.703968 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Mar 12 20:49:48.704018 master-0 kubenswrapper[7484]: I0312 20:49:48.703984 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 12 20:49:48.704018 master-0 kubenswrapper[7484]: I0312 20:49:48.704016 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 12 20:49:48.704133 master-0 kubenswrapper[7484]: I0312 20:49:48.704076 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 12 20:49:48.704179 master-0 kubenswrapper[7484]: I0312 20:49:48.704150 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 12 20:49:48.704313 master-0 kubenswrapper[7484]: I0312 20:49:48.704287 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Mar 12 20:49:48.704313 master-0 kubenswrapper[7484]: I0312 20:49:48.704307 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 12 20:49:48.704410 master-0 kubenswrapper[7484]: I0312 20:49:48.704387 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 12 20:49:48.704483 master-0 kubenswrapper[7484]: I0312 20:49:48.704458 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 12 20:49:48.704537 master-0 kubenswrapper[7484]: I0312 20:49:48.704512 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 12 20:49:48.704613 master-0 kubenswrapper[7484]: I0312 20:49:48.704589 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 12 20:49:48.704684 master-0 kubenswrapper[7484]: I0312 20:49:48.704662 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 12 20:49:48.704736 master-0 kubenswrapper[7484]: I0312 20:49:48.704704 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 12 20:49:48.704963 master-0 kubenswrapper[7484]: I0312 20:49:48.704929 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 12 20:49:48.705064 master-0 kubenswrapper[7484]: I0312 20:49:48.705030 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 12 20:49:48.705124 master-0 kubenswrapper[7484]: I0312 20:49:48.705112 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 12 20:49:48.705170 master-0 kubenswrapper[7484]: I0312 20:49:48.705152 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 12 20:49:48.705272 master-0 kubenswrapper[7484]: I0312 20:49:48.705241 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 12 20:49:48.705382 master-0 kubenswrapper[7484]: I0312 20:49:48.705309 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Mar 12 20:49:48.705382 master-0 kubenswrapper[7484]: I0312 20:49:48.705345 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 12 20:49:48.705617 master-0 kubenswrapper[7484]: I0312 20:49:48.705399 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Mar 12 20:49:48.705617 master-0 kubenswrapper[7484]: I0312 20:49:48.705448 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 12 20:49:48.705617 master-0 kubenswrapper[7484]: I0312 20:49:48.705527 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 12 20:49:48.705617 master-0 kubenswrapper[7484]: I0312 20:49:48.705563 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 12 20:49:48.705617 master-0 kubenswrapper[7484]: I0312 20:49:48.705606 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 12 20:49:48.705617 master-0 kubenswrapper[7484]: I0312 20:49:48.703980 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 12 20:49:48.705974 master-0 kubenswrapper[7484]: I0312 20:49:48.705639 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 12 20:49:48.705974 master-0 kubenswrapper[7484]: I0312 20:49:48.705774 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 12 20:49:48.705974 master-0 kubenswrapper[7484]: I0312 20:49:48.705786 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 12 20:49:48.705974 master-0 kubenswrapper[7484]: I0312 20:49:48.705798 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 12 20:49:48.705974 master-0 kubenswrapper[7484]: I0312 20:49:48.705857 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 12 20:49:48.705974 master-0 kubenswrapper[7484]: I0312 20:49:48.705925 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 12 20:49:48.706292 master-0 kubenswrapper[7484]: I0312 20:49:48.706026 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 12 20:49:48.706292 master-0 kubenswrapper[7484]: I0312 20:49:48.706189 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 12 20:49:48.706292 master-0 kubenswrapper[7484]: I0312 20:49:48.706193 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 12 20:49:48.706292 master-0 kubenswrapper[7484]: I0312 20:49:48.706243 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 12 20:49:48.706292 master-0 kubenswrapper[7484]: I0312 20:49:48.706296 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 12 20:49:48.706468 master-0 kubenswrapper[7484]: I0312 20:49:48.706444 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 12 20:49:48.706982 master-0 kubenswrapper[7484]: I0312 20:49:48.706946 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 12 20:49:48.707072 master-0 kubenswrapper[7484]: I0312 20:49:48.707041 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 12 20:49:48.708618 master-0 kubenswrapper[7484]: I0312 20:49:48.707170 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 12 20:49:48.708618 master-0 kubenswrapper[7484]: I0312 20:49:48.707270 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 12 20:49:48.708618 master-0 kubenswrapper[7484]: I0312 20:49:48.707771 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 12 20:49:48.711891 master-0 kubenswrapper[7484]: I0312 20:49:48.709535 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 12 20:49:48.711891 master-0 kubenswrapper[7484]: I0312 20:49:48.709553 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 12 20:49:48.711891 master-0 kubenswrapper[7484]: I0312 20:49:48.711008 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 12 20:49:48.711891 master-0 kubenswrapper[7484]: I0312 20:49:48.711156 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 12 20:49:48.711891 master-0 kubenswrapper[7484]: I0312 20:49:48.711237 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 12 20:49:48.711891 master-0 kubenswrapper[7484]: I0312 20:49:48.711312 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 12 20:49:48.711891 master-0 kubenswrapper[7484]: I0312 20:49:48.711576 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 12 20:49:48.711891 master-0 kubenswrapper[7484]: I0312 20:49:48.711541 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 12 20:49:48.712224 master-0 kubenswrapper[7484]: I0312 20:49:48.712016 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 20:49:48.713591 master-0 kubenswrapper[7484]: I0312 20:49:48.713543 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 12 20:49:48.714452 master-0 kubenswrapper[7484]: I0312 20:49:48.714422 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Mar 12 20:49:48.714602 master-0 kubenswrapper[7484]: I0312 20:49:48.714578 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 12 20:49:48.715318 master-0 kubenswrapper[7484]: I0312 20:49:48.715292 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 12 20:49:48.715535 master-0 kubenswrapper[7484]: I0312 20:49:48.715495 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 12 20:49:48.716217 master-0 kubenswrapper[7484]: I0312 20:49:48.716162 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 12 20:49:48.717050 master-0 kubenswrapper[7484]: I0312 20:49:48.717012 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 12 20:49:48.717260 master-0 kubenswrapper[7484]: I0312 20:49:48.717219 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 12 20:49:48.717512 master-0 kubenswrapper[7484]: I0312 20:49:48.717471 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 12 20:49:48.717634 master-0 kubenswrapper[7484]: I0312 20:49:48.717603 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 12 20:49:48.717634 master-0 kubenswrapper[7484]: I0312 20:49:48.717619 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 12 20:49:48.717722 master-0 kubenswrapper[7484]: I0312 20:49:48.717663 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 12 20:49:48.717722 master-0 kubenswrapper[7484]: I0312 20:49:48.717679 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 12 20:49:48.717899 master-0 kubenswrapper[7484]: I0312 20:49:48.717862 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 12
20:49:48.717899 master-0 kubenswrapper[7484]: I0312 20:49:48.717887 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Mar 12 20:49:48.718528 master-0 kubenswrapper[7484]: I0312 20:49:48.718493 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 12 20:49:48.719123 master-0 kubenswrapper[7484]: I0312 20:49:48.719090 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 12 20:49:48.719409 master-0 kubenswrapper[7484]: I0312 20:49:48.719343 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 12 20:49:48.719556 master-0 kubenswrapper[7484]: I0312 20:49:48.719463 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 12 20:49:48.719645 master-0 kubenswrapper[7484]: I0312 20:49:48.719610 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 12 20:49:48.719973 master-0 kubenswrapper[7484]: I0312 20:49:48.719912 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 12 20:49:48.724549 master-0 kubenswrapper[7484]: I0312 20:49:48.723517 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 12 20:49:48.725634 master-0 kubenswrapper[7484]: I0312 20:49:48.725610 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 12 20:49:48.729519 master-0 kubenswrapper[7484]: I0312 20:49:48.729477 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 12 20:49:48.729573 master-0 
kubenswrapper[7484]: I0312 20:49:48.729536 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 12 20:49:48.732608 master-0 kubenswrapper[7484]: I0312 20:49:48.732574 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 12 20:49:48.736135 master-0 kubenswrapper[7484]: I0312 20:49:48.736093 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 12 20:49:48.737693 master-0 kubenswrapper[7484]: I0312 20:49:48.737668 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 12 20:49:48.739197 master-0 kubenswrapper[7484]: I0312 20:49:48.739166 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 12 20:49:48.750265 master-0 kubenswrapper[7484]: I0312 20:49:48.750239 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 12 20:49:48.764551 master-0 kubenswrapper[7484]: I0312 20:49:48.764519 7484 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Mar 12 20:49:48.771494 master-0 kubenswrapper[7484]: I0312 20:49:48.771453 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 12 20:49:48.802318 master-0 kubenswrapper[7484]: I0312 20:49:48.802261 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5471994f-769e-4124-b7d0-01f5358fc18f-serving-cert\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" Mar 12 20:49:48.802512 master-0 kubenswrapper[7484]: I0312 
20:49:48.802327 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/980191fe-c62c-4b9e-879c-38fa8ce0a58b-available-featuregates\") pod \"openshift-config-operator-64488f9d78-zsd76\" (UID: \"980191fe-c62c-4b9e-879c-38fa8ce0a58b\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" Mar 12 20:49:48.802512 master-0 kubenswrapper[7484]: I0312 20:49:48.802370 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:49:48.802651 master-0 kubenswrapper[7484]: I0312 20:49:48.802577 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-run-multus-certs\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:49:48.802732 master-0 kubenswrapper[7484]: I0312 20:49:48.802681 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wt5q\" (UniqueName: \"kubernetes.io/projected/980191fe-c62c-4b9e-879c-38fa8ce0a58b-kube-api-access-2wt5q\") pod \"openshift-config-operator-64488f9d78-zsd76\" (UID: \"980191fe-c62c-4b9e-879c-38fa8ce0a58b\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" Mar 12 20:49:48.802732 master-0 kubenswrapper[7484]: I0312 20:49:48.802711 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-cni-sysctl-allowlist\") pod 
\"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:49:48.802849 master-0 kubenswrapper[7484]: I0312 20:49:48.802743 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" Mar 12 20:49:48.802849 master-0 kubenswrapper[7484]: I0312 20:49:48.802774 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-run-openvswitch\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.802948 master-0 kubenswrapper[7484]: I0312 20:49:48.802881 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlch7\" (UniqueName: \"kubernetes.io/projected/c8660437-633f-4132-8a61-fe998abb493e-kube-api-access-zlch7\") pod \"network-metrics-daemon-brdcd\" (UID: \"c8660437-633f-4132-8a61-fe998abb493e\") " pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 20:49:48.802948 master-0 kubenswrapper[7484]: I0312 20:49:48.802921 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" Mar 12 20:49:48.803042 master-0 kubenswrapper[7484]: I0312 
20:49:48.803009 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/980191fe-c62c-4b9e-879c-38fa8ce0a58b-available-featuregates\") pod \"openshift-config-operator-64488f9d78-zsd76\" (UID: \"980191fe-c62c-4b9e-879c-38fa8ce0a58b\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" Mar 12 20:49:48.803165 master-0 kubenswrapper[7484]: I0312 20:49:48.803106 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5471994f-769e-4124-b7d0-01f5358fc18f-serving-cert\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" Mar 12 20:49:48.803263 master-0 kubenswrapper[7484]: I0312 20:49:48.803219 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6-host-etc-kube\") pod \"network-operator-7c649bf6d4-62t2f\" (UID: \"fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6\") " pod="openshift-network-operator/network-operator-7c649bf6d4-62t2f" Mar 12 20:49:48.803325 master-0 kubenswrapper[7484]: I0312 20:49:48.803294 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/617f0f9c-50d5-4214-b30f-5110fd4399ec-iptables-alerter-script\") pod \"iptables-alerter-krpjj\" (UID: \"617f0f9c-50d5-4214-b30f-5110fd4399ec\") " pod="openshift-network-operator/iptables-alerter-krpjj" Mar 12 20:49:48.803375 master-0 kubenswrapper[7484]: I0312 20:49:48.803333 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-hostroot\") pod \"multus-gnmmm\" (UID: 
\"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:49:48.803375 master-0 kubenswrapper[7484]: I0312 20:49:48.803363 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-269gt\" (UID: \"4a67ecf3-823d-4948-a5cb-8bd1eb9f259c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-269gt" Mar 12 20:49:48.803456 master-0 kubenswrapper[7484]: I0312 20:49:48.803389 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07542516-49c8-4e20-9b97-798fbff850a5-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-qfbrj\" (UID: \"07542516-49c8-4e20-9b97-798fbff850a5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-qfbrj" Mar 12 20:49:48.803456 master-0 kubenswrapper[7484]: I0312 20:49:48.803416 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15ebfbd8-0782-431a-88a3-83af328498d2-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-jwthf\" (UID: \"15ebfbd8-0782-431a-88a3-83af328498d2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-jwthf" Mar 12 20:49:48.803456 master-0 kubenswrapper[7484]: I0312 20:49:48.803444 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w68c\" (UniqueName: \"kubernetes.io/projected/a3bebf49-1d92-4353-b84c-91ed86b7bb94-kube-api-access-2w68c\") pod \"authentication-operator-7c6989d6c4-9j7rx\" (UID: \"a3bebf49-1d92-4353-b84c-91ed86b7bb94\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" Mar 12 20:49:48.803578 master-0 
kubenswrapper[7484]: I0312 20:49:48.803468 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/226cb3a1-984f-4410-96e6-c007131dc074-operand-assets\") pod \"cluster-olm-operator-77899cf6d-kbwlh\" (UID: \"226cb3a1-984f-4410-96e6-c007131dc074\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh" Mar 12 20:49:48.803578 master-0 kubenswrapper[7484]: I0312 20:49:48.803495 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-cni-netd\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.803578 master-0 kubenswrapper[7484]: I0312 20:49:48.803526 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q78vj\" (UniqueName: \"kubernetes.io/projected/7623a5c6-47a9-4b75-bda8-c0a2d7c67272-kube-api-access-q78vj\") pod \"openshift-controller-manager-operator-8565d84698-vp2hs\" (UID: \"7623a5c6-47a9-4b75-bda8-c0a2d7c67272\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-vp2hs" Mar 12 20:49:48.803578 master-0 kubenswrapper[7484]: I0312 20:49:48.803551 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j7lq\" (UniqueName: \"kubernetes.io/projected/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-kube-api-access-6j7lq\") pod \"dns-operator-589895fbb7-tvrxp\" (UID: \"855747e5-d9b4-4eef-8bc4-425d6a8e95c7\") " pod="openshift-dns-operator/dns-operator-589895fbb7-tvrxp" Mar 12 20:49:48.803578 master-0 kubenswrapper[7484]: I0312 20:49:48.803549 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:49:48.803578 master-0 kubenswrapper[7484]: I0312 20:49:48.803576 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b71f537-1cc2-4645-8e50-23941635457c-trusted-ca\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" Mar 12 20:49:48.803796 master-0 kubenswrapper[7484]: I0312 20:49:48.803601 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/426efd5c-69e1-43e5-835a-6e1c4ef85720-env-overrides\") pod \"network-node-identity-48hk7\" (UID: \"426efd5c-69e1-43e5-835a-6e1c4ef85720\") " pod="openshift-network-node-identity/network-node-identity-48hk7" Mar 12 20:49:48.803796 master-0 kubenswrapper[7484]: I0312 20:49:48.803626 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-system-cni-dir\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:49:48.803796 master-0 kubenswrapper[7484]: I0312 20:49:48.803650 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls\") pod \"dns-operator-589895fbb7-tvrxp\" (UID: \"855747e5-d9b4-4eef-8bc4-425d6a8e95c7\") " pod="openshift-dns-operator/dns-operator-589895fbb7-tvrxp" Mar 12 20:49:48.803796 master-0 kubenswrapper[7484]: I0312 20:49:48.803679 7484 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/784599a3-a2ac-46ac-a4b7-9439704646cc-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-56nzk\" (UID: \"784599a3-a2ac-46ac-a4b7-9439704646cc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-56nzk" Mar 12 20:49:48.803796 master-0 kubenswrapper[7484]: I0312 20:49:48.803706 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrk7w\" (UniqueName: \"kubernetes.io/projected/c3daeefa-7842-464c-a6c9-01b44ebea477-kube-api-access-jrk7w\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.803796 master-0 kubenswrapper[7484]: I0312 20:49:48.803736 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vvf6\" (UniqueName: \"kubernetes.io/projected/2b71f537-1cc2-4645-8e50-23941635457c-kube-api-access-8vvf6\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" Mar 12 20:49:48.803796 master-0 kubenswrapper[7484]: I0312 20:49:48.803762 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-etc-kubernetes\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:49:48.803796 master-0 kubenswrapper[7484]: I0312 20:49:48.803786 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-node-log\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.804174 master-0 kubenswrapper[7484]: I0312 20:49:48.803838 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5471994f-769e-4124-b7d0-01f5358fc18f-etcd-ca\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" Mar 12 20:49:48.804174 master-0 kubenswrapper[7484]: I0312 20:49:48.803870 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5471994f-769e-4124-b7d0-01f5358fc18f-etcd-client\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" Mar 12 20:49:48.804174 master-0 kubenswrapper[7484]: I0312 20:49:48.803895 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfsvw\" (UniqueName: \"kubernetes.io/projected/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-kube-api-access-mfsvw\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:49:48.804174 master-0 kubenswrapper[7484]: I0312 20:49:48.803919 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/784599a3-a2ac-46ac-a4b7-9439704646cc-serving-cert\") pod \"kube-apiserver-operator-68bd585b-56nzk\" (UID: \"784599a3-a2ac-46ac-a4b7-9439704646cc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-56nzk" Mar 12 20:49:48.804174 master-0 kubenswrapper[7484]: I0312 20:49:48.803916 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/226cb3a1-984f-4410-96e6-c007131dc074-operand-assets\") pod \"cluster-olm-operator-77899cf6d-kbwlh\" 
(UID: \"226cb3a1-984f-4410-96e6-c007131dc074\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh" Mar 12 20:49:48.804174 master-0 kubenswrapper[7484]: I0312 20:49:48.803966 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/617f0f9c-50d5-4214-b30f-5110fd4399ec-iptables-alerter-script\") pod \"iptables-alerter-krpjj\" (UID: \"617f0f9c-50d5-4214-b30f-5110fd4399ec\") " pod="openshift-network-operator/iptables-alerter-krpjj" Mar 12 20:49:48.804174 master-0 kubenswrapper[7484]: I0312 20:49:48.804020 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15ebfbd8-0782-431a-88a3-83af328498d2-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-jwthf\" (UID: \"15ebfbd8-0782-431a-88a3-83af328498d2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-jwthf" Mar 12 20:49:48.804375 master-0 kubenswrapper[7484]: I0312 20:49:48.804181 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" Mar 12 20:49:48.804483 master-0 kubenswrapper[7484]: I0312 20:49:48.804435 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07542516-49c8-4e20-9b97-798fbff850a5-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-qfbrj\" (UID: \"07542516-49c8-4e20-9b97-798fbff850a5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-qfbrj" Mar 12 20:49:48.804526 master-0 kubenswrapper[7484]: I0312 20:49:48.804458 7484 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5471994f-769e-4124-b7d0-01f5358fc18f-etcd-client\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" Mar 12 20:49:48.804663 master-0 kubenswrapper[7484]: I0312 20:49:48.804638 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-kubelet\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.804701 master-0 kubenswrapper[7484]: I0312 20:49:48.804673 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-run-systemd\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.804919 master-0 kubenswrapper[7484]: I0312 20:49:48.804890 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07542516-49c8-4e20-9b97-798fbff850a5-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-qfbrj\" (UID: \"07542516-49c8-4e20-9b97-798fbff850a5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-qfbrj" Mar 12 20:49:48.805044 master-0 kubenswrapper[7484]: I0312 20:49:48.804978 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5471994f-769e-4124-b7d0-01f5358fc18f-etcd-ca\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " 
pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" Mar 12 20:49:48.805102 master-0 kubenswrapper[7484]: I0312 20:49:48.805016 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1a307172-f010-4bad-a3fc-31607574b069-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl" Mar 12 20:49:48.805102 master-0 kubenswrapper[7484]: I0312 20:49:48.805021 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b71f537-1cc2-4645-8e50-23941635457c-trusted-ca\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" Mar 12 20:49:48.805102 master-0 kubenswrapper[7484]: I0312 20:49:48.805097 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/426efd5c-69e1-43e5-835a-6e1c4ef85720-env-overrides\") pod \"network-node-identity-48hk7\" (UID: \"426efd5c-69e1-43e5-835a-6e1c4ef85720\") " pod="openshift-network-node-identity/network-node-identity-48hk7" Mar 12 20:49:48.805210 master-0 kubenswrapper[7484]: I0312 20:49:48.805065 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/784599a3-a2ac-46ac-a4b7-9439704646cc-serving-cert\") pod \"kube-apiserver-operator-68bd585b-56nzk\" (UID: \"784599a3-a2ac-46ac-a4b7-9439704646cc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-56nzk" Mar 12 20:49:48.805210 master-0 kubenswrapper[7484]: I0312 20:49:48.805128 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-cni-binary-copy\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:49:48.805210 master-0 kubenswrapper[7484]: I0312 20:49:48.805166 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lltk\" (UniqueName: \"kubernetes.io/projected/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-kube-api-access-2lltk\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" Mar 12 20:49:48.805340 master-0 kubenswrapper[7484]: I0312 20:49:48.805273 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1a307172-f010-4bad-a3fc-31607574b069-service-ca\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl" Mar 12 20:49:48.805383 master-0 kubenswrapper[7484]: I0312 20:49:48.805361 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csxwl\" (UniqueName: \"kubernetes.io/projected/5ad63582-bd60-41a1-9622-ee73ccf8a5e8-kube-api-access-csxwl\") pod \"network-check-target-h26wj\" (UID: \"5ad63582-bd60-41a1-9622-ee73ccf8a5e8\") " pod="openshift-network-diagnostics/network-check-target-h26wj" Mar 12 20:49:48.805474 master-0 kubenswrapper[7484]: I0312 20:49:48.805433 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-cni-binary-copy\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " 
pod="openshift-multus/multus-additional-cni-plugins-trlxw"
Mar 12 20:49:48.805474 master-0 kubenswrapper[7484]: I0312 20:49:48.805464 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhcsd\" (UniqueName: \"kubernetes.io/projected/07330030-487d-4fa6-b5c3-67607355bbba-kube-api-access-bhcsd\") pod \"olm-operator-d64cfc9db-q9hnk\" (UID: \"07330030-487d-4fa6-b5c3-67607355bbba\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk"
Mar 12 20:49:48.805568 master-0 kubenswrapper[7484]: I0312 20:49:48.805468 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07542516-49c8-4e20-9b97-798fbff850a5-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-qfbrj\" (UID: \"07542516-49c8-4e20-9b97-798fbff850a5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-qfbrj"
Mar 12 20:49:48.805613 master-0 kubenswrapper[7484]: I0312 20:49:48.805569 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-258hz\" (UniqueName: \"kubernetes.io/projected/98d99166-c42a-4169-87e8-4209570aec50-kube-api-access-258hz\") pod \"catalog-operator-7d9c49f57b-tpvl4\" (UID: \"98d99166-c42a-4169-87e8-4209570aec50\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4"
Mar 12 20:49:48.805654 master-0 kubenswrapper[7484]: I0312 20:49:48.805610 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-cnibin\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw"
Mar 12 20:49:48.805755 master-0 kubenswrapper[7484]: I0312 20:49:48.805708 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1a307172-f010-4bad-a3fc-31607574b069-service-ca\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl"
Mar 12 20:49:48.805834 master-0 kubenswrapper[7484]: I0312 20:49:48.805745 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/96bd86df-2101-47f5-844b-1332261c66f1-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-f2kg4\" (UID: \"96bd86df-2101-47f5-844b-1332261c66f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-f2kg4"
Mar 12 20:49:48.805912 master-0 kubenswrapper[7484]: I0312 20:49:48.805868 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d862a346-ec4d-46f6-a3e2-ea8759ea0111-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-vq95t\" (UID: \"d862a346-ec4d-46f6-a3e2-ea8759ea0111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t"
Mar 12 20:49:48.805968 master-0 kubenswrapper[7484]: I0312 20:49:48.805936 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5v9f\" (UniqueName: \"kubernetes.io/projected/02649264-040a-41a6-9a41-8bf6416c68ff-kube-api-access-k5v9f\") pod \"cluster-monitoring-operator-674cbfbd9d-j9tpt\" (UID: \"02649264-040a-41a6-9a41-8bf6416c68ff\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt"
Mar 12 20:49:48.806009 master-0 kubenswrapper[7484]: I0312 20:49:48.805992 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbbc5\" (UniqueName: \"kubernetes.io/projected/15ebfbd8-0782-431a-88a3-83af328498d2-kube-api-access-mbbc5\") pod \"openshift-apiserver-operator-799b6db4d7-jwthf\" (UID: \"15ebfbd8-0782-431a-88a3-83af328498d2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-jwthf"
Mar 12 20:49:48.806074 master-0 kubenswrapper[7484]: I0312 20:49:48.806057 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d862a346-ec4d-46f6-a3e2-ea8759ea0111-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-vq95t\" (UID: \"d862a346-ec4d-46f6-a3e2-ea8759ea0111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t"
Mar 12 20:49:48.806188 master-0 kubenswrapper[7484]: I0312 20:49:48.806161 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-systemd-units\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:48.806188 master-0 kubenswrapper[7484]: I0312 20:49:48.806187 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-cni-bin\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:48.806277 master-0 kubenswrapper[7484]: I0312 20:49:48.806212 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7623a5c6-47a9-4b75-bda8-c0a2d7c67272-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-vp2hs\" (UID: \"7623a5c6-47a9-4b75-bda8-c0a2d7c67272\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-vp2hs"
Mar 12 20:49:48.806277 master-0 kubenswrapper[7484]: I0312 20:49:48.806234 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzwrw\" (UniqueName: \"kubernetes.io/projected/54184647-6e9a-43f7-90b1-5d8815f8b1ab-kube-api-access-kzwrw\") pod \"package-server-manager-854648ff6d-cdcc8\" (UID: \"54184647-6e9a-43f7-90b1-5d8815f8b1ab\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8"
Mar 12 20:49:48.806277 master-0 kubenswrapper[7484]: I0312 20:49:48.806255 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjh5f\" (UniqueName: \"kubernetes.io/projected/f8f4400c-474c-480f-b46c-cf7c80555004-kube-api-access-vjh5f\") pod \"multus-admission-controller-8d675b596-98j9w\" (UID: \"f8f4400c-474c-480f-b46c-cf7c80555004\") " pod="openshift-multus/multus-admission-controller-8d675b596-98j9w"
Mar 12 20:49:48.806277 master-0 kubenswrapper[7484]: I0312 20:49:48.806275 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-system-cni-dir\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw"
Mar 12 20:49:48.806277 master-0 kubenswrapper[7484]: I0312 20:49:48.806294 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-log-socket\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:48.806566 master-0 kubenswrapper[7484]: I0312 20:49:48.806336 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/426efd5c-69e1-43e5-835a-6e1c4ef85720-ovnkube-identity-cm\") pod \"network-node-identity-48hk7\" (UID: \"426efd5c-69e1-43e5-835a-6e1c4ef85720\") " pod="openshift-network-node-identity/network-node-identity-48hk7"
Mar 12 20:49:48.806566 master-0 kubenswrapper[7484]: I0312 20:49:48.806431 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-hxqgw\" (UID: \"e624e623-6d59-444d-b548-165fa5fd2581\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw"
Mar 12 20:49:48.806566 master-0 kubenswrapper[7484]: I0312 20:49:48.806466 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-var-lib-cni-bin\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm"
Mar 12 20:49:48.806566 master-0 kubenswrapper[7484]: I0312 20:49:48.806488 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-cdcc8\" (UID: \"54184647-6e9a-43f7-90b1-5d8815f8b1ab\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8"
Mar 12 20:49:48.806566 master-0 kubenswrapper[7484]: I0312 20:49:48.806497 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7623a5c6-47a9-4b75-bda8-c0a2d7c67272-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-vp2hs\" (UID: \"7623a5c6-47a9-4b75-bda8-c0a2d7c67272\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-vp2hs"
Mar 12 20:49:48.806566 master-0 kubenswrapper[7484]: I0312 20:49:48.806517 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-multus-daemon-config\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm"
Mar 12 20:49:48.806566 master-0 kubenswrapper[7484]: I0312 20:49:48.806520 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/426efd5c-69e1-43e5-835a-6e1c4ef85720-ovnkube-identity-cm\") pod \"network-node-identity-48hk7\" (UID: \"426efd5c-69e1-43e5-835a-6e1c4ef85720\") " pod="openshift-network-node-identity/network-node-identity-48hk7"
Mar 12 20:49:48.806566 master-0 kubenswrapper[7484]: I0312 20:49:48.806554 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c3daeefa-7842-464c-a6c9-01b44ebea477-env-overrides\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:48.807055 master-0 kubenswrapper[7484]: I0312 20:49:48.806616 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7rrv\" (UniqueName: \"kubernetes.io/projected/5471994f-769e-4124-b7d0-01f5358fc18f-kube-api-access-f7rrv\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9"
Mar 12 20:49:48.807055 master-0 kubenswrapper[7484]: I0312 20:49:48.806669 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c3daeefa-7842-464c-a6c9-01b44ebea477-env-overrides\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:48.807055 master-0 kubenswrapper[7484]: I0312 20:49:48.806677 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3bebf49-1d92-4353-b84c-91ed86b7bb94-config\") pod \"authentication-operator-7c6989d6c4-9j7rx\" (UID: \"a3bebf49-1d92-4353-b84c-91ed86b7bb94\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx"
Mar 12 20:49:48.807055 master-0 kubenswrapper[7484]: I0312 20:49:48.806725 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/900228dd-2d21-4759-87da-b027b0134ad8-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5"
Mar 12 20:49:48.807055 master-0 kubenswrapper[7484]: I0312 20:49:48.806834 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c3daeefa-7842-464c-a6c9-01b44ebea477-ovnkube-config\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:48.807055 master-0 kubenswrapper[7484]: I0312 20:49:48.806867 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5471994f-769e-4124-b7d0-01f5358fc18f-config\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9"
Mar 12 20:49:48.807055 master-0 kubenswrapper[7484]: I0312 20:49:48.806888 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/617f0f9c-50d5-4214-b30f-5110fd4399ec-host-slash\") pod \"iptables-alerter-krpjj\" (UID: \"617f0f9c-50d5-4214-b30f-5110fd4399ec\") " pod="openshift-network-operator/iptables-alerter-krpjj"
Mar 12 20:49:48.807055 master-0 kubenswrapper[7484]: I0312 20:49:48.806908 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-cni-binary-copy\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm"
Mar 12 20:49:48.807055 master-0 kubenswrapper[7484]: I0312 20:49:48.806929 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9xld\" (UniqueName: \"kubernetes.io/projected/07542516-49c8-4e20-9b97-798fbff850a5-kube-api-access-z9xld\") pod \"kube-storage-version-migrator-operator-7f65c457f5-qfbrj\" (UID: \"07542516-49c8-4e20-9b97-798fbff850a5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-qfbrj"
Mar 12 20:49:48.807055 master-0 kubenswrapper[7484]: I0312 20:49:48.806949 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3bebf49-1d92-4353-b84c-91ed86b7bb94-serving-cert\") pod \"authentication-operator-7c6989d6c4-9j7rx\" (UID: \"a3bebf49-1d92-4353-b84c-91ed86b7bb94\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx"
Mar 12 20:49:48.807684 master-0 kubenswrapper[7484]: I0312 20:49:48.807083 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-multus-daemon-config\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm"
Mar 12 20:49:48.807684 master-0 kubenswrapper[7484]: I0312 20:49:48.807086 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3bebf49-1d92-4353-b84c-91ed86b7bb94-config\") pod \"authentication-operator-7c6989d6c4-9j7rx\" (UID: \"a3bebf49-1d92-4353-b84c-91ed86b7bb94\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx"
Mar 12 20:49:48.807684 master-0 kubenswrapper[7484]: I0312 20:49:48.807118 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3bebf49-1d92-4353-b84c-91ed86b7bb94-serving-cert\") pod \"authentication-operator-7c6989d6c4-9j7rx\" (UID: \"a3bebf49-1d92-4353-b84c-91ed86b7bb94\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx"
Mar 12 20:49:48.807684 master-0 kubenswrapper[7484]: I0312 20:49:48.807182 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c3daeefa-7842-464c-a6c9-01b44ebea477-ovnkube-config\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:48.807684 master-0 kubenswrapper[7484]: I0312 20:49:48.807221 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-run-netns\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:48.807684 master-0 kubenswrapper[7484]: I0312 20:49:48.807255 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-cnibin\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm"
Mar 12 20:49:48.807684 master-0 kubenswrapper[7484]: I0312 20:49:48.807281 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d-config\") pod \"service-ca-operator-69b6fc6b88-f62j6\" (UID: \"a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-f62j6"
Mar 12 20:49:48.807684 master-0 kubenswrapper[7484]: I0312 20:49:48.807302 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/900228dd-2d21-4759-87da-b027b0134ad8-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5"
Mar 12 20:49:48.807684 master-0 kubenswrapper[7484]: I0312 20:49:48.807321 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/980191fe-c62c-4b9e-879c-38fa8ce0a58b-serving-cert\") pod \"openshift-config-operator-64488f9d78-zsd76\" (UID: \"980191fe-c62c-4b9e-879c-38fa8ce0a58b\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76"
Mar 12 20:49:48.807684 master-0 kubenswrapper[7484]: I0312 20:49:48.807342 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5c6t\" (UniqueName: \"kubernetes.io/projected/e624e623-6d59-444d-b548-165fa5fd2581-kube-api-access-c5c6t\") pod \"marketplace-operator-64bf9778cb-hxqgw\" (UID: \"e624e623-6d59-444d-b548-165fa5fd2581\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw"
Mar 12 20:49:48.807684 master-0 kubenswrapper[7484]: I0312 20:49:48.807364 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-multus-socket-dir-parent\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm"
Mar 12 20:49:48.807684 master-0 kubenswrapper[7484]: I0312 20:49:48.807381 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-multus-conf-dir\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm"
Mar 12 20:49:48.807684 master-0 kubenswrapper[7484]: I0312 20:49:48.807376 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5471994f-769e-4124-b7d0-01f5358fc18f-config\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9"
Mar 12 20:49:48.807684 master-0 kubenswrapper[7484]: I0312 20:49:48.807401 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15ebfbd8-0782-431a-88a3-83af328498d2-config\") pod \"openshift-apiserver-operator-799b6db4d7-jwthf\" (UID: \"15ebfbd8-0782-431a-88a3-83af328498d2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-jwthf"
Mar 12 20:49:48.807684 master-0 kubenswrapper[7484]: I0312 20:49:48.807483 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-cni-binary-copy\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm"
Mar 12 20:49:48.807684 master-0 kubenswrapper[7484]: I0312 20:49:48.807524 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9z6l\" (UniqueName: \"kubernetes.io/projected/226cb3a1-984f-4410-96e6-c007131dc074-kube-api-access-b9z6l\") pod \"cluster-olm-operator-77899cf6d-kbwlh\" (UID: \"226cb3a1-984f-4410-96e6-c007131dc074\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh"
Mar 12 20:49:48.807684 master-0 kubenswrapper[7484]: I0312 20:49:48.807583 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/980191fe-c62c-4b9e-879c-38fa8ce0a58b-serving-cert\") pod \"openshift-config-operator-64488f9d78-zsd76\" (UID: \"980191fe-c62c-4b9e-879c-38fa8ce0a58b\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76"
Mar 12 20:49:48.807684 master-0 kubenswrapper[7484]: I0312 20:49:48.807609 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:48.807684 master-0 kubenswrapper[7484]: I0312 20:49:48.807702 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/900228dd-2d21-4759-87da-b027b0134ad8-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5"
Mar 12 20:49:48.807684 master-0 kubenswrapper[7484]: I0312 20:49:48.807671 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15ebfbd8-0782-431a-88a3-83af328498d2-config\") pod \"openshift-apiserver-operator-799b6db4d7-jwthf\" (UID: \"15ebfbd8-0782-431a-88a3-83af328498d2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-jwthf"
Mar 12 20:49:48.808594 master-0 kubenswrapper[7484]: I0312 20:49:48.807731 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96bd86df-2101-47f5-844b-1332261c66f1-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-f2kg4\" (UID: \"96bd86df-2101-47f5-844b-1332261c66f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-f2kg4"
Mar 12 20:49:48.808594 master-0 kubenswrapper[7484]: I0312 20:49:48.807864 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-var-lib-cni-multus\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm"
Mar 12 20:49:48.808594 master-0 kubenswrapper[7484]: I0312 20:49:48.807942 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-slash\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:48.808594 master-0 kubenswrapper[7484]: I0312 20:49:48.807980 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d-config\") pod \"service-ca-operator-69b6fc6b88-f62j6\" (UID: \"a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-f62j6"
Mar 12 20:49:48.808594 master-0 kubenswrapper[7484]: I0312 20:49:48.808006 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6-metrics-tls\") pod \"network-operator-7c649bf6d4-62t2f\" (UID: \"fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6\") " pod="openshift-network-operator/network-operator-7c649bf6d4-62t2f"
Mar 12 20:49:48.808594 master-0 kubenswrapper[7484]: I0312 20:49:48.808024 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96bd86df-2101-47f5-844b-1332261c66f1-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-f2kg4\" (UID: \"96bd86df-2101-47f5-844b-1332261c66f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-f2kg4"
Mar 12 20:49:48.808594 master-0 kubenswrapper[7484]: I0312 20:49:48.808112 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7623a5c6-47a9-4b75-bda8-c0a2d7c67272-config\") pod \"openshift-controller-manager-operator-8565d84698-vp2hs\" (UID: \"7623a5c6-47a9-4b75-bda8-c0a2d7c67272\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-vp2hs"
Mar 12 20:49:48.808594 master-0 kubenswrapper[7484]: I0312 20:49:48.808289 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-os-release\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw"
Mar 12 20:49:48.808594 master-0 kubenswrapper[7484]: I0312 20:49:48.808315 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7623a5c6-47a9-4b75-bda8-c0a2d7c67272-config\") pod \"openshift-controller-manager-operator-8565d84698-vp2hs\" (UID: \"7623a5c6-47a9-4b75-bda8-c0a2d7c67272\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-vp2hs"
Mar 12 20:49:48.808594 master-0 kubenswrapper[7484]: I0312 20:49:48.808333 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2b71f537-1cc2-4645-8e50-23941635457c-bound-sa-token\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68"
Mar 12 20:49:48.808594 master-0 kubenswrapper[7484]: I0312 20:49:48.808371 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs\") pod \"network-metrics-daemon-brdcd\" (UID: \"c8660437-633f-4132-8a61-fe998abb493e\") " pod="openshift-multus/network-metrics-daemon-brdcd"
Mar 12 20:49:48.808594 master-0 kubenswrapper[7484]: I0312 20:49:48.808409 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/226cb3a1-984f-4410-96e6-c007131dc074-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-kbwlh\" (UID: \"226cb3a1-984f-4410-96e6-c007131dc074\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh"
Mar 12 20:49:48.808594 master-0 kubenswrapper[7484]: I0312 20:49:48.808449 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert\") pod \"catalog-operator-7d9c49f57b-tpvl4\" (UID: \"98d99166-c42a-4169-87e8-4209570aec50\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4"
Mar 12 20:49:48.808594 master-0 kubenswrapper[7484]: I0312 20:49:48.808487 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-hxqgw\" (UID: \"e624e623-6d59-444d-b548-165fa5fd2581\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw"
Mar 12 20:49:48.808594 master-0 kubenswrapper[7484]: I0312 20:49:48.808533 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-os-release\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm"
Mar 12 20:49:48.808594 master-0 kubenswrapper[7484]: I0312 20:49:48.808548 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/226cb3a1-984f-4410-96e6-c007131dc074-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-kbwlh\" (UID: \"226cb3a1-984f-4410-96e6-c007131dc074\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh"
Mar 12 20:49:48.808594 master-0 kubenswrapper[7484]: I0312 20:49:48.808540 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6-metrics-tls\") pod \"network-operator-7c649bf6d4-62t2f\" (UID: \"fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6\") " pod="openshift-network-operator/network-operator-7c649bf6d4-62t2f"
Mar 12 20:49:48.808594 master-0 kubenswrapper[7484]: I0312 20:49:48.808568 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-run-k8s-cni-cncf-io\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm"
Mar 12 20:49:48.809377 master-0 kubenswrapper[7484]: I0312 20:49:48.808630 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d862a346-ec4d-46f6-a3e2-ea8759ea0111-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-vq95t\" (UID: \"d862a346-ec4d-46f6-a3e2-ea8759ea0111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t"
Mar 12 20:49:48.809377 master-0 kubenswrapper[7484]: I0312 20:49:48.808682 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/02649264-040a-41a6-9a41-8bf6416c68ff-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-j9tpt\" (UID: \"02649264-040a-41a6-9a41-8bf6416c68ff\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt"
Mar 12 20:49:48.809377 master-0 kubenswrapper[7484]: I0312 20:49:48.808712 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-j9tpt\" (UID: \"02649264-040a-41a6-9a41-8bf6416c68ff\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt"
Mar 12 20:49:48.809377 master-0 kubenswrapper[7484]: I0312 20:49:48.808742 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-run-ovn\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:48.809377 master-0 kubenswrapper[7484]: I0312 20:49:48.808829 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-hxqgw\" (UID: \"e624e623-6d59-444d-b548-165fa5fd2581\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw"
Mar 12 20:49:48.809377 master-0 kubenswrapper[7484]: I0312 20:49:48.808971 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c3daeefa-7842-464c-a6c9-01b44ebea477-ovn-node-metrics-cert\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:48.809377 master-0 kubenswrapper[7484]: I0312 20:49:48.809017 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d862a346-ec4d-46f6-a3e2-ea8759ea0111-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-vq95t\" (UID: \"d862a346-ec4d-46f6-a3e2-ea8759ea0111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t"
Mar 12 20:49:48.809377 master-0 kubenswrapper[7484]: I0312 20:49:48.809034 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c3daeefa-7842-464c-a6c9-01b44ebea477-ovnkube-script-lib\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:48.809377 master-0 kubenswrapper[7484]: I0312 20:49:48.809089 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2r2r\" (UniqueName: \"kubernetes.io/projected/617f0f9c-50d5-4214-b30f-5110fd4399ec-kube-api-access-f2r2r\") pod \"iptables-alerter-krpjj\" (UID: \"617f0f9c-50d5-4214-b30f-5110fd4399ec\") " pod="openshift-network-operator/iptables-alerter-krpjj"
Mar 12 20:49:48.809377 master-0 kubenswrapper[7484]: I0312 20:49:48.809127 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-run-netns\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm"
Mar 12 20:49:48.809377 master-0 kubenswrapper[7484]: I0312 20:49:48.809183 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d862a346-ec4d-46f6-a3e2-ea8759ea0111-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-vq95t\" (UID: \"d862a346-ec4d-46f6-a3e2-ea8759ea0111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t"
Mar 12 20:49:48.809377 master-0 kubenswrapper[7484]: I0312 20:49:48.809246 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-whereabouts-configmap\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw"
Mar 12 20:49:48.809377 master-0 kubenswrapper[7484]: I0312 20:49:48.809305 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68"
Mar 12 20:49:48.809377 master-0 kubenswrapper[7484]: I0312 20:49:48.809367 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-run-ovn-kubernetes\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:48.810012 master-0 kubenswrapper[7484]: I0312 20:49:48.809404 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5471994f-769e-4124-b7d0-01f5358fc18f-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9"
Mar 12 20:49:48.810012 master-0 kubenswrapper[7484]: I0312 20:49:48.809478 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d862a346-ec4d-46f6-a3e2-ea8759ea0111-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-vq95t\" (UID: \"d862a346-ec4d-46f6-a3e2-ea8759ea0111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t"
Mar 12 20:49:48.810012 master-0 kubenswrapper[7484]: I0312 20:49:48.809443 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/02649264-040a-41a6-9a41-8bf6416c68ff-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-j9tpt\" (UID: \"02649264-040a-41a6-9a41-8bf6416c68ff\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt"
Mar 12 20:49:48.810012 master-0 kubenswrapper[7484]: I0312 20:49:48.809501 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-whereabouts-configmap\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw"
Mar 12 20:49:48.810012 master-0 kubenswrapper[7484]: I0312 20:49:48.809529 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-multus-cni-dir\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm"
Mar 12 20:49:48.810012 master-0 kubenswrapper[7484]: I0312 20:49:48.809603 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5471994f-769e-4124-b7d0-01f5358fc18f-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9"
Mar 12 20:49:48.810012 master-0 kubenswrapper[7484]: I0312 20:49:48.809600 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-var-lib-kubelet\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm"
Mar 12 20:49:48.810012 master-0 kubenswrapper[7484]: I0312 20:49:48.809647 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c3daeefa-7842-464c-a6c9-01b44ebea477-ovn-node-metrics-cert\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:48.810012 master-0 kubenswrapper[7484]: I0312 20:49:48.809671 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvkp7\" (UniqueName: \"kubernetes.io/projected/900228dd-2d21-4759-87da-b027b0134ad8-kube-api-access-rvkp7\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5"
Mar 12 20:49:48.810012 master-0 kubenswrapper[7484]: I0312 20:49:48.809713 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName:
\"kubernetes.io/configmap/c3daeefa-7842-464c-a6c9-01b44ebea477-ovnkube-script-lib\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.810012 master-0 kubenswrapper[7484]: I0312 20:49:48.809870 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bk7q\" (UniqueName: \"kubernetes.io/projected/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-kube-api-access-7bk7q\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:49:48.810012 master-0 kubenswrapper[7484]: I0312 20:49:48.809954 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a307172-f010-4bad-a3fc-31607574b069-kube-api-access\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl" Mar 12 20:49:48.810609 master-0 kubenswrapper[7484]: I0312 20:49:48.810095 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/784599a3-a2ac-46ac-a4b7-9439704646cc-config\") pod \"kube-apiserver-operator-68bd585b-56nzk\" (UID: \"784599a3-a2ac-46ac-a4b7-9439704646cc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-56nzk" Mar 12 20:49:48.810609 master-0 kubenswrapper[7484]: I0312 20:49:48.810167 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kng9\" (UniqueName: \"kubernetes.io/projected/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6-kube-api-access-2kng9\") pod \"network-operator-7c649bf6d4-62t2f\" (UID: \"fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6\") " pod="openshift-network-operator/network-operator-7c649bf6d4-62t2f" Mar 12 20:49:48.810609 master-0 
kubenswrapper[7484]: I0312 20:49:48.810222 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d-serving-cert\") pod \"service-ca-operator-69b6fc6b88-f62j6\" (UID: \"a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-f62j6" Mar 12 20:49:48.810609 master-0 kubenswrapper[7484]: I0312 20:49:48.810293 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-577p4\" (UniqueName: \"kubernetes.io/projected/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d-kube-api-access-577p4\") pod \"service-ca-operator-69b6fc6b88-f62j6\" (UID: \"a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-f62j6" Mar 12 20:49:48.810609 master-0 kubenswrapper[7484]: I0312 20:49:48.810364 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx64q\" (UniqueName: \"kubernetes.io/projected/d862a346-ec4d-46f6-a3e2-ea8759ea0111-kube-api-access-jx64q\") pod \"ovnkube-control-plane-66b55d57d-vq95t\" (UID: \"d862a346-ec4d-46f6-a3e2-ea8759ea0111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t" Mar 12 20:49:48.810609 master-0 kubenswrapper[7484]: I0312 20:49:48.810408 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d-serving-cert\") pod \"service-ca-operator-69b6fc6b88-f62j6\" (UID: \"a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-f62j6" Mar 12 20:49:48.810609 master-0 kubenswrapper[7484]: I0312 20:49:48.810423 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3bebf49-1d92-4353-b84c-91ed86b7bb94-trusted-ca-bundle\") pod 
\"authentication-operator-7c6989d6c4-9j7rx\" (UID: \"a3bebf49-1d92-4353-b84c-91ed86b7bb94\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" Mar 12 20:49:48.810609 master-0 kubenswrapper[7484]: I0312 20:49:48.810486 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-var-lib-openvswitch\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.810609 master-0 kubenswrapper[7484]: I0312 20:49:48.810556 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rjm8\" (UniqueName: \"kubernetes.io/projected/426efd5c-69e1-43e5-835a-6e1c4ef85720-kube-api-access-8rjm8\") pod \"network-node-identity-48hk7\" (UID: \"426efd5c-69e1-43e5-835a-6e1c4ef85720\") " pod="openshift-network-node-identity/network-node-identity-48hk7" Mar 12 20:49:48.810964 master-0 kubenswrapper[7484]: I0312 20:49:48.810623 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5" Mar 12 20:49:48.810964 master-0 kubenswrapper[7484]: I0312 20:49:48.810685 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3bebf49-1d92-4353-b84c-91ed86b7bb94-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-9j7rx\" (UID: \"a3bebf49-1d92-4353-b84c-91ed86b7bb94\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" Mar 12 
20:49:48.810964 master-0 kubenswrapper[7484]: I0312 20:49:48.810704 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3bebf49-1d92-4353-b84c-91ed86b7bb94-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-9j7rx\" (UID: \"a3bebf49-1d92-4353-b84c-91ed86b7bb94\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" Mar 12 20:49:48.810964 master-0 kubenswrapper[7484]: I0312 20:49:48.810740 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-etc-openvswitch\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.810964 master-0 kubenswrapper[7484]: I0312 20:49:48.810835 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3bebf49-1d92-4353-b84c-91ed86b7bb94-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-9j7rx\" (UID: \"a3bebf49-1d92-4353-b84c-91ed86b7bb94\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" Mar 12 20:49:48.810964 master-0 kubenswrapper[7484]: I0312 20:49:48.810838 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clp9l\" (UniqueName: \"kubernetes.io/projected/2604b035-853c-42b7-a562-07d46178868a-kube-api-access-clp9l\") pod \"csi-snapshot-controller-operator-5685fbc7d-kf949\" (UID: \"2604b035-853c-42b7-a562-07d46178868a\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-kf949" Mar 12 20:49:48.810964 master-0 kubenswrapper[7484]: I0312 20:49:48.810904 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs\") pod \"multus-admission-controller-8d675b596-98j9w\" (UID: \"f8f4400c-474c-480f-b46c-cf7c80555004\") " pod="openshift-multus/multus-admission-controller-8d675b596-98j9w" Mar 12 20:49:48.810964 master-0 kubenswrapper[7484]: I0312 20:49:48.810932 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/784599a3-a2ac-46ac-a4b7-9439704646cc-config\") pod \"kube-apiserver-operator-68bd585b-56nzk\" (UID: \"784599a3-a2ac-46ac-a4b7-9439704646cc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-56nzk" Mar 12 20:49:48.811251 master-0 kubenswrapper[7484]: I0312 20:49:48.810965 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert\") pod \"olm-operator-d64cfc9db-q9hnk\" (UID: \"07330030-487d-4fa6-b5c3-67607355bbba\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk" Mar 12 20:49:48.811251 master-0 kubenswrapper[7484]: I0312 20:49:48.811033 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/426efd5c-69e1-43e5-835a-6e1c4ef85720-webhook-cert\") pod \"network-node-identity-48hk7\" (UID: \"426efd5c-69e1-43e5-835a-6e1c4ef85720\") " pod="openshift-network-node-identity/network-node-identity-48hk7" Mar 12 20:49:48.811251 master-0 kubenswrapper[7484]: I0312 20:49:48.811079 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-269gt\" (UID: \"4a67ecf3-823d-4948-a5cb-8bd1eb9f259c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-269gt" Mar 12 20:49:48.811251 
master-0 kubenswrapper[7484]: I0312 20:49:48.811109 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-269gt\" (UID: \"4a67ecf3-823d-4948-a5cb-8bd1eb9f259c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-269gt" Mar 12 20:49:48.811397 master-0 kubenswrapper[7484]: I0312 20:49:48.811283 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/426efd5c-69e1-43e5-835a-6e1c4ef85720-webhook-cert\") pod \"network-node-identity-48hk7\" (UID: \"426efd5c-69e1-43e5-835a-6e1c4ef85720\") " pod="openshift-network-node-identity/network-node-identity-48hk7" Mar 12 20:49:48.811397 master-0 kubenswrapper[7484]: I0312 20:49:48.811297 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96bd86df-2101-47f5-844b-1332261c66f1-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-f2kg4\" (UID: \"96bd86df-2101-47f5-844b-1332261c66f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-f2kg4" Mar 12 20:49:48.811397 master-0 kubenswrapper[7484]: I0312 20:49:48.811336 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1a307172-f010-4bad-a3fc-31607574b069-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl" Mar 12 20:49:48.811397 master-0 kubenswrapper[7484]: I0312 20:49:48.811365 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl" Mar 12 20:49:48.811397 master-0 kubenswrapper[7484]: I0312 20:49:48.811389 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" Mar 12 20:49:48.811584 master-0 kubenswrapper[7484]: I0312 20:49:48.811416 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-269gt\" (UID: \"4a67ecf3-823d-4948-a5cb-8bd1eb9f259c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-269gt" Mar 12 20:49:48.811584 master-0 kubenswrapper[7484]: I0312 20:49:48.811386 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-269gt\" (UID: \"4a67ecf3-823d-4948-a5cb-8bd1eb9f259c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-269gt" Mar 12 20:49:48.811584 master-0 kubenswrapper[7484]: I0312 20:49:48.811491 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96bd86df-2101-47f5-844b-1332261c66f1-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-f2kg4\" (UID: \"96bd86df-2101-47f5-844b-1332261c66f1\") 
" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-f2kg4" Mar 12 20:49:48.845514 master-0 kubenswrapper[7484]: I0312 20:49:48.845450 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlch7\" (UniqueName: \"kubernetes.io/projected/c8660437-633f-4132-8a61-fe998abb493e-kube-api-access-zlch7\") pod \"network-metrics-daemon-brdcd\" (UID: \"c8660437-633f-4132-8a61-fe998abb493e\") " pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 20:49:48.864606 master-0 kubenswrapper[7484]: I0312 20:49:48.864555 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wt5q\" (UniqueName: \"kubernetes.io/projected/980191fe-c62c-4b9e-879c-38fa8ce0a58b-kube-api-access-2wt5q\") pod \"openshift-config-operator-64488f9d78-zsd76\" (UID: \"980191fe-c62c-4b9e-879c-38fa8ce0a58b\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" Mar 12 20:49:48.885992 master-0 kubenswrapper[7484]: I0312 20:49:48.885918 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-269gt\" (UID: \"4a67ecf3-823d-4948-a5cb-8bd1eb9f259c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-269gt" Mar 12 20:49:48.912492 master-0 kubenswrapper[7484]: I0312 20:49:48.912337 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-os-release\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:49:48.912492 master-0 kubenswrapper[7484]: I0312 20:49:48.912516 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs\") pod \"network-metrics-daemon-brdcd\" (UID: \"c8660437-633f-4132-8a61-fe998abb493e\") " pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 20:49:48.912835 master-0 kubenswrapper[7484]: I0312 20:49:48.912560 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-os-release\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:49:48.912835 master-0 kubenswrapper[7484]: I0312 20:49:48.912623 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-os-release\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:49:48.912835 master-0 kubenswrapper[7484]: I0312 20:49:48.912697 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-os-release\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:49:48.912965 master-0 kubenswrapper[7484]: E0312 20:49:48.912845 7484 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 12 20:49:48.912965 master-0 kubenswrapper[7484]: I0312 20:49:48.912911 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-run-k8s-cni-cncf-io\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:49:48.912965 
master-0 kubenswrapper[7484]: E0312 20:49:48.912956 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs podName:c8660437-633f-4132-8a61-fe998abb493e nodeName:}" failed. No retries permitted until 2026-03-12 20:49:49.412931223 +0000 UTC m=+1.898200115 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs") pod "network-metrics-daemon-brdcd" (UID: "c8660437-633f-4132-8a61-fe998abb493e") : secret "metrics-daemon-secret" not found Mar 12 20:49:48.913445 master-0 kubenswrapper[7484]: I0312 20:49:48.912976 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-j9tpt\" (UID: \"02649264-040a-41a6-9a41-8bf6416c68ff\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt" Mar 12 20:49:48.913445 master-0 kubenswrapper[7484]: I0312 20:49:48.913007 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert\") pod \"catalog-operator-7d9c49f57b-tpvl4\" (UID: \"98d99166-c42a-4169-87e8-4209570aec50\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4" Mar 12 20:49:48.913445 master-0 kubenswrapper[7484]: I0312 20:49:48.913032 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-run-k8s-cni-cncf-io\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:49:48.913445 master-0 kubenswrapper[7484]: I0312 20:49:48.913032 
7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-run-ovn\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.913445 master-0 kubenswrapper[7484]: I0312 20:49:48.913083 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w68c\" (UniqueName: \"kubernetes.io/projected/a3bebf49-1d92-4353-b84c-91ed86b7bb94-kube-api-access-2w68c\") pod \"authentication-operator-7c6989d6c4-9j7rx\" (UID: \"a3bebf49-1d92-4353-b84c-91ed86b7bb94\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" Mar 12 20:49:48.913445 master-0 kubenswrapper[7484]: I0312 20:49:48.913119 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-run-netns\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:49:48.913445 master-0 kubenswrapper[7484]: I0312 20:49:48.913175 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" Mar 12 20:49:48.913445 master-0 kubenswrapper[7484]: E0312 20:49:48.913180 7484 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 12 20:49:48.913445 master-0 kubenswrapper[7484]: I0312 20:49:48.913197 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-run-ovn-kubernetes\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.913445 master-0 kubenswrapper[7484]: I0312 20:49:48.913227 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-multus-cni-dir\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:49:48.913445 master-0 kubenswrapper[7484]: I0312 20:49:48.913239 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-run-netns\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:49:48.913445 master-0 kubenswrapper[7484]: E0312 20:49:48.913258 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert podName:98d99166-c42a-4169-87e8-4209570aec50 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:49.413232121 +0000 UTC m=+1.898500923 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert") pod "catalog-operator-7d9c49f57b-tpvl4" (UID: "98d99166-c42a-4169-87e8-4209570aec50") : secret "catalog-operator-serving-cert" not found Mar 12 20:49:48.913445 master-0 kubenswrapper[7484]: I0312 20:49:48.913276 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-var-lib-kubelet\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:49:48.913445 master-0 kubenswrapper[7484]: I0312 20:49:48.913282 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-run-ovn-kubernetes\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.913445 master-0 kubenswrapper[7484]: E0312 20:49:48.913295 7484 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 12 20:49:48.913445 master-0 kubenswrapper[7484]: I0312 20:49:48.913315 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-var-lib-kubelet\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:49:48.913445 master-0 kubenswrapper[7484]: I0312 20:49:48.913347 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-multus-cni-dir\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " 
pod="openshift-multus/multus-gnmmm" Mar 12 20:49:48.913445 master-0 kubenswrapper[7484]: E0312 20:49:48.913350 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls podName:2b71f537-1cc2-4645-8e50-23941635457c nodeName:}" failed. No retries permitted until 2026-03-12 20:49:49.413335494 +0000 UTC m=+1.898604536 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls") pod "ingress-operator-677db989d6-qpf68" (UID: "2b71f537-1cc2-4645-8e50-23941635457c") : secret "metrics-tls" not found Mar 12 20:49:48.913445 master-0 kubenswrapper[7484]: E0312 20:49:48.913444 7484 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 12 20:49:48.914234 master-0 kubenswrapper[7484]: I0312 20:49:48.913478 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-run-ovn\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.914234 master-0 kubenswrapper[7484]: E0312 20:49:48.913485 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls podName:02649264-040a-41a6-9a41-8bf6416c68ff nodeName:}" failed. No retries permitted until 2026-03-12 20:49:49.413471727 +0000 UTC m=+1.898740799 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-j9tpt" (UID: "02649264-040a-41a6-9a41-8bf6416c68ff") : secret "cluster-monitoring-operator-tls" not found Mar 12 20:49:48.914234 master-0 kubenswrapper[7484]: I0312 20:49:48.913554 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-var-lib-openvswitch\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.914234 master-0 kubenswrapper[7484]: I0312 20:49:48.913613 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-var-lib-openvswitch\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.914234 master-0 kubenswrapper[7484]: I0312 20:49:48.913693 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5" Mar 12 20:49:48.914234 master-0 kubenswrapper[7484]: I0312 20:49:48.913725 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-etc-openvswitch\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.914234 master-0 kubenswrapper[7484]: I0312 20:49:48.913757 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1a307172-f010-4bad-a3fc-31607574b069-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl" Mar 12 20:49:48.914234 master-0 kubenswrapper[7484]: I0312 20:49:48.913826 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-etc-openvswitch\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.914234 master-0 kubenswrapper[7484]: I0312 20:49:48.913836 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1a307172-f010-4bad-a3fc-31607574b069-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl" Mar 12 20:49:48.914234 master-0 kubenswrapper[7484]: I0312 20:49:48.913872 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl" Mar 12 20:49:48.914234 master-0 kubenswrapper[7484]: E0312 20:49:48.913906 7484 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not 
found Mar 12 20:49:48.914234 master-0 kubenswrapper[7484]: E0312 20:49:48.913951 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls podName:900228dd-2d21-4759-87da-b027b0134ad8 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:49.413937708 +0000 UTC m=+1.899206530 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-hmtz5" (UID: "900228dd-2d21-4759-87da-b027b0134ad8") : secret "image-registry-operator-tls" not found Mar 12 20:49:48.914234 master-0 kubenswrapper[7484]: I0312 20:49:48.913905 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs\") pod \"multus-admission-controller-8d675b596-98j9w\" (UID: \"f8f4400c-474c-480f-b46c-cf7c80555004\") " pod="openshift-multus/multus-admission-controller-8d675b596-98j9w" Mar 12 20:49:48.914234 master-0 kubenswrapper[7484]: E0312 20:49:48.914011 7484 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 12 20:49:48.914234 master-0 kubenswrapper[7484]: I0312 20:49:48.914031 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert\") pod \"olm-operator-d64cfc9db-q9hnk\" (UID: \"07330030-487d-4fa6-b5c3-67607355bbba\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk" Mar 12 20:49:48.914234 master-0 kubenswrapper[7484]: E0312 20:49:48.914069 7484 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs podName:f8f4400c-474c-480f-b46c-cf7c80555004 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:49.414053641 +0000 UTC m=+1.899322693 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs") pod "multus-admission-controller-8d675b596-98j9w" (UID: "f8f4400c-474c-480f-b46c-cf7c80555004") : secret "multus-admission-controller-secret" not found Mar 12 20:49:48.914234 master-0 kubenswrapper[7484]: E0312 20:49:48.914080 7484 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 12 20:49:48.914234 master-0 kubenswrapper[7484]: E0312 20:49:48.914102 7484 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 12 20:49:48.914234 master-0 kubenswrapper[7484]: I0312 20:49:48.914102 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" Mar 12 20:49:48.914234 master-0 kubenswrapper[7484]: E0312 20:49:48.914129 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert podName:1a307172-f010-4bad-a3fc-31607574b069 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:49.414115582 +0000 UTC m=+1.899384604 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert") pod "cluster-version-operator-745944c6b7-wddgl" (UID: "1a307172-f010-4bad-a3fc-31607574b069") : secret "cluster-version-operator-serving-cert" not found Mar 12 20:49:48.914234 master-0 kubenswrapper[7484]: E0312 20:49:48.914153 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert podName:07330030-487d-4fa6-b5c3-67607355bbba nodeName:}" failed. No retries permitted until 2026-03-12 20:49:49.414144793 +0000 UTC m=+1.899413835 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert") pod "olm-operator-d64cfc9db-q9hnk" (UID: "07330030-487d-4fa6-b5c3-67607355bbba") : secret "olm-operator-serving-cert" not found Mar 12 20:49:48.914234 master-0 kubenswrapper[7484]: I0312 20:49:48.914172 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:49:48.914234 master-0 kubenswrapper[7484]: E0312 20:49:48.914196 7484 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 12 20:49:48.914234 master-0 kubenswrapper[7484]: I0312 20:49:48.914228 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " 
pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:49:48.914234 master-0 kubenswrapper[7484]: I0312 20:49:48.914229 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-run-multus-certs\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:49:48.914234 master-0 kubenswrapper[7484]: E0312 20:49:48.914248 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls podName:981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:49.414233425 +0000 UTC m=+1.899502407 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-69rp9" (UID: "981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9") : secret "node-tuning-operator-tls" not found Mar 12 20:49:48.914234 master-0 kubenswrapper[7484]: I0312 20:49:48.914203 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-run-multus-certs\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:49:48.915318 master-0 kubenswrapper[7484]: I0312 20:49:48.914339 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" Mar 12 20:49:48.915318 master-0 kubenswrapper[7484]: I0312 20:49:48.914412 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-run-openvswitch\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.915318 master-0 kubenswrapper[7484]: I0312 20:49:48.914453 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6-host-etc-kube\") pod \"network-operator-7c649bf6d4-62t2f\" (UID: \"fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6\") " pod="openshift-network-operator/network-operator-7c649bf6d4-62t2f" Mar 12 20:49:48.915318 master-0 kubenswrapper[7484]: E0312 20:49:48.914475 7484 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 12 20:49:48.915318 master-0 kubenswrapper[7484]: E0312 20:49:48.914531 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert podName:981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:49.414519662 +0000 UTC m=+1.899788674 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-69rp9" (UID: "981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9") : secret "performance-addon-operator-webhook-cert" not found Mar 12 20:49:48.915318 master-0 kubenswrapper[7484]: I0312 20:49:48.914590 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-run-openvswitch\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.915318 master-0 kubenswrapper[7484]: I0312 20:49:48.914642 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-hostroot\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:49:48.915318 master-0 kubenswrapper[7484]: I0312 20:49:48.914682 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-hostroot\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:49:48.915318 master-0 kubenswrapper[7484]: I0312 20:49:48.914725 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-cni-netd\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.915318 master-0 kubenswrapper[7484]: I0312 20:49:48.914740 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6-host-etc-kube\") pod \"network-operator-7c649bf6d4-62t2f\" (UID: \"fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6\") " pod="openshift-network-operator/network-operator-7c649bf6d4-62t2f" Mar 12 20:49:48.915318 master-0 kubenswrapper[7484]: I0312 20:49:48.914774 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-cni-netd\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.915318 master-0 kubenswrapper[7484]: I0312 20:49:48.914840 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-system-cni-dir\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:49:48.915318 master-0 kubenswrapper[7484]: I0312 20:49:48.914861 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls\") pod \"dns-operator-589895fbb7-tvrxp\" (UID: \"855747e5-d9b4-4eef-8bc4-425d6a8e95c7\") " pod="openshift-dns-operator/dns-operator-589895fbb7-tvrxp" Mar 12 20:49:48.915318 master-0 kubenswrapper[7484]: I0312 20:49:48.914910 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-system-cni-dir\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:49:48.915318 master-0 kubenswrapper[7484]: I0312 20:49:48.914978 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-etc-kubernetes\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:49:48.915318 master-0 kubenswrapper[7484]: E0312 20:49:48.915031 7484 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 12 20:49:48.915318 master-0 kubenswrapper[7484]: E0312 20:49:48.915066 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls podName:855747e5-d9b4-4eef-8bc4-425d6a8e95c7 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:49.415055986 +0000 UTC m=+1.900324788 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls") pod "dns-operator-589895fbb7-tvrxp" (UID: "855747e5-d9b4-4eef-8bc4-425d6a8e95c7") : secret "metrics-tls" not found Mar 12 20:49:48.915318 master-0 kubenswrapper[7484]: I0312 20:49:48.915035 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-node-log\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.915318 master-0 kubenswrapper[7484]: I0312 20:49:48.915075 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-etc-kubernetes\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:49:48.915318 master-0 kubenswrapper[7484]: I0312 20:49:48.915144 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-kubelet\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.915318 master-0 kubenswrapper[7484]: I0312 20:49:48.915177 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-run-systemd\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.915318 master-0 kubenswrapper[7484]: I0312 20:49:48.915206 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1a307172-f010-4bad-a3fc-31607574b069-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl" Mar 12 20:49:48.915318 master-0 kubenswrapper[7484]: I0312 20:49:48.915245 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-kubelet\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.915318 master-0 kubenswrapper[7484]: I0312 20:49:48.915253 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csxwl\" (UniqueName: \"kubernetes.io/projected/5ad63582-bd60-41a1-9622-ee73ccf8a5e8-kube-api-access-csxwl\") pod \"network-check-target-h26wj\" (UID: \"5ad63582-bd60-41a1-9622-ee73ccf8a5e8\") " pod="openshift-network-diagnostics/network-check-target-h26wj" Mar 12 20:49:48.916665 master-0 kubenswrapper[7484]: I0312 20:49:48.915377 7484 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-cnibin\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:49:48.916665 master-0 kubenswrapper[7484]: I0312 20:49:48.915414 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-systemd-units\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.916665 master-0 kubenswrapper[7484]: I0312 20:49:48.915433 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-run-systemd\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.916665 master-0 kubenswrapper[7484]: I0312 20:49:48.915447 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-cni-bin\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.916665 master-0 kubenswrapper[7484]: I0312 20:49:48.915473 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1a307172-f010-4bad-a3fc-31607574b069-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl" Mar 12 20:49:48.916665 master-0 kubenswrapper[7484]: I0312 20:49:48.915513 7484 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-systemd-units\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.916665 master-0 kubenswrapper[7484]: I0312 20:49:48.915529 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-hxqgw\" (UID: \"e624e623-6d59-444d-b548-165fa5fd2581\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" Mar 12 20:49:48.916665 master-0 kubenswrapper[7484]: I0312 20:49:48.915554 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-cnibin\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:49:48.916665 master-0 kubenswrapper[7484]: I0312 20:49:48.915601 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-var-lib-cni-bin\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:49:48.916665 master-0 kubenswrapper[7484]: I0312 20:49:48.915626 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-cni-bin\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.916665 master-0 kubenswrapper[7484]: I0312 20:49:48.915638 7484 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-cdcc8\" (UID: \"54184647-6e9a-43f7-90b1-5d8815f8b1ab\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8" Mar 12 20:49:48.916665 master-0 kubenswrapper[7484]: I0312 20:49:48.915695 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-var-lib-cni-bin\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:49:48.916665 master-0 kubenswrapper[7484]: E0312 20:49:48.915778 7484 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 12 20:49:48.916665 master-0 kubenswrapper[7484]: I0312 20:49:48.915827 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-system-cni-dir\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:49:48.916665 master-0 kubenswrapper[7484]: I0312 20:49:48.915855 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-log-socket\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.916665 master-0 kubenswrapper[7484]: E0312 20:49:48.915790 7484 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret 
"marketplace-operator-metrics" not found Mar 12 20:49:48.916665 master-0 kubenswrapper[7484]: I0312 20:49:48.915889 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-log-socket\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.916665 master-0 kubenswrapper[7484]: E0312 20:49:48.915900 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert podName:54184647-6e9a-43f7-90b1-5d8815f8b1ab nodeName:}" failed. No retries permitted until 2026-03-12 20:49:49.415868466 +0000 UTC m=+1.901137308 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-cdcc8" (UID: "54184647-6e9a-43f7-90b1-5d8815f8b1ab") : secret "package-server-manager-serving-cert" not found Mar 12 20:49:48.916665 master-0 kubenswrapper[7484]: I0312 20:49:48.915952 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-system-cni-dir\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:49:48.916665 master-0 kubenswrapper[7484]: E0312 20:49:48.916006 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics podName:e624e623-6d59-444d-b548-165fa5fd2581 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:49.41599102 +0000 UTC m=+1.901260062 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-hxqgw" (UID: "e624e623-6d59-444d-b548-165fa5fd2581") : secret "marketplace-operator-metrics" not found Mar 12 20:49:48.916665 master-0 kubenswrapper[7484]: I0312 20:49:48.916017 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/617f0f9c-50d5-4214-b30f-5110fd4399ec-host-slash\") pod \"iptables-alerter-krpjj\" (UID: \"617f0f9c-50d5-4214-b30f-5110fd4399ec\") " pod="openshift-network-operator/iptables-alerter-krpjj" Mar 12 20:49:48.916665 master-0 kubenswrapper[7484]: I0312 20:49:48.916073 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-run-netns\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.916665 master-0 kubenswrapper[7484]: I0312 20:49:48.916117 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-cnibin\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:49:48.916665 master-0 kubenswrapper[7484]: I0312 20:49:48.916124 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/617f0f9c-50d5-4214-b30f-5110fd4399ec-host-slash\") pod \"iptables-alerter-krpjj\" (UID: \"617f0f9c-50d5-4214-b30f-5110fd4399ec\") " pod="openshift-network-operator/iptables-alerter-krpjj" Mar 12 20:49:48.916665 master-0 kubenswrapper[7484]: I0312 20:49:48.916134 7484 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-run-netns\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.916665 master-0 kubenswrapper[7484]: I0312 20:49:48.916171 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-multus-socket-dir-parent\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:49:48.916665 master-0 kubenswrapper[7484]: I0312 20:49:48.916203 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-cnibin\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:49:48.916665 master-0 kubenswrapper[7484]: I0312 20:49:48.916208 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-multus-conf-dir\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:49:48.916665 master-0 kubenswrapper[7484]: I0312 20:49:48.916259 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-multus-conf-dir\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:49:48.916665 master-0 kubenswrapper[7484]: I0312 20:49:48.916500 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-multus-socket-dir-parent\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:49:48.916665 master-0 kubenswrapper[7484]: I0312 20:49:48.916274 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.916665 master-0 kubenswrapper[7484]: I0312 20:49:48.916543 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-var-lib-cni-multus\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:49:48.916665 master-0 kubenswrapper[7484]: I0312 20:49:48.916548 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.916665 master-0 kubenswrapper[7484]: I0312 20:49:48.916608 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-slash\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.916665 master-0 kubenswrapper[7484]: I0312 20:49:48.916679 7484 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-var-lib-cni-multus\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:49:48.916665 master-0 kubenswrapper[7484]: I0312 20:49:48.916705 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-slash\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.918480 master-0 kubenswrapper[7484]: I0312 20:49:48.916720 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-node-log\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:48.925226 master-0 kubenswrapper[7484]: I0312 20:49:48.925165 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/784599a3-a2ac-46ac-a4b7-9439704646cc-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-56nzk\" (UID: \"784599a3-a2ac-46ac-a4b7-9439704646cc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-56nzk" Mar 12 20:49:48.944275 master-0 kubenswrapper[7484]: I0312 20:49:48.944226 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q78vj\" (UniqueName: \"kubernetes.io/projected/7623a5c6-47a9-4b75-bda8-c0a2d7c67272-kube-api-access-q78vj\") pod \"openshift-controller-manager-operator-8565d84698-vp2hs\" (UID: \"7623a5c6-47a9-4b75-bda8-c0a2d7c67272\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-vp2hs" Mar 12 20:49:48.962369 master-0 kubenswrapper[7484]: I0312 
20:49:48.962264 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfsvw\" (UniqueName: \"kubernetes.io/projected/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-kube-api-access-mfsvw\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 20:49:48.969363 master-0 kubenswrapper[7484]: I0312 20:49:48.969167 7484 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 20:49:48.995171 master-0 kubenswrapper[7484]: I0312 20:49:48.995100 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrk7w\" (UniqueName: \"kubernetes.io/projected/c3daeefa-7842-464c-a6c9-01b44ebea477-kube-api-access-jrk7w\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:49.010143 master-0 kubenswrapper[7484]: I0312 20:49:49.010079 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vvf6\" (UniqueName: \"kubernetes.io/projected/2b71f537-1cc2-4645-8e50-23941635457c-kube-api-access-8vvf6\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" Mar 12 20:49:49.030797 master-0 kubenswrapper[7484]: I0312 20:49:49.030746 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j7lq\" (UniqueName: \"kubernetes.io/projected/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-kube-api-access-6j7lq\") pod \"dns-operator-589895fbb7-tvrxp\" (UID: \"855747e5-d9b4-4eef-8bc4-425d6a8e95c7\") " pod="openshift-dns-operator/dns-operator-589895fbb7-tvrxp" Mar 12 20:49:49.042429 master-0 kubenswrapper[7484]: I0312 20:49:49.042303 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lltk\" (UniqueName: 
\"kubernetes.io/projected/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-kube-api-access-2lltk\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" Mar 12 20:49:49.065459 master-0 kubenswrapper[7484]: I0312 20:49:49.065414 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhcsd\" (UniqueName: \"kubernetes.io/projected/07330030-487d-4fa6-b5c3-67607355bbba-kube-api-access-bhcsd\") pod \"olm-operator-d64cfc9db-q9hnk\" (UID: \"07330030-487d-4fa6-b5c3-67607355bbba\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk" Mar 12 20:49:49.082141 master-0 kubenswrapper[7484]: I0312 20:49:49.082090 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-258hz\" (UniqueName: \"kubernetes.io/projected/98d99166-c42a-4169-87e8-4209570aec50-kube-api-access-258hz\") pod \"catalog-operator-7d9c49f57b-tpvl4\" (UID: \"98d99166-c42a-4169-87e8-4209570aec50\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4" Mar 12 20:49:49.101567 master-0 kubenswrapper[7484]: I0312 20:49:49.101506 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/96bd86df-2101-47f5-844b-1332261c66f1-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-f2kg4\" (UID: \"96bd86df-2101-47f5-844b-1332261c66f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-f2kg4" Mar 12 20:49:49.137025 master-0 kubenswrapper[7484]: I0312 20:49:49.136955 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5v9f\" (UniqueName: \"kubernetes.io/projected/02649264-040a-41a6-9a41-8bf6416c68ff-kube-api-access-k5v9f\") pod \"cluster-monitoring-operator-674cbfbd9d-j9tpt\" (UID: 
\"02649264-040a-41a6-9a41-8bf6416c68ff\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt" Mar 12 20:49:49.142343 master-0 kubenswrapper[7484]: I0312 20:49:49.142295 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbbc5\" (UniqueName: \"kubernetes.io/projected/15ebfbd8-0782-431a-88a3-83af328498d2-kube-api-access-mbbc5\") pod \"openshift-apiserver-operator-799b6db4d7-jwthf\" (UID: \"15ebfbd8-0782-431a-88a3-83af328498d2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-jwthf" Mar 12 20:49:49.154925 master-0 kubenswrapper[7484]: I0312 20:49:49.153961 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:49:49.169186 master-0 kubenswrapper[7484]: I0312 20:49:49.168985 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzwrw\" (UniqueName: \"kubernetes.io/projected/54184647-6e9a-43f7-90b1-5d8815f8b1ab-kube-api-access-kzwrw\") pod \"package-server-manager-854648ff6d-cdcc8\" (UID: \"54184647-6e9a-43f7-90b1-5d8815f8b1ab\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8" Mar 12 20:49:49.182282 master-0 kubenswrapper[7484]: I0312 20:49:49.182233 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjh5f\" (UniqueName: \"kubernetes.io/projected/f8f4400c-474c-480f-b46c-cf7c80555004-kube-api-access-vjh5f\") pod \"multus-admission-controller-8d675b596-98j9w\" (UID: \"f8f4400c-474c-480f-b46c-cf7c80555004\") " pod="openshift-multus/multus-admission-controller-8d675b596-98j9w" Mar 12 20:49:49.203336 master-0 kubenswrapper[7484]: I0312 20:49:49.203246 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7rrv\" (UniqueName: \"kubernetes.io/projected/5471994f-769e-4124-b7d0-01f5358fc18f-kube-api-access-f7rrv\") pod \"etcd-operator-5884b9cd56-xh6r9\" 
(UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" Mar 12 20:49:49.228426 master-0 kubenswrapper[7484]: I0312 20:49:49.228368 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/900228dd-2d21-4759-87da-b027b0134ad8-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5" Mar 12 20:49:49.241963 master-0 kubenswrapper[7484]: I0312 20:49:49.241922 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9xld\" (UniqueName: \"kubernetes.io/projected/07542516-49c8-4e20-9b97-798fbff850a5-kube-api-access-z9xld\") pod \"kube-storage-version-migrator-operator-7f65c457f5-qfbrj\" (UID: \"07542516-49c8-4e20-9b97-798fbff850a5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-qfbrj" Mar 12 20:49:49.261355 master-0 kubenswrapper[7484]: I0312 20:49:49.261308 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5c6t\" (UniqueName: \"kubernetes.io/projected/e624e623-6d59-444d-b548-165fa5fd2581-kube-api-access-c5c6t\") pod \"marketplace-operator-64bf9778cb-hxqgw\" (UID: \"e624e623-6d59-444d-b548-165fa5fd2581\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" Mar 12 20:49:49.285169 master-0 kubenswrapper[7484]: I0312 20:49:49.285113 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9z6l\" (UniqueName: \"kubernetes.io/projected/226cb3a1-984f-4410-96e6-c007131dc074-kube-api-access-b9z6l\") pod \"cluster-olm-operator-77899cf6d-kbwlh\" (UID: \"226cb3a1-984f-4410-96e6-c007131dc074\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh" Mar 12 20:49:49.301236 master-0 kubenswrapper[7484]: I0312 
20:49:49.301192 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2r2r\" (UniqueName: \"kubernetes.io/projected/617f0f9c-50d5-4214-b30f-5110fd4399ec-kube-api-access-f2r2r\") pod \"iptables-alerter-krpjj\" (UID: \"617f0f9c-50d5-4214-b30f-5110fd4399ec\") " pod="openshift-network-operator/iptables-alerter-krpjj" Mar 12 20:49:49.324214 master-0 kubenswrapper[7484]: I0312 20:49:49.324117 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2b71f537-1cc2-4645-8e50-23941635457c-bound-sa-token\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" Mar 12 20:49:49.349395 master-0 kubenswrapper[7484]: I0312 20:49:49.349326 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvkp7\" (UniqueName: \"kubernetes.io/projected/900228dd-2d21-4759-87da-b027b0134ad8-kube-api-access-rvkp7\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5" Mar 12 20:49:49.364993 master-0 kubenswrapper[7484]: I0312 20:49:49.364936 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bk7q\" (UniqueName: \"kubernetes.io/projected/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-kube-api-access-7bk7q\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 20:49:49.394351 master-0 kubenswrapper[7484]: I0312 20:49:49.390907 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a307172-f010-4bad-a3fc-31607574b069-kube-api-access\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: 
\"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl" Mar 12 20:49:49.426903 master-0 kubenswrapper[7484]: I0312 20:49:49.426106 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kng9\" (UniqueName: \"kubernetes.io/projected/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6-kube-api-access-2kng9\") pod \"network-operator-7c649bf6d4-62t2f\" (UID: \"fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6\") " pod="openshift-network-operator/network-operator-7c649bf6d4-62t2f" Mar 12 20:49:49.426903 master-0 kubenswrapper[7484]: I0312 20:49:49.426484 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-577p4\" (UniqueName: \"kubernetes.io/projected/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d-kube-api-access-577p4\") pod \"service-ca-operator-69b6fc6b88-f62j6\" (UID: \"a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-f62j6" Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: I0312 20:49:49.427354 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: I0312 20:49:49.427399 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls\") pod \"dns-operator-589895fbb7-tvrxp\" (UID: \"855747e5-d9b4-4eef-8bc4-425d6a8e95c7\") " pod="openshift-dns-operator/dns-operator-589895fbb7-tvrxp" Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: I0312 20:49:49.427441 7484 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-cdcc8\" (UID: \"54184647-6e9a-43f7-90b1-5d8815f8b1ab\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8" Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: I0312 20:49:49.427468 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-hxqgw\" (UID: \"e624e623-6d59-444d-b548-165fa5fd2581\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: I0312 20:49:49.427499 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs\") pod \"network-metrics-daemon-brdcd\" (UID: \"c8660437-633f-4132-8a61-fe998abb493e\") " pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: I0312 20:49:49.427520 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-j9tpt\" (UID: \"02649264-040a-41a6-9a41-8bf6416c68ff\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt" Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: I0312 20:49:49.427540 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert\") pod 
\"catalog-operator-7d9c49f57b-tpvl4\" (UID: \"98d99166-c42a-4169-87e8-4209570aec50\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4" Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: I0312 20:49:49.427563 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: I0312 20:49:49.427600 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5" Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: I0312 20:49:49.427627 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl" Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: I0312 20:49:49.427647 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs\") pod \"multus-admission-controller-8d675b596-98j9w\" (UID: \"f8f4400c-474c-480f-b46c-cf7c80555004\") " pod="openshift-multus/multus-admission-controller-8d675b596-98j9w" Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: I0312 
20:49:49.427665 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert\") pod \"olm-operator-d64cfc9db-q9hnk\" (UID: \"07330030-487d-4fa6-b5c3-67607355bbba\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk" Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: I0312 20:49:49.427685 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: E0312 20:49:49.427827 7484 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: E0312 20:49:49.427913 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls podName:981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:50.427857876 +0000 UTC m=+2.913126688 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-69rp9" (UID: "981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9") : secret "node-tuning-operator-tls" not found Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: E0312 20:49:49.428542 7484 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: E0312 20:49:49.428572 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert podName:981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:50.428563544 +0000 UTC m=+2.913832346 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-69rp9" (UID: "981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9") : secret "performance-addon-operator-webhook-cert" not found Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: E0312 20:49:49.428612 7484 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: E0312 20:49:49.428634 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls podName:855747e5-d9b4-4eef-8bc4-425d6a8e95c7 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:50.428626986 +0000 UTC m=+2.913895788 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls") pod "dns-operator-589895fbb7-tvrxp" (UID: "855747e5-d9b4-4eef-8bc4-425d6a8e95c7") : secret "metrics-tls" not found Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: E0312 20:49:49.428673 7484 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: E0312 20:49:49.428695 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert podName:54184647-6e9a-43f7-90b1-5d8815f8b1ab nodeName:}" failed. No retries permitted until 2026-03-12 20:49:50.428688827 +0000 UTC m=+2.913957629 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-cdcc8" (UID: "54184647-6e9a-43f7-90b1-5d8815f8b1ab") : secret "package-server-manager-serving-cert" not found Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: E0312 20:49:49.428733 7484 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: E0312 20:49:49.428754 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics podName:e624e623-6d59-444d-b548-165fa5fd2581 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:50.428747628 +0000 UTC m=+2.914016430 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-hxqgw" (UID: "e624e623-6d59-444d-b548-165fa5fd2581") : secret "marketplace-operator-metrics" not found Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: E0312 20:49:49.428792 7484 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: E0312 20:49:49.428831 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs podName:c8660437-633f-4132-8a61-fe998abb493e nodeName:}" failed. No retries permitted until 2026-03-12 20:49:50.42882338 +0000 UTC m=+2.914092182 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs") pod "network-metrics-daemon-brdcd" (UID: "c8660437-633f-4132-8a61-fe998abb493e") : secret "metrics-daemon-secret" not found Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: E0312 20:49:49.428872 7484 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: E0312 20:49:49.428895 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls podName:02649264-040a-41a6-9a41-8bf6416c68ff nodeName:}" failed. No retries permitted until 2026-03-12 20:49:50.428886742 +0000 UTC m=+2.914155544 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-j9tpt" (UID: "02649264-040a-41a6-9a41-8bf6416c68ff") : secret "cluster-monitoring-operator-tls" not found Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: E0312 20:49:49.428932 7484 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: E0312 20:49:49.428952 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert podName:98d99166-c42a-4169-87e8-4209570aec50 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:50.428945573 +0000 UTC m=+2.914214375 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert") pod "catalog-operator-7d9c49f57b-tpvl4" (UID: "98d99166-c42a-4169-87e8-4209570aec50") : secret "catalog-operator-serving-cert" not found Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: E0312 20:49:49.428990 7484 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: E0312 20:49:49.429009 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls podName:2b71f537-1cc2-4645-8e50-23941635457c nodeName:}" failed. No retries permitted until 2026-03-12 20:49:50.429003475 +0000 UTC m=+2.914272287 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls") pod "ingress-operator-677db989d6-qpf68" (UID: "2b71f537-1cc2-4645-8e50-23941635457c") : secret "metrics-tls" not found Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: E0312 20:49:49.429156 7484 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: E0312 20:49:49.429183 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs podName:f8f4400c-474c-480f-b46c-cf7c80555004 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:50.429173729 +0000 UTC m=+2.914442631 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs") pod "multus-admission-controller-8d675b596-98j9w" (UID: "f8f4400c-474c-480f-b46c-cf7c80555004") : secret "multus-admission-controller-secret" not found Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: E0312 20:49:49.429226 7484 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: E0312 20:49:49.429247 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert podName:1a307172-f010-4bad-a3fc-31607574b069 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:50.42924026 +0000 UTC m=+2.914509062 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert") pod "cluster-version-operator-745944c6b7-wddgl" (UID: "1a307172-f010-4bad-a3fc-31607574b069") : secret "cluster-version-operator-serving-cert" not found
Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: E0312 20:49:49.429283 7484 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: E0312 20:49:49.429302 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert podName:07330030-487d-4fa6-b5c3-67607355bbba nodeName:}" failed. No retries permitted until 2026-03-12 20:49:50.429296072 +0000 UTC m=+2.914564874 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert") pod "olm-operator-d64cfc9db-q9hnk" (UID: "07330030-487d-4fa6-b5c3-67607355bbba") : secret "olm-operator-serving-cert" not found
Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: E0312 20:49:49.429433 7484 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 12 20:49:49.432655 master-0 kubenswrapper[7484]: E0312 20:49:49.429464 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls podName:900228dd-2d21-4759-87da-b027b0134ad8 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:50.429454896 +0000 UTC m=+2.914723698 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-hmtz5" (UID: "900228dd-2d21-4759-87da-b027b0134ad8") : secret "image-registry-operator-tls" not found
Mar 12 20:49:49.458838 master-0 kubenswrapper[7484]: I0312 20:49:49.454397 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx64q\" (UniqueName: \"kubernetes.io/projected/d862a346-ec4d-46f6-a3e2-ea8759ea0111-kube-api-access-jx64q\") pod \"ovnkube-control-plane-66b55d57d-vq95t\" (UID: \"d862a346-ec4d-46f6-a3e2-ea8759ea0111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t"
Mar 12 20:49:49.473432 master-0 kubenswrapper[7484]: I0312 20:49:49.473377 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rjm8\" (UniqueName: \"kubernetes.io/projected/426efd5c-69e1-43e5-835a-6e1c4ef85720-kube-api-access-8rjm8\") pod \"network-node-identity-48hk7\" (UID: \"426efd5c-69e1-43e5-835a-6e1c4ef85720\") " pod="openshift-network-node-identity/network-node-identity-48hk7"
Mar 12 20:49:49.484742 master-0 kubenswrapper[7484]: I0312 20:49:49.484688 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clp9l\" (UniqueName: \"kubernetes.io/projected/2604b035-853c-42b7-a562-07d46178868a-kube-api-access-clp9l\") pod \"csi-snapshot-controller-operator-5685fbc7d-kf949\" (UID: \"2604b035-853c-42b7-a562-07d46178868a\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-kf949"
Mar 12 20:49:49.517459 master-0 kubenswrapper[7484]: E0312 20:49:49.517398 7484 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 20:49:49.536103 master-0 kubenswrapper[7484]: W0312 20:49:49.536059 7484 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Mar 12 20:49:49.536278 master-0 kubenswrapper[7484]: E0312 20:49:49.536125 7484 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0"
Mar 12 20:49:49.558479 master-0 kubenswrapper[7484]: E0312 20:49:49.558324 7484 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460"
Mar 12 20:49:49.558749 master-0 kubenswrapper[7484]: E0312 20:49:49.558659 7484 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f2r2r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-krpjj_openshift-network-operator(617f0f9c-50d5-4214-b30f-5110fd4399ec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 12 20:49:49.560070 master-0 kubenswrapper[7484]: E0312 20:49:49.559958 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-network-operator/iptables-alerter-krpjj" podUID="617f0f9c-50d5-4214-b30f-5110fd4399ec"
Mar 12 20:49:49.565932 master-0 kubenswrapper[7484]: E0312 20:49:49.564885 7484 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 20:49:49.577849 master-0 kubenswrapper[7484]: E0312 20:49:49.577765 7484 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 20:49:49.598514 master-0 kubenswrapper[7484]: E0312 20:49:49.598466 7484 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 12 20:49:49.630981 master-0 kubenswrapper[7484]: I0312 20:49:49.630941 7484 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Mar 12 20:49:49.643831 master-0 kubenswrapper[7484]: I0312 20:49:49.643779 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csxwl\" (UniqueName: \"kubernetes.io/projected/5ad63582-bd60-41a1-9622-ee73ccf8a5e8-kube-api-access-csxwl\") pod \"network-check-target-h26wj\" (UID: \"5ad63582-bd60-41a1-9622-ee73ccf8a5e8\") " pod="openshift-network-diagnostics/network-check-target-h26wj"
Mar 12 20:49:49.784183 master-0 kubenswrapper[7484]: I0312 20:49:49.783773 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 20:49:49.817769 master-0 kubenswrapper[7484]: I0312 20:49:49.817735 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 20:49:49.862049 master-0 kubenswrapper[7484]: I0312 20:49:49.859542 7484 generic.go:334] "Generic (PLEG): container finished" podID="980191fe-c62c-4b9e-879c-38fa8ce0a58b" containerID="accc03035ed32e15e8d41d3c28ac222345b1487c05148782dfac6e42d8ef00ab" exitCode=0
Mar 12 20:49:49.862049 master-0 kubenswrapper[7484]: I0312 20:49:49.859612 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" event={"ID":"980191fe-c62c-4b9e-879c-38fa8ce0a58b","Type":"ContainerDied","Data":"accc03035ed32e15e8d41d3c28ac222345b1487c05148782dfac6e42d8ef00ab"}
Mar 12 20:49:49.902210 master-0 kubenswrapper[7484]: I0312 20:49:49.902157 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-h26wj"
Mar 12 20:49:50.204723 master-0 kubenswrapper[7484]: I0312 20:49:50.200895 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-h26wj"]
Mar 12 20:49:50.438169 master-0 kubenswrapper[7484]: I0312 20:49:50.437789 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert\") pod \"olm-operator-d64cfc9db-q9hnk\" (UID: \"07330030-487d-4fa6-b5c3-67607355bbba\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk"
Mar 12 20:49:50.438448 master-0 kubenswrapper[7484]: I0312 20:49:50.438432 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl"
Mar 12 20:49:50.438566 master-0 kubenswrapper[7484]: E0312 20:49:50.438125 7484 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 12 20:49:50.438619 master-0 kubenswrapper[7484]: I0312 20:49:50.438531 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs\") pod \"multus-admission-controller-8d675b596-98j9w\" (UID: \"f8f4400c-474c-480f-b46c-cf7c80555004\") " pod="openshift-multus/multus-admission-controller-8d675b596-98j9w"
Mar 12 20:49:50.438656 master-0 kubenswrapper[7484]: E0312 20:49:50.438510 7484 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 12 20:49:50.438695 master-0 kubenswrapper[7484]: E0312 20:49:50.438653 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert podName:07330030-487d-4fa6-b5c3-67607355bbba nodeName:}" failed. No retries permitted until 2026-03-12 20:49:52.438610467 +0000 UTC m=+4.923879269 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert") pod "olm-operator-d64cfc9db-q9hnk" (UID: "07330030-487d-4fa6-b5c3-67607355bbba") : secret "olm-operator-serving-cert" not found
Mar 12 20:49:50.438735 master-0 kubenswrapper[7484]: E0312 20:49:50.438695 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert podName:1a307172-f010-4bad-a3fc-31607574b069 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:52.438671978 +0000 UTC m=+4.923940780 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert") pod "cluster-version-operator-745944c6b7-wddgl" (UID: "1a307172-f010-4bad-a3fc-31607574b069") : secret "cluster-version-operator-serving-cert" not found
Mar 12 20:49:50.438909 master-0 kubenswrapper[7484]: E0312 20:49:50.438889 7484 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 12 20:49:50.439171 master-0 kubenswrapper[7484]: E0312 20:49:50.439158 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs podName:f8f4400c-474c-480f-b46c-cf7c80555004 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:52.43914145 +0000 UTC m=+4.924410252 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs") pod "multus-admission-controller-8d675b596-98j9w" (UID: "f8f4400c-474c-480f-b46c-cf7c80555004") : secret "multus-admission-controller-secret" not found
Mar 12 20:49:50.439264 master-0 kubenswrapper[7484]: E0312 20:49:50.439095 7484 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 12 20:49:50.439354 master-0 kubenswrapper[7484]: E0312 20:49:50.439344 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls podName:981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:52.439335235 +0000 UTC m=+4.924604037 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-69rp9" (UID: "981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9") : secret "node-tuning-operator-tls" not found
Mar 12 20:49:50.439413 master-0 kubenswrapper[7484]: I0312 20:49:50.439026 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9"
Mar 12 20:49:50.439500 master-0 kubenswrapper[7484]: I0312 20:49:50.439487 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9"
Mar 12 20:49:50.439589 master-0 kubenswrapper[7484]: I0312 20:49:50.439577 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls\") pod \"dns-operator-589895fbb7-tvrxp\" (UID: \"855747e5-d9b4-4eef-8bc4-425d6a8e95c7\") " pod="openshift-dns-operator/dns-operator-589895fbb7-tvrxp"
Mar 12 20:49:50.439673 master-0 kubenswrapper[7484]: E0312 20:49:50.439641 7484 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 12 20:49:50.439718 master-0 kubenswrapper[7484]: E0312 20:49:50.439692 7484 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 12 20:49:50.439783 master-0 kubenswrapper[7484]: I0312 20:49:50.439769 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-hxqgw\" (UID: \"e624e623-6d59-444d-b548-165fa5fd2581\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw"
Mar 12 20:49:50.439941 master-0 kubenswrapper[7484]: E0312 20:49:50.439854 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert podName:981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:52.439841548 +0000 UTC m=+4.925110350 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-69rp9" (UID: "981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9") : secret "performance-addon-operator-webhook-cert" not found
Mar 12 20:49:50.440024 master-0 kubenswrapper[7484]: E0312 20:49:50.440014 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls podName:855747e5-d9b4-4eef-8bc4-425d6a8e95c7 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:52.440002321 +0000 UTC m=+4.925271123 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls") pod "dns-operator-589895fbb7-tvrxp" (UID: "855747e5-d9b4-4eef-8bc4-425d6a8e95c7") : secret "metrics-tls" not found
Mar 12 20:49:50.440127 master-0 kubenswrapper[7484]: E0312 20:49:50.439906 7484 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 12 20:49:50.440193 master-0 kubenswrapper[7484]: E0312 20:49:50.440172 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics podName:e624e623-6d59-444d-b548-165fa5fd2581 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:52.440157725 +0000 UTC m=+4.925426517 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-hxqgw" (UID: "e624e623-6d59-444d-b548-165fa5fd2581") : secret "marketplace-operator-metrics" not found
Mar 12 20:49:50.440262 master-0 kubenswrapper[7484]: I0312 20:49:50.440113 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-cdcc8\" (UID: \"54184647-6e9a-43f7-90b1-5d8815f8b1ab\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8"
Mar 12 20:49:50.440374 master-0 kubenswrapper[7484]: I0312 20:49:50.440360 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs\") pod \"network-metrics-daemon-brdcd\" (UID: \"c8660437-633f-4132-8a61-fe998abb493e\") " pod="openshift-multus/network-metrics-daemon-brdcd"
Mar 12 20:49:50.440466 master-0 kubenswrapper[7484]: I0312 20:49:50.440455 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert\") pod \"catalog-operator-7d9c49f57b-tpvl4\" (UID: \"98d99166-c42a-4169-87e8-4209570aec50\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4"
Mar 12 20:49:50.440548 master-0 kubenswrapper[7484]: I0312 20:49:50.440536 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-j9tpt\" (UID: \"02649264-040a-41a6-9a41-8bf6416c68ff\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt"
Mar 12 20:49:50.440632 master-0 kubenswrapper[7484]: I0312 20:49:50.440621 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68"
Mar 12 20:49:50.440719 master-0 kubenswrapper[7484]: I0312 20:49:50.440707 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5"
Mar 12 20:49:50.440853 master-0 kubenswrapper[7484]: E0312 20:49:50.440309 7484 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 12 20:49:50.440948 master-0 kubenswrapper[7484]: E0312 20:49:50.440938 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert podName:54184647-6e9a-43f7-90b1-5d8815f8b1ab nodeName:}" failed. No retries permitted until 2026-03-12 20:49:52.440928725 +0000 UTC m=+4.926197527 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-cdcc8" (UID: "54184647-6e9a-43f7-90b1-5d8815f8b1ab") : secret "package-server-manager-serving-cert" not found
Mar 12 20:49:50.441114 master-0 kubenswrapper[7484]: E0312 20:49:50.440461 7484 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 12 20:49:50.441218 master-0 kubenswrapper[7484]: E0312 20:49:50.441198 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs podName:c8660437-633f-4132-8a61-fe998abb493e nodeName:}" failed. No retries permitted until 2026-03-12 20:49:52.441189291 +0000 UTC m=+4.926458093 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs") pod "network-metrics-daemon-brdcd" (UID: "c8660437-633f-4132-8a61-fe998abb493e") : secret "metrics-daemon-secret" not found
Mar 12 20:49:50.441296 master-0 kubenswrapper[7484]: E0312 20:49:50.440514 7484 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 12 20:49:50.441376 master-0 kubenswrapper[7484]: E0312 20:49:50.441366 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert podName:98d99166-c42a-4169-87e8-4209570aec50 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:52.441358255 +0000 UTC m=+4.926627057 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert") pod "catalog-operator-7d9c49f57b-tpvl4" (UID: "98d99166-c42a-4169-87e8-4209570aec50") : secret "catalog-operator-serving-cert" not found
Mar 12 20:49:50.441456 master-0 kubenswrapper[7484]: E0312 20:49:50.440632 7484 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 12 20:49:50.441547 master-0 kubenswrapper[7484]: E0312 20:49:50.441538 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls podName:02649264-040a-41a6-9a41-8bf6416c68ff nodeName:}" failed. No retries permitted until 2026-03-12 20:49:52.441530509 +0000 UTC m=+4.926799311 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-j9tpt" (UID: "02649264-040a-41a6-9a41-8bf6416c68ff") : secret "cluster-monitoring-operator-tls" not found
Mar 12 20:49:50.441649 master-0 kubenswrapper[7484]: E0312 20:49:50.440701 7484 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 12 20:49:50.441732 master-0 kubenswrapper[7484]: E0312 20:49:50.441722 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls podName:2b71f537-1cc2-4645-8e50-23941635457c nodeName:}" failed. No retries permitted until 2026-03-12 20:49:52.441715114 +0000 UTC m=+4.926983916 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls") pod "ingress-operator-677db989d6-qpf68" (UID: "2b71f537-1cc2-4645-8e50-23941635457c") : secret "metrics-tls" not found
Mar 12 20:49:50.443500 master-0 kubenswrapper[7484]: E0312 20:49:50.440750 7484 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 12 20:49:50.443500 master-0 kubenswrapper[7484]: E0312 20:49:50.443105 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls podName:900228dd-2d21-4759-87da-b027b0134ad8 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:52.443071268 +0000 UTC m=+4.928340080 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-hmtz5" (UID: "900228dd-2d21-4759-87da-b027b0134ad8") : secret "image-registry-operator-tls" not found
Mar 12 20:49:50.794833 master-0 kubenswrapper[7484]: I0312 20:49:50.792985 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-57ccdf9b5-jd4pv"]
Mar 12 20:49:50.794833 master-0 kubenswrapper[7484]: E0312 20:49:50.793133 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4730d5f8-ab17-4ba2-ae27-d2de62821372" containerName="prober"
Mar 12 20:49:50.794833 master-0 kubenswrapper[7484]: I0312 20:49:50.793143 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="4730d5f8-ab17-4ba2-ae27-d2de62821372" containerName="prober"
Mar 12 20:49:50.794833 master-0 kubenswrapper[7484]: E0312 20:49:50.793152 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d87b7a20-047e-4521-996c-9b11d81e9bd0" containerName="assisted-installer-controller"
Mar 12 20:49:50.794833 master-0 kubenswrapper[7484]: I0312 20:49:50.793158 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="d87b7a20-047e-4521-996c-9b11d81e9bd0" containerName="assisted-installer-controller"
Mar 12 20:49:50.794833 master-0 kubenswrapper[7484]: I0312 20:49:50.793209 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="4730d5f8-ab17-4ba2-ae27-d2de62821372" containerName="prober"
Mar 12 20:49:50.794833 master-0 kubenswrapper[7484]: I0312 20:49:50.793221 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="d87b7a20-047e-4521-996c-9b11d81e9bd0" containerName="assisted-installer-controller"
Mar 12 20:49:50.794833 master-0 kubenswrapper[7484]: I0312 20:49:50.793534 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-jd4pv"
Mar 12 20:49:50.814337 master-0 kubenswrapper[7484]: I0312 20:49:50.811928 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Mar 12 20:49:50.814337 master-0 kubenswrapper[7484]: I0312 20:49:50.812906 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-57ccdf9b5-jd4pv"]
Mar 12 20:49:50.828128 master-0 kubenswrapper[7484]: I0312 20:49:50.826988 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:50.834891 master-0 kubenswrapper[7484]: I0312 20:49:50.834798 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Mar 12 20:49:50.849947 master-0 kubenswrapper[7484]: I0312 20:49:50.848170 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg2ph\" (UniqueName: \"kubernetes.io/projected/da40e787-dd75-4f4f-b09e-a8dab590f260-kube-api-access-xg2ph\") pod \"migrator-57ccdf9b5-jd4pv\" (UID: \"da40e787-dd75-4f4f-b09e-a8dab590f260\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-jd4pv"
Mar 12 20:49:50.857177 master-0 kubenswrapper[7484]: I0312 20:49:50.856946 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 20:49:50.894388 master-0 kubenswrapper[7484]: I0312 20:49:50.887715 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-kf949" event={"ID":"2604b035-853c-42b7-a562-07d46178868a","Type":"ContainerStarted","Data":"6afc544c34ddbc5e6039dbdbeff607333e002100669f75e0bf5ff219b035f729"}
Mar 12 20:49:50.896254 master-0 kubenswrapper[7484]: I0312 20:49:50.896220 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-f2kg4" event={"ID":"96bd86df-2101-47f5-844b-1332261c66f1","Type":"ContainerStarted","Data":"e6ccd74a2af6fdce722a0e3dca22b3f124868515fcf641e0b36f66e322f8d4c3"}
Mar 12 20:49:50.913052 master-0 kubenswrapper[7484]: I0312 20:49:50.913006 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-jwthf" event={"ID":"15ebfbd8-0782-431a-88a3-83af328498d2","Type":"ContainerStarted","Data":"2e532f48874103782c7daee8f162358860ddd2173af37648f345faae82db17a2"}
Mar 12 20:49:50.919634 master-0 kubenswrapper[7484]: I0312 20:49:50.919596 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-f62j6" event={"ID":"a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d","Type":"ContainerStarted","Data":"a33a2903577092cf3a1f9c908ef309b6542edd2a9918f17c9c5bfb3802991a1e"}
Mar 12 20:49:50.926857 master-0 kubenswrapper[7484]: I0312 20:49:50.926051 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" event={"ID":"a3bebf49-1d92-4353-b84c-91ed86b7bb94","Type":"ContainerStarted","Data":"4f12cf8d8d8d0087f11b9de5f5568886404da4081c2e2727f07a95ca8191d1c6"}
Mar 12 20:49:50.927984 master-0 kubenswrapper[7484]: I0312 20:49:50.927914 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" event={"ID":"5471994f-769e-4124-b7d0-01f5358fc18f","Type":"ContainerStarted","Data":"7ca674391c532a062d85de3aad380be9933e23e79819377498f98ef87ee56f1c"}
Mar 12 20:49:50.945545 master-0 kubenswrapper[7484]: I0312 20:49:50.945466 7484 generic.go:334] "Generic (PLEG): container finished" podID="226cb3a1-984f-4410-96e6-c007131dc074" containerID="e46a8739f5b993539e6b61f8310bba6f93754f47cc10fbeca3d3b7bb6aa5cf59" exitCode=0
Mar 12 20:49:50.945937 master-0 kubenswrapper[7484]: I0312 20:49:50.945861 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh" event={"ID":"226cb3a1-984f-4410-96e6-c007131dc074","Type":"ContainerDied","Data":"e46a8739f5b993539e6b61f8310bba6f93754f47cc10fbeca3d3b7bb6aa5cf59"}
Mar 12 20:49:50.950943 master-0 kubenswrapper[7484]: I0312 20:49:50.950905 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg2ph\" (UniqueName: \"kubernetes.io/projected/da40e787-dd75-4f4f-b09e-a8dab590f260-kube-api-access-xg2ph\") pod \"migrator-57ccdf9b5-jd4pv\" (UID: \"da40e787-dd75-4f4f-b09e-a8dab590f260\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-jd4pv"
Mar 12 20:49:50.959886 master-0 kubenswrapper[7484]: I0312 20:49:50.956907 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-269gt" event={"ID":"4a67ecf3-823d-4948-a5cb-8bd1eb9f259c","Type":"ContainerStarted","Data":"e0a2c06e46bef70f1a83d73f16311ff0724aeeddd6bc3dab0e6a4952ddc0acb3"}
Mar 12 20:49:50.966290 master-0 kubenswrapper[7484]: I0312 20:49:50.966234 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-qfbrj" event={"ID":"07542516-49c8-4e20-9b97-798fbff850a5","Type":"ContainerStarted","Data":"31932c207919d9fa7ba649bcc3b67b43788d2b23969a14459b9233c510ac6567"}
Mar 12 20:49:50.972965 master-0 kubenswrapper[7484]: I0312 20:49:50.972926 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-vp2hs" event={"ID":"7623a5c6-47a9-4b75-bda8-c0a2d7c67272","Type":"ContainerStarted","Data":"0baf639c5d46bafa134b35ec6bda1e04194915bf6f2fc74defffc294b859ab5d"}
Mar 12 20:49:50.980796 master-0 kubenswrapper[7484]: I0312 20:49:50.980765 7484 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 20:49:50.980991 master-0 kubenswrapper[7484]: I0312 20:49:50.980981 7484 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 20:49:50.981986 master-0 kubenswrapper[7484]: I0312 20:49:50.981966 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-h26wj" event={"ID":"5ad63582-bd60-41a1-9622-ee73ccf8a5e8","Type":"ContainerStarted","Data":"5f57fbb9626c7bfac9770852707fd4ad88d29729b73911befab731e82c4f312d"}
Mar 12 20:49:50.982077 master-0 kubenswrapper[7484]: I0312 20:49:50.982066 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-h26wj" event={"ID":"5ad63582-bd60-41a1-9622-ee73ccf8a5e8","Type":"ContainerStarted","Data":"85f9c6fdf5bd5b95a4e9ca273a39f24bdd11f231f86bdf7cf1f6b3ef19542031"}
Mar 12 20:49:50.982188 master-0 kubenswrapper[7484]: I0312 20:49:50.982175 7484 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 20:49:50.999645 master-0 kubenswrapper[7484]: I0312 20:49:50.999357 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg2ph\" (UniqueName: \"kubernetes.io/projected/da40e787-dd75-4f4f-b09e-a8dab590f260-kube-api-access-xg2ph\") pod \"migrator-57ccdf9b5-jd4pv\" (UID: \"da40e787-dd75-4f4f-b09e-a8dab590f260\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-jd4pv"
Mar 12 20:49:51.120091 master-0 kubenswrapper[7484]: I0312 20:49:51.119950 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-jd4pv"
Mar 12 20:49:51.431829 master-0 kubenswrapper[7484]: I0312 20:49:51.431747 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w"]
Mar 12 20:49:51.433496 master-0 kubenswrapper[7484]: I0312 20:49:51.433474 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w"
Mar 12 20:49:51.449898 master-0 kubenswrapper[7484]: I0312 20:49:51.447640 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w"]
Mar 12 20:49:51.496340 master-0 kubenswrapper[7484]: I0312 20:49:51.496298 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-57ccdf9b5-jd4pv"]
Mar 12 20:49:51.511104 master-0 kubenswrapper[7484]: I0312 20:49:51.511064 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 20:49:51.524912 master-0 kubenswrapper[7484]: I0312 20:49:51.523557 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 12 20:49:51.558482 master-0 kubenswrapper[7484]: I0312 20:49:51.558422 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfspc\" (UniqueName: \"kubernetes.io/projected/d4a162d4-8086-4bcf-854d-7e6cd37fd4c7-kube-api-access-mfspc\") pod \"csi-snapshot-controller-7577d6f48-8fk8w\" (UID: \"d4a162d4-8086-4bcf-854d-7e6cd37fd4c7\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w"
Mar 12 20:49:51.660704 master-0 kubenswrapper[7484]: I0312 20:49:51.660233 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfspc\" (UniqueName: \"kubernetes.io/projected/d4a162d4-8086-4bcf-854d-7e6cd37fd4c7-kube-api-access-mfspc\") pod \"csi-snapshot-controller-7577d6f48-8fk8w\" (UID: \"d4a162d4-8086-4bcf-854d-7e6cd37fd4c7\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w"
Mar 12 20:49:51.682562 master-0 kubenswrapper[7484]: I0312 20:49:51.682450 7484 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"kube-api-access-mfspc\" (UniqueName: \"kubernetes.io/projected/d4a162d4-8086-4bcf-854d-7e6cd37fd4c7-kube-api-access-mfspc\") pod \"csi-snapshot-controller-7577d6f48-8fk8w\" (UID: \"d4a162d4-8086-4bcf-854d-7e6cd37fd4c7\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w" Mar 12 20:49:51.786674 master-0 kubenswrapper[7484]: I0312 20:49:51.786245 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w" Mar 12 20:49:51.966352 master-0 kubenswrapper[7484]: I0312 20:49:51.966234 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w"] Mar 12 20:49:51.996837 master-0 kubenswrapper[7484]: I0312 20:49:51.996583 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w" event={"ID":"d4a162d4-8086-4bcf-854d-7e6cd37fd4c7","Type":"ContainerStarted","Data":"fafb7230532430a0db8a7bc3a9035465334c92f98efee0c32c29c3f4d6ecbcfd"} Mar 12 20:49:51.998700 master-0 kubenswrapper[7484]: I0312 20:49:51.997945 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-jd4pv" event={"ID":"da40e787-dd75-4f4f-b09e-a8dab590f260","Type":"ContainerStarted","Data":"334e8afc68a931f6350a0d282fa03b4333bfc31875bef1101770c4d5b423d760"} Mar 12 20:49:51.998700 master-0 kubenswrapper[7484]: I0312 20:49:51.998067 7484 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 20:49:52.031265 master-0 kubenswrapper[7484]: I0312 20:49:52.031228 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-bpd4m"] Mar 12 20:49:52.033388 master-0 kubenswrapper[7484]: I0312 20:49:52.033361 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f7fd6c796-bpd4m" Mar 12 20:49:52.037541 master-0 kubenswrapper[7484]: I0312 20:49:52.037495 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 20:49:52.037944 master-0 kubenswrapper[7484]: I0312 20:49:52.037919 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 20:49:52.038246 master-0 kubenswrapper[7484]: I0312 20:49:52.038025 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 20:49:52.038246 master-0 kubenswrapper[7484]: I0312 20:49:52.038166 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 20:49:52.038616 master-0 kubenswrapper[7484]: I0312 20:49:52.038602 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 20:49:52.039398 master-0 kubenswrapper[7484]: I0312 20:49:52.039373 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 20:49:52.042009 master-0 kubenswrapper[7484]: I0312 20:49:52.041987 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-bpd4m"] Mar 12 20:49:52.199311 master-0 kubenswrapper[7484]: I0312 20:49:52.199058 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c56c540-5751-4ca6-b774-16e573950844-client-ca\") pod \"controller-manager-6f7fd6c796-bpd4m\" (UID: \"2c56c540-5751-4ca6-b774-16e573950844\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-bpd4m" Mar 12 20:49:52.199311 master-0 kubenswrapper[7484]: I0312 20:49:52.199150 7484 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cct7t\" (UniqueName: \"kubernetes.io/projected/2c56c540-5751-4ca6-b774-16e573950844-kube-api-access-cct7t\") pod \"controller-manager-6f7fd6c796-bpd4m\" (UID: \"2c56c540-5751-4ca6-b774-16e573950844\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-bpd4m" Mar 12 20:49:52.199311 master-0 kubenswrapper[7484]: I0312 20:49:52.199178 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c56c540-5751-4ca6-b774-16e573950844-serving-cert\") pod \"controller-manager-6f7fd6c796-bpd4m\" (UID: \"2c56c540-5751-4ca6-b774-16e573950844\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-bpd4m" Mar 12 20:49:52.199311 master-0 kubenswrapper[7484]: I0312 20:49:52.199197 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c56c540-5751-4ca6-b774-16e573950844-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-bpd4m\" (UID: \"2c56c540-5751-4ca6-b774-16e573950844\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-bpd4m" Mar 12 20:49:52.199311 master-0 kubenswrapper[7484]: I0312 20:49:52.199227 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c56c540-5751-4ca6-b774-16e573950844-config\") pod \"controller-manager-6f7fd6c796-bpd4m\" (UID: \"2c56c540-5751-4ca6-b774-16e573950844\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-bpd4m" Mar 12 20:49:52.300217 master-0 kubenswrapper[7484]: I0312 20:49:52.300042 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c56c540-5751-4ca6-b774-16e573950844-config\") pod 
\"controller-manager-6f7fd6c796-bpd4m\" (UID: \"2c56c540-5751-4ca6-b774-16e573950844\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-bpd4m" Mar 12 20:49:52.300217 master-0 kubenswrapper[7484]: I0312 20:49:52.300139 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c56c540-5751-4ca6-b774-16e573950844-client-ca\") pod \"controller-manager-6f7fd6c796-bpd4m\" (UID: \"2c56c540-5751-4ca6-b774-16e573950844\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-bpd4m" Mar 12 20:49:52.300217 master-0 kubenswrapper[7484]: I0312 20:49:52.300233 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cct7t\" (UniqueName: \"kubernetes.io/projected/2c56c540-5751-4ca6-b774-16e573950844-kube-api-access-cct7t\") pod \"controller-manager-6f7fd6c796-bpd4m\" (UID: \"2c56c540-5751-4ca6-b774-16e573950844\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-bpd4m" Mar 12 20:49:52.300745 master-0 kubenswrapper[7484]: I0312 20:49:52.300273 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c56c540-5751-4ca6-b774-16e573950844-serving-cert\") pod \"controller-manager-6f7fd6c796-bpd4m\" (UID: \"2c56c540-5751-4ca6-b774-16e573950844\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-bpd4m" Mar 12 20:49:52.300745 master-0 kubenswrapper[7484]: I0312 20:49:52.300297 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c56c540-5751-4ca6-b774-16e573950844-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-bpd4m\" (UID: \"2c56c540-5751-4ca6-b774-16e573950844\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-bpd4m" Mar 12 20:49:52.300745 master-0 kubenswrapper[7484]: E0312 20:49:52.300399 7484 configmap.go:193] 
Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Mar 12 20:49:52.300745 master-0 kubenswrapper[7484]: E0312 20:49:52.300466 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2c56c540-5751-4ca6-b774-16e573950844-proxy-ca-bundles podName:2c56c540-5751-4ca6-b774-16e573950844 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:52.800446929 +0000 UTC m=+5.285715751 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/2c56c540-5751-4ca6-b774-16e573950844-proxy-ca-bundles") pod "controller-manager-6f7fd6c796-bpd4m" (UID: "2c56c540-5751-4ca6-b774-16e573950844") : configmap "openshift-global-ca" not found Mar 12 20:49:52.303470 master-0 kubenswrapper[7484]: E0312 20:49:52.303401 7484 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Mar 12 20:49:52.303579 master-0 kubenswrapper[7484]: E0312 20:49:52.303499 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2c56c540-5751-4ca6-b774-16e573950844-config podName:2c56c540-5751-4ca6-b774-16e573950844 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:52.803477684 +0000 UTC m=+5.288746506 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/2c56c540-5751-4ca6-b774-16e573950844-config") pod "controller-manager-6f7fd6c796-bpd4m" (UID: "2c56c540-5751-4ca6-b774-16e573950844") : configmap "config" not found Mar 12 20:49:52.305470 master-0 kubenswrapper[7484]: E0312 20:49:52.305410 7484 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 12 20:49:52.305470 master-0 kubenswrapper[7484]: E0312 20:49:52.305469 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2c56c540-5751-4ca6-b774-16e573950844-client-ca podName:2c56c540-5751-4ca6-b774-16e573950844 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:52.805453163 +0000 UTC m=+5.290721985 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/2c56c540-5751-4ca6-b774-16e573950844-client-ca") pod "controller-manager-6f7fd6c796-bpd4m" (UID: "2c56c540-5751-4ca6-b774-16e573950844") : configmap "client-ca" not found Mar 12 20:49:52.305652 master-0 kubenswrapper[7484]: E0312 20:49:52.305538 7484 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 12 20:49:52.305897 master-0 kubenswrapper[7484]: E0312 20:49:52.305867 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c56c540-5751-4ca6-b774-16e573950844-serving-cert podName:2c56c540-5751-4ca6-b774-16e573950844 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:52.805561185 +0000 UTC m=+5.290829997 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2c56c540-5751-4ca6-b774-16e573950844-serving-cert") pod "controller-manager-6f7fd6c796-bpd4m" (UID: "2c56c540-5751-4ca6-b774-16e573950844") : secret "serving-cert" not found Mar 12 20:49:52.336266 master-0 kubenswrapper[7484]: I0312 20:49:52.336211 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cct7t\" (UniqueName: \"kubernetes.io/projected/2c56c540-5751-4ca6-b774-16e573950844-kube-api-access-cct7t\") pod \"controller-manager-6f7fd6c796-bpd4m\" (UID: \"2c56c540-5751-4ca6-b774-16e573950844\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-bpd4m" Mar 12 20:49:52.502983 master-0 kubenswrapper[7484]: I0312 20:49:52.502361 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5" Mar 12 20:49:52.502983 master-0 kubenswrapper[7484]: I0312 20:49:52.502449 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs\") pod \"multus-admission-controller-8d675b596-98j9w\" (UID: \"f8f4400c-474c-480f-b46c-cf7c80555004\") " pod="openshift-multus/multus-admission-controller-8d675b596-98j9w" Mar 12 20:49:52.502983 master-0 kubenswrapper[7484]: I0312 20:49:52.502478 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert\") pod \"olm-operator-d64cfc9db-q9hnk\" (UID: \"07330030-487d-4fa6-b5c3-67607355bbba\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk" Mar 12 20:49:52.502983 master-0 kubenswrapper[7484]: I0312 20:49:52.502505 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl" Mar 12 20:49:52.502983 master-0 kubenswrapper[7484]: I0312 20:49:52.502533 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" Mar 12 20:49:52.502983 master-0 kubenswrapper[7484]: I0312 20:49:52.502559 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" Mar 12 20:49:52.502983 master-0 kubenswrapper[7484]: I0312 20:49:52.502585 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls\") pod \"dns-operator-589895fbb7-tvrxp\" (UID: \"855747e5-d9b4-4eef-8bc4-425d6a8e95c7\") " pod="openshift-dns-operator/dns-operator-589895fbb7-tvrxp" Mar 12 20:49:52.502983 master-0 kubenswrapper[7484]: I0312 20:49:52.502639 7484 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-hxqgw\" (UID: \"e624e623-6d59-444d-b548-165fa5fd2581\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" Mar 12 20:49:52.502983 master-0 kubenswrapper[7484]: I0312 20:49:52.502664 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-cdcc8\" (UID: \"54184647-6e9a-43f7-90b1-5d8815f8b1ab\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8" Mar 12 20:49:52.502983 master-0 kubenswrapper[7484]: I0312 20:49:52.502703 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs\") pod \"network-metrics-daemon-brdcd\" (UID: \"c8660437-633f-4132-8a61-fe998abb493e\") " pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 20:49:52.502983 master-0 kubenswrapper[7484]: I0312 20:49:52.502726 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert\") pod \"catalog-operator-7d9c49f57b-tpvl4\" (UID: \"98d99166-c42a-4169-87e8-4209570aec50\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4" Mar 12 20:49:52.502983 master-0 kubenswrapper[7484]: I0312 20:49:52.502752 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-j9tpt\" (UID: 
\"02649264-040a-41a6-9a41-8bf6416c68ff\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt" Mar 12 20:49:52.502983 master-0 kubenswrapper[7484]: I0312 20:49:52.502870 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" Mar 12 20:49:52.503565 master-0 kubenswrapper[7484]: E0312 20:49:52.503020 7484 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 12 20:49:52.503565 master-0 kubenswrapper[7484]: E0312 20:49:52.503034 7484 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 12 20:49:52.503565 master-0 kubenswrapper[7484]: E0312 20:49:52.503081 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls podName:2b71f537-1cc2-4645-8e50-23941635457c nodeName:}" failed. No retries permitted until 2026-03-12 20:49:56.503062566 +0000 UTC m=+8.988331368 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls") pod "ingress-operator-677db989d6-qpf68" (UID: "2b71f537-1cc2-4645-8e50-23941635457c") : secret "metrics-tls" not found Mar 12 20:49:52.503565 master-0 kubenswrapper[7484]: E0312 20:49:52.503098 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert podName:981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9 nodeName:}" failed. 
No retries permitted until 2026-03-12 20:49:56.503091417 +0000 UTC m=+8.988360219 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-69rp9" (UID: "981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9") : secret "performance-addon-operator-webhook-cert" not found Mar 12 20:49:52.503565 master-0 kubenswrapper[7484]: E0312 20:49:52.503159 7484 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 12 20:49:52.503565 master-0 kubenswrapper[7484]: E0312 20:49:52.503197 7484 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 12 20:49:52.503565 master-0 kubenswrapper[7484]: E0312 20:49:52.503203 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert podName:07330030-487d-4fa6-b5c3-67607355bbba nodeName:}" failed. No retries permitted until 2026-03-12 20:49:56.50318476 +0000 UTC m=+8.988453572 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert") pod "olm-operator-d64cfc9db-q9hnk" (UID: "07330030-487d-4fa6-b5c3-67607355bbba") : secret "olm-operator-serving-cert" not found Mar 12 20:49:52.503565 master-0 kubenswrapper[7484]: E0312 20:49:52.503160 7484 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 12 20:49:52.503565 master-0 kubenswrapper[7484]: E0312 20:49:52.503224 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs podName:f8f4400c-474c-480f-b46c-cf7c80555004 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:56.503215601 +0000 UTC m=+8.988484403 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs") pod "multus-admission-controller-8d675b596-98j9w" (UID: "f8f4400c-474c-480f-b46c-cf7c80555004") : secret "multus-admission-controller-secret" not found Mar 12 20:49:52.503565 master-0 kubenswrapper[7484]: E0312 20:49:52.503245 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert podName:98d99166-c42a-4169-87e8-4209570aec50 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:56.503230741 +0000 UTC m=+8.988499563 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert") pod "catalog-operator-7d9c49f57b-tpvl4" (UID: "98d99166-c42a-4169-87e8-4209570aec50") : secret "catalog-operator-serving-cert" not found Mar 12 20:49:52.503565 master-0 kubenswrapper[7484]: E0312 20:49:52.503246 7484 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 12 20:49:52.503565 master-0 kubenswrapper[7484]: E0312 20:49:52.503283 7484 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 12 20:49:52.503565 master-0 kubenswrapper[7484]: E0312 20:49:52.503349 7484 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 12 20:49:52.503565 master-0 kubenswrapper[7484]: E0312 20:49:52.503287 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert podName:54184647-6e9a-43f7-90b1-5d8815f8b1ab nodeName:}" failed. No retries permitted until 2026-03-12 20:49:56.503277482 +0000 UTC m=+8.988546294 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-cdcc8" (UID: "54184647-6e9a-43f7-90b1-5d8815f8b1ab") : secret "package-server-manager-serving-cert" not found Mar 12 20:49:52.503565 master-0 kubenswrapper[7484]: E0312 20:49:52.503401 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics podName:e624e623-6d59-444d-b548-165fa5fd2581 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:56.503373144 +0000 UTC m=+8.988642036 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-hxqgw" (UID: "e624e623-6d59-444d-b548-165fa5fd2581") : secret "marketplace-operator-metrics" not found Mar 12 20:49:52.503565 master-0 kubenswrapper[7484]: E0312 20:49:52.503423 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert podName:1a307172-f010-4bad-a3fc-31607574b069 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:56.503411255 +0000 UTC m=+8.988680067 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert") pod "cluster-version-operator-745944c6b7-wddgl" (UID: "1a307172-f010-4bad-a3fc-31607574b069") : secret "cluster-version-operator-serving-cert" not found Mar 12 20:49:52.503565 master-0 kubenswrapper[7484]: E0312 20:49:52.503485 7484 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 12 20:49:52.503565 master-0 kubenswrapper[7484]: E0312 20:49:52.503517 7484 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 12 20:49:52.503565 master-0 kubenswrapper[7484]: E0312 20:49:52.503529 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls podName:981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:56.503513868 +0000 UTC m=+8.988782790 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-69rp9" (UID: "981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9") : secret "node-tuning-operator-tls" not found Mar 12 20:49:52.503565 master-0 kubenswrapper[7484]: E0312 20:49:52.503558 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls podName:02649264-040a-41a6-9a41-8bf6416c68ff nodeName:}" failed. No retries permitted until 2026-03-12 20:49:56.503549029 +0000 UTC m=+8.988817841 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-j9tpt" (UID: "02649264-040a-41a6-9a41-8bf6416c68ff") : secret "cluster-monitoring-operator-tls" not found Mar 12 20:49:52.503565 master-0 kubenswrapper[7484]: E0312 20:49:52.503206 7484 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 12 20:49:52.504255 master-0 kubenswrapper[7484]: E0312 20:49:52.503592 7484 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 12 20:49:52.504255 master-0 kubenswrapper[7484]: E0312 20:49:52.503592 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls podName:900228dd-2d21-4759-87da-b027b0134ad8 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:56.503585109 +0000 UTC m=+8.988853931 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-hmtz5" (UID: "900228dd-2d21-4759-87da-b027b0134ad8") : secret "image-registry-operator-tls" not found Mar 12 20:49:52.504255 master-0 kubenswrapper[7484]: E0312 20:49:52.503647 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls podName:855747e5-d9b4-4eef-8bc4-425d6a8e95c7 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:56.503634311 +0000 UTC m=+8.988903233 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls") pod "dns-operator-589895fbb7-tvrxp" (UID: "855747e5-d9b4-4eef-8bc4-425d6a8e95c7") : secret "metrics-tls" not found Mar 12 20:49:52.504255 master-0 kubenswrapper[7484]: E0312 20:49:52.503892 7484 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 12 20:49:52.504255 master-0 kubenswrapper[7484]: E0312 20:49:52.503937 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs podName:c8660437-633f-4132-8a61-fe998abb493e nodeName:}" failed. No retries permitted until 2026-03-12 20:49:56.503925988 +0000 UTC m=+8.989194840 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs") pod "network-metrics-daemon-brdcd" (UID: "c8660437-633f-4132-8a61-fe998abb493e") : secret "metrics-daemon-secret" not found Mar 12 20:49:52.581186 master-0 kubenswrapper[7484]: I0312 20:49:52.581138 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-h26wj" Mar 12 20:49:52.807504 master-0 kubenswrapper[7484]: I0312 20:49:52.807377 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c56c540-5751-4ca6-b774-16e573950844-serving-cert\") pod \"controller-manager-6f7fd6c796-bpd4m\" (UID: \"2c56c540-5751-4ca6-b774-16e573950844\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-bpd4m" Mar 12 20:49:52.807504 master-0 kubenswrapper[7484]: I0312 20:49:52.807425 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/2c56c540-5751-4ca6-b774-16e573950844-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-bpd4m\" (UID: \"2c56c540-5751-4ca6-b774-16e573950844\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-bpd4m" Mar 12 20:49:52.807504 master-0 kubenswrapper[7484]: I0312 20:49:52.807463 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c56c540-5751-4ca6-b774-16e573950844-config\") pod \"controller-manager-6f7fd6c796-bpd4m\" (UID: \"2c56c540-5751-4ca6-b774-16e573950844\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-bpd4m" Mar 12 20:49:52.807785 master-0 kubenswrapper[7484]: E0312 20:49:52.807666 7484 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Mar 12 20:49:52.807831 master-0 kubenswrapper[7484]: E0312 20:49:52.807786 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2c56c540-5751-4ca6-b774-16e573950844-proxy-ca-bundles podName:2c56c540-5751-4ca6-b774-16e573950844 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:53.807758422 +0000 UTC m=+6.293027224 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/2c56c540-5751-4ca6-b774-16e573950844-proxy-ca-bundles") pod "controller-manager-6f7fd6c796-bpd4m" (UID: "2c56c540-5751-4ca6-b774-16e573950844") : configmap "openshift-global-ca" not found Mar 12 20:49:52.807914 master-0 kubenswrapper[7484]: E0312 20:49:52.807851 7484 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Mar 12 20:49:52.807960 master-0 kubenswrapper[7484]: E0312 20:49:52.807950 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2c56c540-5751-4ca6-b774-16e573950844-config podName:2c56c540-5751-4ca6-b774-16e573950844 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:53.807928216 +0000 UTC m=+6.293197018 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/2c56c540-5751-4ca6-b774-16e573950844-config") pod "controller-manager-6f7fd6c796-bpd4m" (UID: "2c56c540-5751-4ca6-b774-16e573950844") : configmap "config" not found Mar 12 20:49:52.807997 master-0 kubenswrapper[7484]: E0312 20:49:52.807854 7484 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 12 20:49:52.808027 master-0 kubenswrapper[7484]: I0312 20:49:52.808015 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c56c540-5751-4ca6-b774-16e573950844-client-ca\") pod \"controller-manager-6f7fd6c796-bpd4m\" (UID: \"2c56c540-5751-4ca6-b774-16e573950844\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-bpd4m" Mar 12 20:49:52.808104 master-0 kubenswrapper[7484]: E0312 20:49:52.808055 7484 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 12 20:49:52.808104 master-0 kubenswrapper[7484]: E0312 20:49:52.808057 7484 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c56c540-5751-4ca6-b774-16e573950844-serving-cert podName:2c56c540-5751-4ca6-b774-16e573950844 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:53.808034469 +0000 UTC m=+6.293303281 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2c56c540-5751-4ca6-b774-16e573950844-serving-cert") pod "controller-manager-6f7fd6c796-bpd4m" (UID: "2c56c540-5751-4ca6-b774-16e573950844") : secret "serving-cert" not found Mar 12 20:49:52.808104 master-0 kubenswrapper[7484]: E0312 20:49:52.808080 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2c56c540-5751-4ca6-b774-16e573950844-client-ca podName:2c56c540-5751-4ca6-b774-16e573950844 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:53.8080749 +0000 UTC m=+6.293343702 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/2c56c540-5751-4ca6-b774-16e573950844-client-ca") pod "controller-manager-6f7fd6c796-bpd4m" (UID: "2c56c540-5751-4ca6-b774-16e573950844") : configmap "client-ca" not found Mar 12 20:49:52.858649 master-0 kubenswrapper[7484]: I0312 20:49:52.856985 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:49:52.863943 master-0 kubenswrapper[7484]: I0312 20:49:52.863676 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:49:53.015186 master-0 kubenswrapper[7484]: I0312 20:49:53.015130 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:49:53.106960 master-0 kubenswrapper[7484]: I0312 20:49:53.106901 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-6f7fd6c796-bpd4m"] Mar 12 20:49:53.107210 master-0 kubenswrapper[7484]: E0312 20:49:53.107174 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-6f7fd6c796-bpd4m" podUID="2c56c540-5751-4ca6-b774-16e573950844" Mar 12 20:49:53.132723 master-0 kubenswrapper[7484]: I0312 20:49:53.132578 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f8b99b9cb-tvsj5"] Mar 12 20:49:53.133201 master-0 kubenswrapper[7484]: I0312 20:49:53.133163 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f8b99b9cb-tvsj5" Mar 12 20:49:53.140062 master-0 kubenswrapper[7484]: I0312 20:49:53.138281 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 20:49:53.142616 master-0 kubenswrapper[7484]: I0312 20:49:53.142560 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 20:49:53.142966 master-0 kubenswrapper[7484]: I0312 20:49:53.142741 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 20:49:53.142966 master-0 kubenswrapper[7484]: I0312 20:49:53.142777 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 20:49:53.144225 master-0 kubenswrapper[7484]: I0312 20:49:53.144184 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 20:49:53.157251 master-0 kubenswrapper[7484]: I0312 20:49:53.156794 7484 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f8b99b9cb-tvsj5"] Mar 12 20:49:53.216695 master-0 kubenswrapper[7484]: I0312 20:49:53.216634 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14b4689f-5630-461a-81a8-e8bb5a852259-serving-cert\") pod \"route-controller-manager-7f8b99b9cb-tvsj5\" (UID: \"14b4689f-5630-461a-81a8-e8bb5a852259\") " pod="openshift-route-controller-manager/route-controller-manager-7f8b99b9cb-tvsj5" Mar 12 20:49:53.216695 master-0 kubenswrapper[7484]: I0312 20:49:53.216693 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14b4689f-5630-461a-81a8-e8bb5a852259-client-ca\") pod \"route-controller-manager-7f8b99b9cb-tvsj5\" (UID: \"14b4689f-5630-461a-81a8-e8bb5a852259\") " pod="openshift-route-controller-manager/route-controller-manager-7f8b99b9cb-tvsj5" Mar 12 20:49:53.216959 master-0 kubenswrapper[7484]: I0312 20:49:53.216758 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j4r6\" (UniqueName: \"kubernetes.io/projected/14b4689f-5630-461a-81a8-e8bb5a852259-kube-api-access-9j4r6\") pod \"route-controller-manager-7f8b99b9cb-tvsj5\" (UID: \"14b4689f-5630-461a-81a8-e8bb5a852259\") " pod="openshift-route-controller-manager/route-controller-manager-7f8b99b9cb-tvsj5" Mar 12 20:49:53.216959 master-0 kubenswrapper[7484]: I0312 20:49:53.216905 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14b4689f-5630-461a-81a8-e8bb5a852259-config\") pod \"route-controller-manager-7f8b99b9cb-tvsj5\" (UID: \"14b4689f-5630-461a-81a8-e8bb5a852259\") " pod="openshift-route-controller-manager/route-controller-manager-7f8b99b9cb-tvsj5" Mar 12 
20:49:53.280370 master-0 kubenswrapper[7484]: I0312 20:49:53.280282 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-84bfdbbb7f-4zjqp"] Mar 12 20:49:53.282182 master-0 kubenswrapper[7484]: I0312 20:49:53.282154 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-84bfdbbb7f-4zjqp" Mar 12 20:49:53.284574 master-0 kubenswrapper[7484]: I0312 20:49:53.284515 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 12 20:49:53.284669 master-0 kubenswrapper[7484]: I0312 20:49:53.284574 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 12 20:49:53.284711 master-0 kubenswrapper[7484]: I0312 20:49:53.284684 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 12 20:49:53.284800 master-0 kubenswrapper[7484]: I0312 20:49:53.284522 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 12 20:49:53.290565 master-0 kubenswrapper[7484]: I0312 20:49:53.290496 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-84bfdbbb7f-4zjqp"] Mar 12 20:49:53.317644 master-0 kubenswrapper[7484]: I0312 20:49:53.317598 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14b4689f-5630-461a-81a8-e8bb5a852259-config\") pod \"route-controller-manager-7f8b99b9cb-tvsj5\" (UID: \"14b4689f-5630-461a-81a8-e8bb5a852259\") " pod="openshift-route-controller-manager/route-controller-manager-7f8b99b9cb-tvsj5" Mar 12 20:49:53.317790 master-0 kubenswrapper[7484]: I0312 20:49:53.317693 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/14b4689f-5630-461a-81a8-e8bb5a852259-serving-cert\") pod \"route-controller-manager-7f8b99b9cb-tvsj5\" (UID: \"14b4689f-5630-461a-81a8-e8bb5a852259\") " pod="openshift-route-controller-manager/route-controller-manager-7f8b99b9cb-tvsj5" Mar 12 20:49:53.317790 master-0 kubenswrapper[7484]: I0312 20:49:53.317717 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14b4689f-5630-461a-81a8-e8bb5a852259-client-ca\") pod \"route-controller-manager-7f8b99b9cb-tvsj5\" (UID: \"14b4689f-5630-461a-81a8-e8bb5a852259\") " pod="openshift-route-controller-manager/route-controller-manager-7f8b99b9cb-tvsj5" Mar 12 20:49:53.317888 master-0 kubenswrapper[7484]: I0312 20:49:53.317791 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j4r6\" (UniqueName: \"kubernetes.io/projected/14b4689f-5630-461a-81a8-e8bb5a852259-kube-api-access-9j4r6\") pod \"route-controller-manager-7f8b99b9cb-tvsj5\" (UID: \"14b4689f-5630-461a-81a8-e8bb5a852259\") " pod="openshift-route-controller-manager/route-controller-manager-7f8b99b9cb-tvsj5" Mar 12 20:49:53.318592 master-0 kubenswrapper[7484]: E0312 20:49:53.318534 7484 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 12 20:49:53.318688 master-0 kubenswrapper[7484]: E0312 20:49:53.318658 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14b4689f-5630-461a-81a8-e8bb5a852259-serving-cert podName:14b4689f-5630-461a-81a8-e8bb5a852259 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:53.818628524 +0000 UTC m=+6.303897516 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/14b4689f-5630-461a-81a8-e8bb5a852259-serving-cert") pod "route-controller-manager-7f8b99b9cb-tvsj5" (UID: "14b4689f-5630-461a-81a8-e8bb5a852259") : secret "serving-cert" not found Mar 12 20:49:53.318970 master-0 kubenswrapper[7484]: E0312 20:49:53.318932 7484 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 12 20:49:53.319303 master-0 kubenswrapper[7484]: E0312 20:49:53.319147 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/14b4689f-5630-461a-81a8-e8bb5a852259-client-ca podName:14b4689f-5630-461a-81a8-e8bb5a852259 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:53.819121256 +0000 UTC m=+6.304390058 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/14b4689f-5630-461a-81a8-e8bb5a852259-client-ca") pod "route-controller-manager-7f8b99b9cb-tvsj5" (UID: "14b4689f-5630-461a-81a8-e8bb5a852259") : configmap "client-ca" not found Mar 12 20:49:53.319404 master-0 kubenswrapper[7484]: I0312 20:49:53.319367 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14b4689f-5630-461a-81a8-e8bb5a852259-config\") pod \"route-controller-manager-7f8b99b9cb-tvsj5\" (UID: \"14b4689f-5630-461a-81a8-e8bb5a852259\") " pod="openshift-route-controller-manager/route-controller-manager-7f8b99b9cb-tvsj5" Mar 12 20:49:53.336282 master-0 kubenswrapper[7484]: I0312 20:49:53.336237 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j4r6\" (UniqueName: \"kubernetes.io/projected/14b4689f-5630-461a-81a8-e8bb5a852259-kube-api-access-9j4r6\") pod \"route-controller-manager-7f8b99b9cb-tvsj5\" (UID: \"14b4689f-5630-461a-81a8-e8bb5a852259\") " 
pod="openshift-route-controller-manager/route-controller-manager-7f8b99b9cb-tvsj5" Mar 12 20:49:53.425041 master-0 kubenswrapper[7484]: I0312 20:49:53.422570 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/135ec6f3-fbc0-4840-a4b1-c1124c705161-signing-key\") pod \"service-ca-84bfdbbb7f-4zjqp\" (UID: \"135ec6f3-fbc0-4840-a4b1-c1124c705161\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-4zjqp" Mar 12 20:49:53.425041 master-0 kubenswrapper[7484]: I0312 20:49:53.422665 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsprq\" (UniqueName: \"kubernetes.io/projected/135ec6f3-fbc0-4840-a4b1-c1124c705161-kube-api-access-wsprq\") pod \"service-ca-84bfdbbb7f-4zjqp\" (UID: \"135ec6f3-fbc0-4840-a4b1-c1124c705161\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-4zjqp" Mar 12 20:49:53.425041 master-0 kubenswrapper[7484]: I0312 20:49:53.422819 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/135ec6f3-fbc0-4840-a4b1-c1124c705161-signing-cabundle\") pod \"service-ca-84bfdbbb7f-4zjqp\" (UID: \"135ec6f3-fbc0-4840-a4b1-c1124c705161\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-4zjqp" Mar 12 20:49:53.524180 master-0 kubenswrapper[7484]: I0312 20:49:53.524103 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/135ec6f3-fbc0-4840-a4b1-c1124c705161-signing-key\") pod \"service-ca-84bfdbbb7f-4zjqp\" (UID: \"135ec6f3-fbc0-4840-a4b1-c1124c705161\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-4zjqp" Mar 12 20:49:53.524180 master-0 kubenswrapper[7484]: I0312 20:49:53.524184 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsprq\" (UniqueName: 
\"kubernetes.io/projected/135ec6f3-fbc0-4840-a4b1-c1124c705161-kube-api-access-wsprq\") pod \"service-ca-84bfdbbb7f-4zjqp\" (UID: \"135ec6f3-fbc0-4840-a4b1-c1124c705161\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-4zjqp" Mar 12 20:49:53.525328 master-0 kubenswrapper[7484]: I0312 20:49:53.525269 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/135ec6f3-fbc0-4840-a4b1-c1124c705161-signing-cabundle\") pod \"service-ca-84bfdbbb7f-4zjqp\" (UID: \"135ec6f3-fbc0-4840-a4b1-c1124c705161\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-4zjqp" Mar 12 20:49:53.528176 master-0 kubenswrapper[7484]: I0312 20:49:53.527785 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/135ec6f3-fbc0-4840-a4b1-c1124c705161-signing-key\") pod \"service-ca-84bfdbbb7f-4zjqp\" (UID: \"135ec6f3-fbc0-4840-a4b1-c1124c705161\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-4zjqp" Mar 12 20:49:53.528730 master-0 kubenswrapper[7484]: I0312 20:49:53.528688 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/135ec6f3-fbc0-4840-a4b1-c1124c705161-signing-cabundle\") pod \"service-ca-84bfdbbb7f-4zjqp\" (UID: \"135ec6f3-fbc0-4840-a4b1-c1124c705161\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-4zjqp" Mar 12 20:49:53.540312 master-0 kubenswrapper[7484]: I0312 20:49:53.540273 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsprq\" (UniqueName: \"kubernetes.io/projected/135ec6f3-fbc0-4840-a4b1-c1124c705161-kube-api-access-wsprq\") pod \"service-ca-84bfdbbb7f-4zjqp\" (UID: \"135ec6f3-fbc0-4840-a4b1-c1124c705161\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-4zjqp" Mar 12 20:49:53.610318 master-0 kubenswrapper[7484]: I0312 20:49:53.610244 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-84bfdbbb7f-4zjqp" Mar 12 20:49:53.833633 master-0 kubenswrapper[7484]: I0312 20:49:53.833572 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14b4689f-5630-461a-81a8-e8bb5a852259-client-ca\") pod \"route-controller-manager-7f8b99b9cb-tvsj5\" (UID: \"14b4689f-5630-461a-81a8-e8bb5a852259\") " pod="openshift-route-controller-manager/route-controller-manager-7f8b99b9cb-tvsj5" Mar 12 20:49:53.833909 master-0 kubenswrapper[7484]: I0312 20:49:53.833672 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c56c540-5751-4ca6-b774-16e573950844-config\") pod \"controller-manager-6f7fd6c796-bpd4m\" (UID: \"2c56c540-5751-4ca6-b774-16e573950844\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-bpd4m" Mar 12 20:49:53.833909 master-0 kubenswrapper[7484]: I0312 20:49:53.833745 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c56c540-5751-4ca6-b774-16e573950844-client-ca\") pod \"controller-manager-6f7fd6c796-bpd4m\" (UID: \"2c56c540-5751-4ca6-b774-16e573950844\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-bpd4m" Mar 12 20:49:53.833998 master-0 kubenswrapper[7484]: E0312 20:49:53.833882 7484 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 12 20:49:53.834157 master-0 kubenswrapper[7484]: E0312 20:49:53.834108 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/14b4689f-5630-461a-81a8-e8bb5a852259-client-ca podName:14b4689f-5630-461a-81a8-e8bb5a852259 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:54.834059049 +0000 UTC m=+7.319327851 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/14b4689f-5630-461a-81a8-e8bb5a852259-client-ca") pod "route-controller-manager-7f8b99b9cb-tvsj5" (UID: "14b4689f-5630-461a-81a8-e8bb5a852259") : configmap "client-ca" not found Mar 12 20:49:53.834157 master-0 kubenswrapper[7484]: E0312 20:49:53.834142 7484 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 12 20:49:53.834262 master-0 kubenswrapper[7484]: E0312 20:49:53.834219 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2c56c540-5751-4ca6-b774-16e573950844-client-ca podName:2c56c540-5751-4ca6-b774-16e573950844 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:55.834210863 +0000 UTC m=+8.319479665 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/2c56c540-5751-4ca6-b774-16e573950844-client-ca") pod "controller-manager-6f7fd6c796-bpd4m" (UID: "2c56c540-5751-4ca6-b774-16e573950844") : configmap "client-ca" not found Mar 12 20:49:53.834594 master-0 kubenswrapper[7484]: E0312 20:49:53.834545 7484 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 12 20:49:53.834654 master-0 kubenswrapper[7484]: E0312 20:49:53.834612 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c56c540-5751-4ca6-b774-16e573950844-serving-cert podName:2c56c540-5751-4ca6-b774-16e573950844 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:55.834596752 +0000 UTC m=+8.319865554 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2c56c540-5751-4ca6-b774-16e573950844-serving-cert") pod "controller-manager-6f7fd6c796-bpd4m" (UID: "2c56c540-5751-4ca6-b774-16e573950844") : secret "serving-cert" not found Mar 12 20:49:53.834654 master-0 kubenswrapper[7484]: I0312 20:49:53.834370 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c56c540-5751-4ca6-b774-16e573950844-serving-cert\") pod \"controller-manager-6f7fd6c796-bpd4m\" (UID: \"2c56c540-5751-4ca6-b774-16e573950844\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-bpd4m" Mar 12 20:49:53.834747 master-0 kubenswrapper[7484]: I0312 20:49:53.834661 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c56c540-5751-4ca6-b774-16e573950844-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-bpd4m\" (UID: \"2c56c540-5751-4ca6-b774-16e573950844\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-bpd4m" Mar 12 20:49:53.834747 master-0 kubenswrapper[7484]: I0312 20:49:53.834689 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14b4689f-5630-461a-81a8-e8bb5a852259-serving-cert\") pod \"route-controller-manager-7f8b99b9cb-tvsj5\" (UID: \"14b4689f-5630-461a-81a8-e8bb5a852259\") " pod="openshift-route-controller-manager/route-controller-manager-7f8b99b9cb-tvsj5" Mar 12 20:49:53.834847 master-0 kubenswrapper[7484]: E0312 20:49:53.834796 7484 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 12 20:49:53.834847 master-0 kubenswrapper[7484]: E0312 20:49:53.834840 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14b4689f-5630-461a-81a8-e8bb5a852259-serving-cert 
podName:14b4689f-5630-461a-81a8-e8bb5a852259 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:54.834833439 +0000 UTC m=+7.320102241 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/14b4689f-5630-461a-81a8-e8bb5a852259-serving-cert") pod "route-controller-manager-7f8b99b9cb-tvsj5" (UID: "14b4689f-5630-461a-81a8-e8bb5a852259") : secret "serving-cert" not found Mar 12 20:49:53.835061 master-0 kubenswrapper[7484]: I0312 20:49:53.835030 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c56c540-5751-4ca6-b774-16e573950844-config\") pod \"controller-manager-6f7fd6c796-bpd4m\" (UID: \"2c56c540-5751-4ca6-b774-16e573950844\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-bpd4m" Mar 12 20:49:53.836106 master-0 kubenswrapper[7484]: I0312 20:49:53.836073 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c56c540-5751-4ca6-b774-16e573950844-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-bpd4m\" (UID: \"2c56c540-5751-4ca6-b774-16e573950844\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-bpd4m" Mar 12 20:49:54.009188 master-0 kubenswrapper[7484]: I0312 20:49:54.009097 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f7fd6c796-bpd4m" Mar 12 20:49:54.038244 master-0 kubenswrapper[7484]: I0312 20:49:54.035378 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f7fd6c796-bpd4m" Mar 12 20:49:54.137941 master-0 kubenswrapper[7484]: I0312 20:49:54.137799 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c56c540-5751-4ca6-b774-16e573950844-config\") pod \"2c56c540-5751-4ca6-b774-16e573950844\" (UID: \"2c56c540-5751-4ca6-b774-16e573950844\") " Mar 12 20:49:54.137941 master-0 kubenswrapper[7484]: I0312 20:49:54.137889 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c56c540-5751-4ca6-b774-16e573950844-proxy-ca-bundles\") pod \"2c56c540-5751-4ca6-b774-16e573950844\" (UID: \"2c56c540-5751-4ca6-b774-16e573950844\") " Mar 12 20:49:54.137941 master-0 kubenswrapper[7484]: I0312 20:49:54.137924 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cct7t\" (UniqueName: \"kubernetes.io/projected/2c56c540-5751-4ca6-b774-16e573950844-kube-api-access-cct7t\") pod \"2c56c540-5751-4ca6-b774-16e573950844\" (UID: \"2c56c540-5751-4ca6-b774-16e573950844\") " Mar 12 20:49:54.138430 master-0 kubenswrapper[7484]: I0312 20:49:54.138404 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c56c540-5751-4ca6-b774-16e573950844-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2c56c540-5751-4ca6-b774-16e573950844" (UID: "2c56c540-5751-4ca6-b774-16e573950844"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 20:49:54.138618 master-0 kubenswrapper[7484]: I0312 20:49:54.138564 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c56c540-5751-4ca6-b774-16e573950844-config" (OuterVolumeSpecName: "config") pod "2c56c540-5751-4ca6-b774-16e573950844" (UID: "2c56c540-5751-4ca6-b774-16e573950844"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 20:49:54.141277 master-0 kubenswrapper[7484]: I0312 20:49:54.141255 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c56c540-5751-4ca6-b774-16e573950844-kube-api-access-cct7t" (OuterVolumeSpecName: "kube-api-access-cct7t") pod "2c56c540-5751-4ca6-b774-16e573950844" (UID: "2c56c540-5751-4ca6-b774-16e573950844"). InnerVolumeSpecName "kube-api-access-cct7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 20:49:54.248724 master-0 kubenswrapper[7484]: I0312 20:49:54.248418 7484 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c56c540-5751-4ca6-b774-16e573950844-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 12 20:49:54.248724 master-0 kubenswrapper[7484]: I0312 20:49:54.248463 7484 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cct7t\" (UniqueName: \"kubernetes.io/projected/2c56c540-5751-4ca6-b774-16e573950844-kube-api-access-cct7t\") on node \"master-0\" DevicePath \"\"" Mar 12 20:49:54.248724 master-0 kubenswrapper[7484]: I0312 20:49:54.248479 7484 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c56c540-5751-4ca6-b774-16e573950844-config\") on node \"master-0\" DevicePath \"\"" Mar 12 20:49:54.788421 master-0 kubenswrapper[7484]: I0312 20:49:54.788352 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:54.788694 master-0 kubenswrapper[7484]: I0312 20:49:54.788553 7484 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 20:49:54.788694 master-0 kubenswrapper[7484]: I0312 20:49:54.788565 7484 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 20:49:54.818776 master-0 kubenswrapper[7484]: I0312 20:49:54.818081 7484 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:54.855848 master-0 kubenswrapper[7484]: I0312 20:49:54.855762 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14b4689f-5630-461a-81a8-e8bb5a852259-serving-cert\") pod \"route-controller-manager-7f8b99b9cb-tvsj5\" (UID: \"14b4689f-5630-461a-81a8-e8bb5a852259\") " pod="openshift-route-controller-manager/route-controller-manager-7f8b99b9cb-tvsj5" Mar 12 20:49:54.855848 master-0 kubenswrapper[7484]: I0312 20:49:54.855819 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14b4689f-5630-461a-81a8-e8bb5a852259-client-ca\") pod \"route-controller-manager-7f8b99b9cb-tvsj5\" (UID: \"14b4689f-5630-461a-81a8-e8bb5a852259\") " pod="openshift-route-controller-manager/route-controller-manager-7f8b99b9cb-tvsj5" Mar 12 20:49:54.856136 master-0 kubenswrapper[7484]: E0312 20:49:54.856073 7484 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 12 20:49:54.856136 master-0 kubenswrapper[7484]: E0312 20:49:54.856113 7484 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 12 20:49:54.856262 master-0 kubenswrapper[7484]: E0312 20:49:54.856207 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14b4689f-5630-461a-81a8-e8bb5a852259-serving-cert podName:14b4689f-5630-461a-81a8-e8bb5a852259 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:56.856178653 +0000 UTC m=+9.341447455 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/14b4689f-5630-461a-81a8-e8bb5a852259-serving-cert") pod "route-controller-manager-7f8b99b9cb-tvsj5" (UID: "14b4689f-5630-461a-81a8-e8bb5a852259") : secret "serving-cert" not found Mar 12 20:49:54.856447 master-0 kubenswrapper[7484]: E0312 20:49:54.856400 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/14b4689f-5630-461a-81a8-e8bb5a852259-client-ca podName:14b4689f-5630-461a-81a8-e8bb5a852259 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:56.856360727 +0000 UTC m=+9.341629539 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/14b4689f-5630-461a-81a8-e8bb5a852259-client-ca") pod "route-controller-manager-7f8b99b9cb-tvsj5" (UID: "14b4689f-5630-461a-81a8-e8bb5a852259") : configmap "client-ca" not found Mar 12 20:49:55.017127 master-0 kubenswrapper[7484]: I0312 20:49:55.016701 7484 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 20:49:55.018885 master-0 kubenswrapper[7484]: I0312 20:49:55.017741 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f7fd6c796-bpd4m" Mar 12 20:49:55.090284 master-0 kubenswrapper[7484]: I0312 20:49:55.090201 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-bpd4m"] Mar 12 20:49:55.091568 master-0 kubenswrapper[7484]: I0312 20:49:55.091527 7484 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-bpd4m"] Mar 12 20:49:55.263115 master-0 kubenswrapper[7484]: I0312 20:49:55.263039 7484 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c56c540-5751-4ca6-b774-16e573950844-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 20:49:55.263115 master-0 kubenswrapper[7484]: I0312 20:49:55.263093 7484 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c56c540-5751-4ca6-b774-16e573950844-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 20:49:55.739868 master-0 kubenswrapper[7484]: I0312 20:49:55.739315 7484 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c56c540-5751-4ca6-b774-16e573950844" path="/var/lib/kubelet/pods/2c56c540-5751-4ca6-b774-16e573950844/volumes" Mar 12 20:49:55.790258 master-0 kubenswrapper[7484]: I0312 20:49:55.790213 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-84bfdbbb7f-4zjqp"] Mar 12 20:49:55.853132 master-0 kubenswrapper[7484]: W0312 20:49:55.852159 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod135ec6f3_fbc0_4840_a4b1_c1124c705161.slice/crio-61b0f018a3d165e925dd9889884b291a368122b4453e40fac0dc068c3a518630 WatchSource:0}: Error finding container 61b0f018a3d165e925dd9889884b291a368122b4453e40fac0dc068c3a518630: Status 404 returned error can't find the container with id 
61b0f018a3d165e925dd9889884b291a368122b4453e40fac0dc068c3a518630 Mar 12 20:49:55.997644 master-0 kubenswrapper[7484]: I0312 20:49:55.997103 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:56.022507 master-0 kubenswrapper[7484]: I0312 20:49:56.022439 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-jd4pv" event={"ID":"da40e787-dd75-4f4f-b09e-a8dab590f260","Type":"ContainerStarted","Data":"5a3905f8b3eea02afdf658796c0005136831f1ebc8b9d4afca8f8596bfe8d28a"} Mar 12 20:49:56.022507 master-0 kubenswrapper[7484]: I0312 20:49:56.022502 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-jd4pv" event={"ID":"da40e787-dd75-4f4f-b09e-a8dab590f260","Type":"ContainerStarted","Data":"989eab2fc6375f9bb33bb57e21b64bce0976704fe7c7cf23fc74f74a3380876f"} Mar 12 20:49:56.024547 master-0 kubenswrapper[7484]: I0312 20:49:56.024470 7484 generic.go:334] "Generic (PLEG): container finished" podID="226cb3a1-984f-4410-96e6-c007131dc074" containerID="07c6a141800c2671b4fee399e997579f35911c7306dc3f2e97ee3647edd96e2d" exitCode=0 Mar 12 20:49:56.024637 master-0 kubenswrapper[7484]: I0312 20:49:56.024582 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh" event={"ID":"226cb3a1-984f-4410-96e6-c007131dc074","Type":"ContainerDied","Data":"07c6a141800c2671b4fee399e997579f35911c7306dc3f2e97ee3647edd96e2d"} Mar 12 20:49:56.024725 master-0 kubenswrapper[7484]: I0312 20:49:56.024699 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 20:49:56.026887 master-0 kubenswrapper[7484]: I0312 20:49:56.026057 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-4zjqp" 
event={"ID":"135ec6f3-fbc0-4840-a4b1-c1124c705161","Type":"ContainerStarted","Data":"15d0d26804c9c80b6799cf88166882aaa90b3995069ea002665cca02980190e3"} Mar 12 20:49:56.026887 master-0 kubenswrapper[7484]: I0312 20:49:56.026107 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-4zjqp" event={"ID":"135ec6f3-fbc0-4840-a4b1-c1124c705161","Type":"ContainerStarted","Data":"61b0f018a3d165e925dd9889884b291a368122b4453e40fac0dc068c3a518630"} Mar 12 20:49:56.028632 master-0 kubenswrapper[7484]: I0312 20:49:56.028588 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w" event={"ID":"d4a162d4-8086-4bcf-854d-7e6cd37fd4c7","Type":"ContainerStarted","Data":"e29fe78e5f8c5908626647267abeb52f63244162e122261e67a929d3a95210d9"} Mar 12 20:49:56.031202 master-0 kubenswrapper[7484]: I0312 20:49:56.031059 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" event={"ID":"980191fe-c62c-4b9e-879c-38fa8ce0a58b","Type":"ContainerStarted","Data":"304a25d963544d2c18d9e9c47ad4423b6984ff4ce290c819f6e1953a03bd9e6b"} Mar 12 20:49:56.038277 master-0 kubenswrapper[7484]: I0312 20:49:56.038210 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-jd4pv" podStartSLOduration=2.00091374 podStartE2EDuration="6.038190221s" podCreationTimestamp="2026-03-12 20:49:50 +0000 UTC" firstStartedPulling="2026-03-12 20:49:51.518470636 +0000 UTC m=+4.003739428" lastFinishedPulling="2026-03-12 20:49:55.555747097 +0000 UTC m=+8.041015909" observedRunningTime="2026-03-12 20:49:56.035763082 +0000 UTC m=+8.521031894" watchObservedRunningTime="2026-03-12 20:49:56.038190221 +0000 UTC m=+8.523459013" Mar 12 20:49:56.064438 master-0 kubenswrapper[7484]: I0312 20:49:56.064305 7484 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-service-ca/service-ca-84bfdbbb7f-4zjqp" podStartSLOduration=3.064285181 podStartE2EDuration="3.064285181s" podCreationTimestamp="2026-03-12 20:49:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 20:49:56.06345786 +0000 UTC m=+8.548726672" watchObservedRunningTime="2026-03-12 20:49:56.064285181 +0000 UTC m=+8.549553983" Mar 12 20:49:56.105970 master-0 kubenswrapper[7484]: I0312 20:49:56.105652 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w" podStartSLOduration=1.437703162 podStartE2EDuration="5.105627688s" podCreationTimestamp="2026-03-12 20:49:51 +0000 UTC" firstStartedPulling="2026-03-12 20:49:51.986517754 +0000 UTC m=+4.471786576" lastFinishedPulling="2026-03-12 20:49:55.6544423 +0000 UTC m=+8.139711102" observedRunningTime="2026-03-12 20:49:56.104035539 +0000 UTC m=+8.589304351" watchObservedRunningTime="2026-03-12 20:49:56.105627688 +0000 UTC m=+8.590896500" Mar 12 20:49:56.110847 master-0 kubenswrapper[7484]: I0312 20:49:56.110774 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7bdc948d9f-tqqj7"] Mar 12 20:49:56.111443 master-0 kubenswrapper[7484]: I0312 20:49:56.111407 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7bdc948d9f-tqqj7" Mar 12 20:49:56.114247 master-0 kubenswrapper[7484]: I0312 20:49:56.114054 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 20:49:56.115015 master-0 kubenswrapper[7484]: I0312 20:49:56.114818 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 20:49:56.115015 master-0 kubenswrapper[7484]: I0312 20:49:56.114909 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 20:49:56.115015 master-0 kubenswrapper[7484]: I0312 20:49:56.114996 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 20:49:56.115215 master-0 kubenswrapper[7484]: I0312 20:49:56.115111 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 20:49:56.128973 master-0 kubenswrapper[7484]: I0312 20:49:56.128942 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bdc948d9f-tqqj7"] Mar 12 20:49:56.135870 master-0 kubenswrapper[7484]: I0312 20:49:56.135175 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 20:49:56.283323 master-0 kubenswrapper[7484]: I0312 20:49:56.283246 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cfe559ee-f3eb-417f-9281-9a50e9af6de3-client-ca\") pod \"controller-manager-7bdc948d9f-tqqj7\" (UID: \"cfe559ee-f3eb-417f-9281-9a50e9af6de3\") " pod="openshift-controller-manager/controller-manager-7bdc948d9f-tqqj7" Mar 12 20:49:56.283870 master-0 kubenswrapper[7484]: I0312 20:49:56.283775 7484 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfe559ee-f3eb-417f-9281-9a50e9af6de3-config\") pod \"controller-manager-7bdc948d9f-tqqj7\" (UID: \"cfe559ee-f3eb-417f-9281-9a50e9af6de3\") " pod="openshift-controller-manager/controller-manager-7bdc948d9f-tqqj7" Mar 12 20:49:56.284077 master-0 kubenswrapper[7484]: I0312 20:49:56.284047 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cfe559ee-f3eb-417f-9281-9a50e9af6de3-proxy-ca-bundles\") pod \"controller-manager-7bdc948d9f-tqqj7\" (UID: \"cfe559ee-f3eb-417f-9281-9a50e9af6de3\") " pod="openshift-controller-manager/controller-manager-7bdc948d9f-tqqj7" Mar 12 20:49:56.284415 master-0 kubenswrapper[7484]: I0312 20:49:56.284384 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh2jd\" (UniqueName: \"kubernetes.io/projected/cfe559ee-f3eb-417f-9281-9a50e9af6de3-kube-api-access-wh2jd\") pod \"controller-manager-7bdc948d9f-tqqj7\" (UID: \"cfe559ee-f3eb-417f-9281-9a50e9af6de3\") " pod="openshift-controller-manager/controller-manager-7bdc948d9f-tqqj7" Mar 12 20:49:56.284616 master-0 kubenswrapper[7484]: I0312 20:49:56.284590 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfe559ee-f3eb-417f-9281-9a50e9af6de3-serving-cert\") pod \"controller-manager-7bdc948d9f-tqqj7\" (UID: \"cfe559ee-f3eb-417f-9281-9a50e9af6de3\") " pod="openshift-controller-manager/controller-manager-7bdc948d9f-tqqj7" Mar 12 20:49:56.385838 master-0 kubenswrapper[7484]: I0312 20:49:56.385685 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh2jd\" (UniqueName: 
\"kubernetes.io/projected/cfe559ee-f3eb-417f-9281-9a50e9af6de3-kube-api-access-wh2jd\") pod \"controller-manager-7bdc948d9f-tqqj7\" (UID: \"cfe559ee-f3eb-417f-9281-9a50e9af6de3\") " pod="openshift-controller-manager/controller-manager-7bdc948d9f-tqqj7" Mar 12 20:49:56.385838 master-0 kubenswrapper[7484]: I0312 20:49:56.385757 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfe559ee-f3eb-417f-9281-9a50e9af6de3-serving-cert\") pod \"controller-manager-7bdc948d9f-tqqj7\" (UID: \"cfe559ee-f3eb-417f-9281-9a50e9af6de3\") " pod="openshift-controller-manager/controller-manager-7bdc948d9f-tqqj7" Mar 12 20:49:56.385838 master-0 kubenswrapper[7484]: I0312 20:49:56.385783 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cfe559ee-f3eb-417f-9281-9a50e9af6de3-client-ca\") pod \"controller-manager-7bdc948d9f-tqqj7\" (UID: \"cfe559ee-f3eb-417f-9281-9a50e9af6de3\") " pod="openshift-controller-manager/controller-manager-7bdc948d9f-tqqj7" Mar 12 20:49:56.385838 master-0 kubenswrapper[7484]: I0312 20:49:56.385833 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfe559ee-f3eb-417f-9281-9a50e9af6de3-config\") pod \"controller-manager-7bdc948d9f-tqqj7\" (UID: \"cfe559ee-f3eb-417f-9281-9a50e9af6de3\") " pod="openshift-controller-manager/controller-manager-7bdc948d9f-tqqj7" Mar 12 20:49:56.385838 master-0 kubenswrapper[7484]: I0312 20:49:56.385854 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cfe559ee-f3eb-417f-9281-9a50e9af6de3-proxy-ca-bundles\") pod \"controller-manager-7bdc948d9f-tqqj7\" (UID: \"cfe559ee-f3eb-417f-9281-9a50e9af6de3\") " pod="openshift-controller-manager/controller-manager-7bdc948d9f-tqqj7" Mar 12 20:49:56.387184 master-0 
kubenswrapper[7484]: E0312 20:49:56.387154 7484 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 12 20:49:56.387322 master-0 kubenswrapper[7484]: E0312 20:49:56.387308 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cfe559ee-f3eb-417f-9281-9a50e9af6de3-client-ca podName:cfe559ee-f3eb-417f-9281-9a50e9af6de3 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:56.887286 +0000 UTC m=+9.372554812 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cfe559ee-f3eb-417f-9281-9a50e9af6de3-client-ca") pod "controller-manager-7bdc948d9f-tqqj7" (UID: "cfe559ee-f3eb-417f-9281-9a50e9af6de3") : configmap "client-ca" not found Mar 12 20:49:56.387417 master-0 kubenswrapper[7484]: I0312 20:49:56.387396 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cfe559ee-f3eb-417f-9281-9a50e9af6de3-proxy-ca-bundles\") pod \"controller-manager-7bdc948d9f-tqqj7\" (UID: \"cfe559ee-f3eb-417f-9281-9a50e9af6de3\") " pod="openshift-controller-manager/controller-manager-7bdc948d9f-tqqj7" Mar 12 20:49:56.387560 master-0 kubenswrapper[7484]: E0312 20:49:56.387545 7484 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 12 20:49:56.387656 master-0 kubenswrapper[7484]: E0312 20:49:56.387644 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cfe559ee-f3eb-417f-9281-9a50e9af6de3-serving-cert podName:cfe559ee-f3eb-417f-9281-9a50e9af6de3 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:56.8876328 +0000 UTC m=+9.372901612 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/cfe559ee-f3eb-417f-9281-9a50e9af6de3-serving-cert") pod "controller-manager-7bdc948d9f-tqqj7" (UID: "cfe559ee-f3eb-417f-9281-9a50e9af6de3") : secret "serving-cert" not found Mar 12 20:49:56.388058 master-0 kubenswrapper[7484]: I0312 20:49:56.388030 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfe559ee-f3eb-417f-9281-9a50e9af6de3-config\") pod \"controller-manager-7bdc948d9f-tqqj7\" (UID: \"cfe559ee-f3eb-417f-9281-9a50e9af6de3\") " pod="openshift-controller-manager/controller-manager-7bdc948d9f-tqqj7" Mar 12 20:49:56.409106 master-0 kubenswrapper[7484]: I0312 20:49:56.409060 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh2jd\" (UniqueName: \"kubernetes.io/projected/cfe559ee-f3eb-417f-9281-9a50e9af6de3-kube-api-access-wh2jd\") pod \"controller-manager-7bdc948d9f-tqqj7\" (UID: \"cfe559ee-f3eb-417f-9281-9a50e9af6de3\") " pod="openshift-controller-manager/controller-manager-7bdc948d9f-tqqj7" Mar 12 20:49:56.414832 master-0 kubenswrapper[7484]: I0312 20:49:56.414791 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:49:56.419088 master-0 kubenswrapper[7484]: I0312 20:49:56.418880 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:49:56.591239 master-0 kubenswrapper[7484]: I0312 20:49:56.591207 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs\") pod \"network-metrics-daemon-brdcd\" (UID: \"c8660437-633f-4132-8a61-fe998abb493e\") " pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 20:49:56.591362 master-0 kubenswrapper[7484]: 
I0312 20:49:56.591347 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-j9tpt\" (UID: \"02649264-040a-41a6-9a41-8bf6416c68ff\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt" Mar 12 20:49:56.591459 master-0 kubenswrapper[7484]: I0312 20:49:56.591448 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert\") pod \"catalog-operator-7d9c49f57b-tpvl4\" (UID: \"98d99166-c42a-4169-87e8-4209570aec50\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4" Mar 12 20:49:56.591526 master-0 kubenswrapper[7484]: I0312 20:49:56.591515 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" Mar 12 20:49:56.591599 master-0 kubenswrapper[7484]: I0312 20:49:56.591588 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5" Mar 12 20:49:56.591869 master-0 kubenswrapper[7484]: I0312 20:49:56.591854 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl" Mar 12 20:49:56.591934 master-0 kubenswrapper[7484]: I0312 20:49:56.591921 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs\") pod \"multus-admission-controller-8d675b596-98j9w\" (UID: \"f8f4400c-474c-480f-b46c-cf7c80555004\") " pod="openshift-multus/multus-admission-controller-8d675b596-98j9w" Mar 12 20:49:56.592012 master-0 kubenswrapper[7484]: I0312 20:49:56.592001 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert\") pod \"olm-operator-d64cfc9db-q9hnk\" (UID: \"07330030-487d-4fa6-b5c3-67607355bbba\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk" Mar 12 20:49:56.592077 master-0 kubenswrapper[7484]: I0312 20:49:56.592067 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" Mar 12 20:49:56.592149 master-0 kubenswrapper[7484]: I0312 20:49:56.592137 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" Mar 12 20:49:56.592217 master-0 kubenswrapper[7484]: I0312 20:49:56.592206 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls\") pod \"dns-operator-589895fbb7-tvrxp\" (UID: \"855747e5-d9b4-4eef-8bc4-425d6a8e95c7\") " pod="openshift-dns-operator/dns-operator-589895fbb7-tvrxp" Mar 12 20:49:56.592306 master-0 kubenswrapper[7484]: I0312 20:49:56.592294 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-hxqgw\" (UID: \"e624e623-6d59-444d-b548-165fa5fd2581\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" Mar 12 20:49:56.592376 master-0 kubenswrapper[7484]: I0312 20:49:56.592365 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-cdcc8\" (UID: \"54184647-6e9a-43f7-90b1-5d8815f8b1ab\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8" Mar 12 20:49:56.592533 master-0 kubenswrapper[7484]: E0312 20:49:56.592522 7484 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 12 20:49:56.592621 master-0 kubenswrapper[7484]: E0312 20:49:56.592611 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert podName:54184647-6e9a-43f7-90b1-5d8815f8b1ab nodeName:}" failed. 
No retries permitted until 2026-03-12 20:50:04.592597642 +0000 UTC m=+17.077866444 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-cdcc8" (UID: "54184647-6e9a-43f7-90b1-5d8815f8b1ab") : secret "package-server-manager-serving-cert" not found Mar 12 20:49:56.593016 master-0 kubenswrapper[7484]: E0312 20:49:56.593003 7484 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 12 20:49:56.593109 master-0 kubenswrapper[7484]: E0312 20:49:56.593100 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs podName:c8660437-633f-4132-8a61-fe998abb493e nodeName:}" failed. No retries permitted until 2026-03-12 20:50:04.593090637 +0000 UTC m=+17.078359439 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs") pod "network-metrics-daemon-brdcd" (UID: "c8660437-633f-4132-8a61-fe998abb493e") : secret "metrics-daemon-secret" not found Mar 12 20:49:56.593197 master-0 kubenswrapper[7484]: E0312 20:49:56.593188 7484 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 12 20:49:56.593262 master-0 kubenswrapper[7484]: E0312 20:49:56.593254 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls podName:02649264-040a-41a6-9a41-8bf6416c68ff nodeName:}" failed. No retries permitted until 2026-03-12 20:50:04.593246962 +0000 UTC m=+17.078515764 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-j9tpt" (UID: "02649264-040a-41a6-9a41-8bf6416c68ff") : secret "cluster-monitoring-operator-tls" not found Mar 12 20:49:56.593349 master-0 kubenswrapper[7484]: E0312 20:49:56.593339 7484 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 12 20:49:56.593440 master-0 kubenswrapper[7484]: E0312 20:49:56.593431 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert podName:98d99166-c42a-4169-87e8-4209570aec50 nodeName:}" failed. No retries permitted until 2026-03-12 20:50:04.593423787 +0000 UTC m=+17.078692589 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert") pod "catalog-operator-7d9c49f57b-tpvl4" (UID: "98d99166-c42a-4169-87e8-4209570aec50") : secret "catalog-operator-serving-cert" not found Mar 12 20:49:56.593534 master-0 kubenswrapper[7484]: E0312 20:49:56.593524 7484 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 12 20:49:56.593599 master-0 kubenswrapper[7484]: E0312 20:49:56.593591 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls podName:2b71f537-1cc2-4645-8e50-23941635457c nodeName:}" failed. No retries permitted until 2026-03-12 20:50:04.593584161 +0000 UTC m=+17.078852963 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls") pod "ingress-operator-677db989d6-qpf68" (UID: "2b71f537-1cc2-4645-8e50-23941635457c") : secret "metrics-tls" not found Mar 12 20:49:56.593688 master-0 kubenswrapper[7484]: E0312 20:49:56.593679 7484 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 12 20:49:56.593753 master-0 kubenswrapper[7484]: E0312 20:49:56.593745 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls podName:900228dd-2d21-4759-87da-b027b0134ad8 nodeName:}" failed. No retries permitted until 2026-03-12 20:50:04.593737545 +0000 UTC m=+17.079006347 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-hmtz5" (UID: "900228dd-2d21-4759-87da-b027b0134ad8") : secret "image-registry-operator-tls" not found Mar 12 20:49:56.593855 master-0 kubenswrapper[7484]: E0312 20:49:56.593845 7484 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 12 20:49:56.593921 master-0 kubenswrapper[7484]: E0312 20:49:56.593913 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert podName:1a307172-f010-4bad-a3fc-31607574b069 nodeName:}" failed. No retries permitted until 2026-03-12 20:50:04.5939059 +0000 UTC m=+17.079174702 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert") pod "cluster-version-operator-745944c6b7-wddgl" (UID: "1a307172-f010-4bad-a3fc-31607574b069") : secret "cluster-version-operator-serving-cert" not found
Mar 12 20:49:56.594014 master-0 kubenswrapper[7484]: E0312 20:49:56.594004 7484 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 12 20:49:56.594085 master-0 kubenswrapper[7484]: E0312 20:49:56.594076 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs podName:f8f4400c-474c-480f-b46c-cf7c80555004 nodeName:}" failed. No retries permitted until 2026-03-12 20:50:04.594067745 +0000 UTC m=+17.079336547 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs") pod "multus-admission-controller-8d675b596-98j9w" (UID: "f8f4400c-474c-480f-b46c-cf7c80555004") : secret "multus-admission-controller-secret" not found
Mar 12 20:49:56.594167 master-0 kubenswrapper[7484]: E0312 20:49:56.594158 7484 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 12 20:49:56.594231 master-0 kubenswrapper[7484]: E0312 20:49:56.594222 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert podName:07330030-487d-4fa6-b5c3-67607355bbba nodeName:}" failed. No retries permitted until 2026-03-12 20:50:04.594215769 +0000 UTC m=+17.079484571 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert") pod "olm-operator-d64cfc9db-q9hnk" (UID: "07330030-487d-4fa6-b5c3-67607355bbba") : secret "olm-operator-serving-cert" not found
Mar 12 20:49:56.594312 master-0 kubenswrapper[7484]: E0312 20:49:56.594302 7484 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 12 20:49:56.594383 master-0 kubenswrapper[7484]: E0312 20:49:56.594373 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls podName:981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9 nodeName:}" failed. No retries permitted until 2026-03-12 20:50:04.594366784 +0000 UTC m=+17.079635586 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-69rp9" (UID: "981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9") : secret "node-tuning-operator-tls" not found
Mar 12 20:49:56.594469 master-0 kubenswrapper[7484]: E0312 20:49:56.594458 7484 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 12 20:49:56.594532 master-0 kubenswrapper[7484]: E0312 20:49:56.594523 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert podName:981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9 nodeName:}" failed. No retries permitted until 2026-03-12 20:50:04.594516988 +0000 UTC m=+17.079785780 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-69rp9" (UID: "981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9") : secret "performance-addon-operator-webhook-cert" not found
Mar 12 20:49:56.594619 master-0 kubenswrapper[7484]: E0312 20:49:56.594609 7484 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 12 20:49:56.594682 master-0 kubenswrapper[7484]: E0312 20:49:56.594674 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls podName:855747e5-d9b4-4eef-8bc4-425d6a8e95c7 nodeName:}" failed. No retries permitted until 2026-03-12 20:50:04.594667743 +0000 UTC m=+17.079936545 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls") pod "dns-operator-589895fbb7-tvrxp" (UID: "855747e5-d9b4-4eef-8bc4-425d6a8e95c7") : secret "metrics-tls" not found
Mar 12 20:49:56.594838 master-0 kubenswrapper[7484]: E0312 20:49:56.594827 7484 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 12 20:49:56.594923 master-0 kubenswrapper[7484]: E0312 20:49:56.594914 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics podName:e624e623-6d59-444d-b548-165fa5fd2581 nodeName:}" failed. No retries permitted until 2026-03-12 20:50:04.594898259 +0000 UTC m=+17.080167061 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-hxqgw" (UID: "e624e623-6d59-444d-b548-165fa5fd2581") : secret "marketplace-operator-metrics" not found
Mar 12 20:49:56.898722 master-0 kubenswrapper[7484]: I0312 20:49:56.898660 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14b4689f-5630-461a-81a8-e8bb5a852259-serving-cert\") pod \"route-controller-manager-7f8b99b9cb-tvsj5\" (UID: \"14b4689f-5630-461a-81a8-e8bb5a852259\") " pod="openshift-route-controller-manager/route-controller-manager-7f8b99b9cb-tvsj5"
Mar 12 20:49:56.898722 master-0 kubenswrapper[7484]: I0312 20:49:56.898711 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14b4689f-5630-461a-81a8-e8bb5a852259-client-ca\") pod \"route-controller-manager-7f8b99b9cb-tvsj5\" (UID: \"14b4689f-5630-461a-81a8-e8bb5a852259\") " pod="openshift-route-controller-manager/route-controller-manager-7f8b99b9cb-tvsj5"
Mar 12 20:49:56.899060 master-0 kubenswrapper[7484]: I0312 20:49:56.898750 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfe559ee-f3eb-417f-9281-9a50e9af6de3-serving-cert\") pod \"controller-manager-7bdc948d9f-tqqj7\" (UID: \"cfe559ee-f3eb-417f-9281-9a50e9af6de3\") " pod="openshift-controller-manager/controller-manager-7bdc948d9f-tqqj7"
Mar 12 20:49:56.899060 master-0 kubenswrapper[7484]: I0312 20:49:56.898773 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cfe559ee-f3eb-417f-9281-9a50e9af6de3-client-ca\") pod \"controller-manager-7bdc948d9f-tqqj7\" (UID: \"cfe559ee-f3eb-417f-9281-9a50e9af6de3\") " pod="openshift-controller-manager/controller-manager-7bdc948d9f-tqqj7"
Mar 12 20:49:56.899060 master-0 kubenswrapper[7484]: E0312 20:49:56.898907 7484 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 12 20:49:56.899060 master-0 kubenswrapper[7484]: E0312 20:49:56.898957 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cfe559ee-f3eb-417f-9281-9a50e9af6de3-client-ca podName:cfe559ee-f3eb-417f-9281-9a50e9af6de3 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:57.898943385 +0000 UTC m=+10.384212187 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cfe559ee-f3eb-417f-9281-9a50e9af6de3-client-ca") pod "controller-manager-7bdc948d9f-tqqj7" (UID: "cfe559ee-f3eb-417f-9281-9a50e9af6de3") : configmap "client-ca" not found
Mar 12 20:49:56.899463 master-0 kubenswrapper[7484]: E0312 20:49:56.899334 7484 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 12 20:49:56.899463 master-0 kubenswrapper[7484]: E0312 20:49:56.899358 7484 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 12 20:49:56.899463 master-0 kubenswrapper[7484]: E0312 20:49:56.899394 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/14b4689f-5630-461a-81a8-e8bb5a852259-client-ca podName:14b4689f-5630-461a-81a8-e8bb5a852259 nodeName:}" failed. No retries permitted until 2026-03-12 20:50:00.899386107 +0000 UTC m=+13.384654909 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/14b4689f-5630-461a-81a8-e8bb5a852259-client-ca") pod "route-controller-manager-7f8b99b9cb-tvsj5" (UID: "14b4689f-5630-461a-81a8-e8bb5a852259") : configmap "client-ca" not found
Mar 12 20:49:56.899463 master-0 kubenswrapper[7484]: E0312 20:49:56.899434 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14b4689f-5630-461a-81a8-e8bb5a852259-serving-cert podName:14b4689f-5630-461a-81a8-e8bb5a852259 nodeName:}" failed. No retries permitted until 2026-03-12 20:50:00.899401548 +0000 UTC m=+13.384670450 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/14b4689f-5630-461a-81a8-e8bb5a852259-serving-cert") pod "route-controller-manager-7f8b99b9cb-tvsj5" (UID: "14b4689f-5630-461a-81a8-e8bb5a852259") : secret "serving-cert" not found
Mar 12 20:49:56.899463 master-0 kubenswrapper[7484]: E0312 20:49:56.899461 7484 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 12 20:49:56.899679 master-0 kubenswrapper[7484]: E0312 20:49:56.899498 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cfe559ee-f3eb-417f-9281-9a50e9af6de3-serving-cert podName:cfe559ee-f3eb-417f-9281-9a50e9af6de3 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:57.89948958 +0000 UTC m=+10.384758512 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/cfe559ee-f3eb-417f-9281-9a50e9af6de3-serving-cert") pod "controller-manager-7bdc948d9f-tqqj7" (UID: "cfe559ee-f3eb-417f-9281-9a50e9af6de3") : secret "serving-cert" not found
Mar 12 20:49:56.967071 master-0 kubenswrapper[7484]: I0312 20:49:56.966569 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76"
Mar 12 20:49:57.975020 master-0 kubenswrapper[7484]: I0312 20:49:57.974785 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfe559ee-f3eb-417f-9281-9a50e9af6de3-serving-cert\") pod \"controller-manager-7bdc948d9f-tqqj7\" (UID: \"cfe559ee-f3eb-417f-9281-9a50e9af6de3\") " pod="openshift-controller-manager/controller-manager-7bdc948d9f-tqqj7"
Mar 12 20:49:57.975020 master-0 kubenswrapper[7484]: I0312 20:49:57.974912 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cfe559ee-f3eb-417f-9281-9a50e9af6de3-client-ca\") pod \"controller-manager-7bdc948d9f-tqqj7\" (UID: \"cfe559ee-f3eb-417f-9281-9a50e9af6de3\") " pod="openshift-controller-manager/controller-manager-7bdc948d9f-tqqj7"
Mar 12 20:49:57.976511 master-0 kubenswrapper[7484]: E0312 20:49:57.975116 7484 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 12 20:49:57.976511 master-0 kubenswrapper[7484]: E0312 20:49:57.975227 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cfe559ee-f3eb-417f-9281-9a50e9af6de3-serving-cert podName:cfe559ee-f3eb-417f-9281-9a50e9af6de3 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:59.975198122 +0000 UTC m=+12.460466964 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/cfe559ee-f3eb-417f-9281-9a50e9af6de3-serving-cert") pod "controller-manager-7bdc948d9f-tqqj7" (UID: "cfe559ee-f3eb-417f-9281-9a50e9af6de3") : secret "serving-cert" not found
Mar 12 20:49:57.976511 master-0 kubenswrapper[7484]: E0312 20:49:57.975775 7484 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 12 20:49:57.976511 master-0 kubenswrapper[7484]: E0312 20:49:57.975871 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cfe559ee-f3eb-417f-9281-9a50e9af6de3-client-ca podName:cfe559ee-f3eb-417f-9281-9a50e9af6de3 nodeName:}" failed. No retries permitted until 2026-03-12 20:49:59.97585232 +0000 UTC m=+12.461121212 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cfe559ee-f3eb-417f-9281-9a50e9af6de3-client-ca") pod "controller-manager-7bdc948d9f-tqqj7" (UID: "cfe559ee-f3eb-417f-9281-9a50e9af6de3") : configmap "client-ca" not found
Mar 12 20:49:59.979422 master-0 kubenswrapper[7484]: I0312 20:49:59.979028 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76"
Mar 12 20:50:00.010621 master-0 kubenswrapper[7484]: I0312 20:50:00.010532 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfe559ee-f3eb-417f-9281-9a50e9af6de3-serving-cert\") pod \"controller-manager-7bdc948d9f-tqqj7\" (UID: \"cfe559ee-f3eb-417f-9281-9a50e9af6de3\") " pod="openshift-controller-manager/controller-manager-7bdc948d9f-tqqj7"
Mar 12 20:50:00.010972 master-0 kubenswrapper[7484]: I0312 20:50:00.010909 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cfe559ee-f3eb-417f-9281-9a50e9af6de3-client-ca\") pod \"controller-manager-7bdc948d9f-tqqj7\" (UID: \"cfe559ee-f3eb-417f-9281-9a50e9af6de3\") " pod="openshift-controller-manager/controller-manager-7bdc948d9f-tqqj7"
Mar 12 20:50:00.011148 master-0 kubenswrapper[7484]: E0312 20:50:00.011089 7484 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 12 20:50:00.011433 master-0 kubenswrapper[7484]: E0312 20:50:00.011383 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cfe559ee-f3eb-417f-9281-9a50e9af6de3-client-ca podName:cfe559ee-f3eb-417f-9281-9a50e9af6de3 nodeName:}" failed. No retries permitted until 2026-03-12 20:50:04.01134196 +0000 UTC m=+16.496610792 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cfe559ee-f3eb-417f-9281-9a50e9af6de3-client-ca") pod "controller-manager-7bdc948d9f-tqqj7" (UID: "cfe559ee-f3eb-417f-9281-9a50e9af6de3") : configmap "client-ca" not found
Mar 12 20:50:00.019206 master-0 kubenswrapper[7484]: I0312 20:50:00.019120 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfe559ee-f3eb-417f-9281-9a50e9af6de3-serving-cert\") pod \"controller-manager-7bdc948d9f-tqqj7\" (UID: \"cfe559ee-f3eb-417f-9281-9a50e9af6de3\") " pod="openshift-controller-manager/controller-manager-7bdc948d9f-tqqj7"
Mar 12 20:50:00.059453 master-0 kubenswrapper[7484]: I0312 20:50:00.059365 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh" event={"ID":"226cb3a1-984f-4410-96e6-c007131dc074","Type":"ContainerStarted","Data":"01e107c0f774c1f8391b548269ef79446449d21fef49690cb86fce489a21f185"}
Mar 12 20:50:00.952241 master-0 kubenswrapper[7484]: I0312 20:50:00.951876 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14b4689f-5630-461a-81a8-e8bb5a852259-serving-cert\") pod \"route-controller-manager-7f8b99b9cb-tvsj5\" (UID: \"14b4689f-5630-461a-81a8-e8bb5a852259\") " pod="openshift-route-controller-manager/route-controller-manager-7f8b99b9cb-tvsj5"
Mar 12 20:50:00.952458 master-0 kubenswrapper[7484]: I0312 20:50:00.952254 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14b4689f-5630-461a-81a8-e8bb5a852259-client-ca\") pod \"route-controller-manager-7f8b99b9cb-tvsj5\" (UID: \"14b4689f-5630-461a-81a8-e8bb5a852259\") " pod="openshift-route-controller-manager/route-controller-manager-7f8b99b9cb-tvsj5"
Mar 12 20:50:00.952458 master-0 kubenswrapper[7484]: E0312 20:50:00.952120 7484 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 12 20:50:00.952524 master-0 kubenswrapper[7484]: E0312 20:50:00.952471 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14b4689f-5630-461a-81a8-e8bb5a852259-serving-cert podName:14b4689f-5630-461a-81a8-e8bb5a852259 nodeName:}" failed. No retries permitted until 2026-03-12 20:50:08.952432187 +0000 UTC m=+21.437701199 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/14b4689f-5630-461a-81a8-e8bb5a852259-serving-cert") pod "route-controller-manager-7f8b99b9cb-tvsj5" (UID: "14b4689f-5630-461a-81a8-e8bb5a852259") : secret "serving-cert" not found
Mar 12 20:50:00.952524 master-0 kubenswrapper[7484]: E0312 20:50:00.952513 7484 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 12 20:50:00.952632 master-0 kubenswrapper[7484]: E0312 20:50:00.952599 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/14b4689f-5630-461a-81a8-e8bb5a852259-client-ca podName:14b4689f-5630-461a-81a8-e8bb5a852259 nodeName:}" failed. No retries permitted until 2026-03-12 20:50:08.952572221 +0000 UTC m=+21.437841053 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/14b4689f-5630-461a-81a8-e8bb5a852259-client-ca") pod "route-controller-manager-7f8b99b9cb-tvsj5" (UID: "14b4689f-5630-461a-81a8-e8bb5a852259") : configmap "client-ca" not found
Mar 12 20:50:01.881358 master-0 kubenswrapper[7484]: I0312 20:50:01.881258 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-75bc5477df-fvl5w"]
Mar 12 20:50:01.882434 master-0 kubenswrapper[7484]: I0312 20:50:01.882342 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-75bc5477df-fvl5w"
Mar 12 20:50:01.886857 master-0 kubenswrapper[7484]: I0312 20:50:01.886764 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 12 20:50:01.890068 master-0 kubenswrapper[7484]: I0312 20:50:01.890009 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 12 20:50:01.890270 master-0 kubenswrapper[7484]: I0312 20:50:01.890146 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 12 20:50:01.890382 master-0 kubenswrapper[7484]: I0312 20:50:01.890281 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 12 20:50:01.890514 master-0 kubenswrapper[7484]: I0312 20:50:01.890402 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 12 20:50:01.890615 master-0 kubenswrapper[7484]: I0312 20:50:01.890561 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 12 20:50:01.890706 master-0 kubenswrapper[7484]: I0312 20:50:01.890620 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-0"
Mar 12 20:50:01.890844 master-0 kubenswrapper[7484]: I0312 20:50:01.890765 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-0"
Mar 12 20:50:01.890966 master-0 kubenswrapper[7484]: I0312 20:50:01.890868 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 12 20:50:01.897445 master-0 kubenswrapper[7484]: I0312 20:50:01.897396 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 12 20:50:01.907854 master-0 kubenswrapper[7484]: I0312 20:50:01.907764 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-75bc5477df-fvl5w"]
Mar 12 20:50:01.999398 master-0 kubenswrapper[7484]: I0312 20:50:01.999315 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-etcd-serving-ca\") pod \"apiserver-75bc5477df-fvl5w\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " pod="openshift-apiserver/apiserver-75bc5477df-fvl5w"
Mar 12 20:50:01.999398 master-0 kubenswrapper[7484]: I0312 20:50:01.999384 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-config\") pod \"apiserver-75bc5477df-fvl5w\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " pod="openshift-apiserver/apiserver-75bc5477df-fvl5w"
Mar 12 20:50:01.999398 master-0 kubenswrapper[7484]: I0312 20:50:01.999409 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-image-import-ca\") pod \"apiserver-75bc5477df-fvl5w\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " pod="openshift-apiserver/apiserver-75bc5477df-fvl5w"
Mar 12 20:50:01.999748 master-0 kubenswrapper[7484]: I0312 20:50:01.999471 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/06f651ec-cc35-4660-8f6a-657af4877ac0-encryption-config\") pod \"apiserver-75bc5477df-fvl5w\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " pod="openshift-apiserver/apiserver-75bc5477df-fvl5w"
Mar 12 20:50:01.999748 master-0 kubenswrapper[7484]: I0312 20:50:01.999492 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06f651ec-cc35-4660-8f6a-657af4877ac0-serving-cert\") pod \"apiserver-75bc5477df-fvl5w\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " pod="openshift-apiserver/apiserver-75bc5477df-fvl5w"
Mar 12 20:50:01.999748 master-0 kubenswrapper[7484]: I0312 20:50:01.999524 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/06f651ec-cc35-4660-8f6a-657af4877ac0-audit-dir\") pod \"apiserver-75bc5477df-fvl5w\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " pod="openshift-apiserver/apiserver-75bc5477df-fvl5w"
Mar 12 20:50:01.999748 master-0 kubenswrapper[7484]: I0312 20:50:01.999546 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/06f651ec-cc35-4660-8f6a-657af4877ac0-etcd-client\") pod \"apiserver-75bc5477df-fvl5w\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " pod="openshift-apiserver/apiserver-75bc5477df-fvl5w"
Mar 12 20:50:01.999748 master-0 kubenswrapper[7484]: I0312 20:50:01.999574 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-trusted-ca-bundle\") pod \"apiserver-75bc5477df-fvl5w\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " pod="openshift-apiserver/apiserver-75bc5477df-fvl5w"
Mar 12 20:50:01.999748 master-0 kubenswrapper[7484]: I0312 20:50:01.999593 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g92wv\" (UniqueName: \"kubernetes.io/projected/06f651ec-cc35-4660-8f6a-657af4877ac0-kube-api-access-g92wv\") pod \"apiserver-75bc5477df-fvl5w\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " pod="openshift-apiserver/apiserver-75bc5477df-fvl5w"
Mar 12 20:50:01.999748 master-0 kubenswrapper[7484]: I0312 20:50:01.999652 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/06f651ec-cc35-4660-8f6a-657af4877ac0-node-pullsecrets\") pod \"apiserver-75bc5477df-fvl5w\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " pod="openshift-apiserver/apiserver-75bc5477df-fvl5w"
Mar 12 20:50:01.999748 master-0 kubenswrapper[7484]: I0312 20:50:01.999735 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-audit\") pod \"apiserver-75bc5477df-fvl5w\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " pod="openshift-apiserver/apiserver-75bc5477df-fvl5w"
Mar 12 20:50:02.071751 master-0 kubenswrapper[7484]: I0312 20:50:02.071288 7484 generic.go:334] "Generic (PLEG): container finished" podID="980191fe-c62c-4b9e-879c-38fa8ce0a58b" containerID="304a25d963544d2c18d9e9c47ad4423b6984ff4ce290c819f6e1953a03bd9e6b" exitCode=0
Mar 12 20:50:02.071751 master-0 kubenswrapper[7484]: I0312 20:50:02.071350 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" event={"ID":"980191fe-c62c-4b9e-879c-38fa8ce0a58b","Type":"ContainerDied","Data":"304a25d963544d2c18d9e9c47ad4423b6984ff4ce290c819f6e1953a03bd9e6b"}
Mar 12 20:50:02.072070 master-0 kubenswrapper[7484]: I0312 20:50:02.071952 7484 scope.go:117] "RemoveContainer" containerID="304a25d963544d2c18d9e9c47ad4423b6984ff4ce290c819f6e1953a03bd9e6b"
Mar 12 20:50:02.102444 master-0 kubenswrapper[7484]: I0312 20:50:02.100480 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-audit\") pod \"apiserver-75bc5477df-fvl5w\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " pod="openshift-apiserver/apiserver-75bc5477df-fvl5w"
Mar 12 20:50:02.102444 master-0 kubenswrapper[7484]: I0312 20:50:02.100538 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-etcd-serving-ca\") pod \"apiserver-75bc5477df-fvl5w\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " pod="openshift-apiserver/apiserver-75bc5477df-fvl5w"
Mar 12 20:50:02.102444 master-0 kubenswrapper[7484]: I0312 20:50:02.100556 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-config\") pod \"apiserver-75bc5477df-fvl5w\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " pod="openshift-apiserver/apiserver-75bc5477df-fvl5w"
Mar 12 20:50:02.102444 master-0 kubenswrapper[7484]: I0312 20:50:02.100571 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-image-import-ca\") pod \"apiserver-75bc5477df-fvl5w\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " pod="openshift-apiserver/apiserver-75bc5477df-fvl5w"
Mar 12 20:50:02.102444 master-0 kubenswrapper[7484]: I0312 20:50:02.100613 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/06f651ec-cc35-4660-8f6a-657af4877ac0-encryption-config\") pod \"apiserver-75bc5477df-fvl5w\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " pod="openshift-apiserver/apiserver-75bc5477df-fvl5w"
Mar 12 20:50:02.102444 master-0 kubenswrapper[7484]: I0312 20:50:02.100627 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06f651ec-cc35-4660-8f6a-657af4877ac0-serving-cert\") pod \"apiserver-75bc5477df-fvl5w\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " pod="openshift-apiserver/apiserver-75bc5477df-fvl5w"
Mar 12 20:50:02.102444 master-0 kubenswrapper[7484]: I0312 20:50:02.100642 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/06f651ec-cc35-4660-8f6a-657af4877ac0-audit-dir\") pod \"apiserver-75bc5477df-fvl5w\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " pod="openshift-apiserver/apiserver-75bc5477df-fvl5w"
Mar 12 20:50:02.102444 master-0 kubenswrapper[7484]: I0312 20:50:02.100660 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/06f651ec-cc35-4660-8f6a-657af4877ac0-etcd-client\") pod \"apiserver-75bc5477df-fvl5w\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " pod="openshift-apiserver/apiserver-75bc5477df-fvl5w"
Mar 12 20:50:02.102444 master-0 kubenswrapper[7484]: I0312 20:50:02.100686 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-trusted-ca-bundle\") pod \"apiserver-75bc5477df-fvl5w\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " pod="openshift-apiserver/apiserver-75bc5477df-fvl5w"
Mar 12 20:50:02.102444 master-0 kubenswrapper[7484]: I0312 20:50:02.100703 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g92wv\" (UniqueName: \"kubernetes.io/projected/06f651ec-cc35-4660-8f6a-657af4877ac0-kube-api-access-g92wv\") pod \"apiserver-75bc5477df-fvl5w\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " pod="openshift-apiserver/apiserver-75bc5477df-fvl5w"
Mar 12 20:50:02.102444 master-0 kubenswrapper[7484]: I0312 20:50:02.100751 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/06f651ec-cc35-4660-8f6a-657af4877ac0-node-pullsecrets\") pod \"apiserver-75bc5477df-fvl5w\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " pod="openshift-apiserver/apiserver-75bc5477df-fvl5w"
Mar 12 20:50:02.102444 master-0 kubenswrapper[7484]: E0312 20:50:02.100891 7484 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Mar 12 20:50:02.102444 master-0 kubenswrapper[7484]: E0312 20:50:02.100939 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-audit podName:06f651ec-cc35-4660-8f6a-657af4877ac0 nodeName:}" failed. No retries permitted until 2026-03-12 20:50:02.600924607 +0000 UTC m=+15.086193399 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-audit") pod "apiserver-75bc5477df-fvl5w" (UID: "06f651ec-cc35-4660-8f6a-657af4877ac0") : configmap "audit-0" not found
Mar 12 20:50:02.102444 master-0 kubenswrapper[7484]: I0312 20:50:02.101412 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/06f651ec-cc35-4660-8f6a-657af4877ac0-node-pullsecrets\") pod \"apiserver-75bc5477df-fvl5w\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " pod="openshift-apiserver/apiserver-75bc5477df-fvl5w"
Mar 12 20:50:02.102444 master-0 kubenswrapper[7484]: I0312 20:50:02.101462 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/06f651ec-cc35-4660-8f6a-657af4877ac0-audit-dir\") pod \"apiserver-75bc5477df-fvl5w\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " pod="openshift-apiserver/apiserver-75bc5477df-fvl5w"
Mar 12 20:50:02.102444 master-0 kubenswrapper[7484]: I0312 20:50:02.101763 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-etcd-serving-ca\") pod \"apiserver-75bc5477df-fvl5w\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " pod="openshift-apiserver/apiserver-75bc5477df-fvl5w"
Mar 12 20:50:02.102444 master-0 kubenswrapper[7484]: I0312 20:50:02.102012 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-config\") pod \"apiserver-75bc5477df-fvl5w\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " pod="openshift-apiserver/apiserver-75bc5477df-fvl5w"
Mar 12 20:50:02.102444 master-0 kubenswrapper[7484]: I0312 20:50:02.102380 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-image-import-ca\") pod \"apiserver-75bc5477df-fvl5w\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " pod="openshift-apiserver/apiserver-75bc5477df-fvl5w"
Mar 12 20:50:02.104315 master-0 kubenswrapper[7484]: I0312 20:50:02.103359 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-trusted-ca-bundle\") pod \"apiserver-75bc5477df-fvl5w\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " pod="openshift-apiserver/apiserver-75bc5477df-fvl5w"
Mar 12 20:50:02.105921 master-0 kubenswrapper[7484]: I0312 20:50:02.105544 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/06f651ec-cc35-4660-8f6a-657af4877ac0-etcd-client\") pod \"apiserver-75bc5477df-fvl5w\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " pod="openshift-apiserver/apiserver-75bc5477df-fvl5w"
Mar 12 20:50:02.105921 master-0 kubenswrapper[7484]: I0312 20:50:02.105877 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/06f651ec-cc35-4660-8f6a-657af4877ac0-encryption-config\") pod \"apiserver-75bc5477df-fvl5w\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " pod="openshift-apiserver/apiserver-75bc5477df-fvl5w"
Mar 12 20:50:02.112102 master-0 kubenswrapper[7484]: I0312 20:50:02.112058 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06f651ec-cc35-4660-8f6a-657af4877ac0-serving-cert\") pod \"apiserver-75bc5477df-fvl5w\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " pod="openshift-apiserver/apiserver-75bc5477df-fvl5w"
Mar 12 20:50:02.120286 master-0 kubenswrapper[7484]: I0312 20:50:02.120220 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g92wv\" (UniqueName: \"kubernetes.io/projected/06f651ec-cc35-4660-8f6a-657af4877ac0-kube-api-access-g92wv\") pod \"apiserver-75bc5477df-fvl5w\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " pod="openshift-apiserver/apiserver-75bc5477df-fvl5w"
Mar 12 20:50:02.322985 master-0 kubenswrapper[7484]: I0312 20:50:02.322709 7484 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76"
Mar 12 20:50:02.606951 master-0 kubenswrapper[7484]: I0312 20:50:02.606863 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-audit\") pod \"apiserver-75bc5477df-fvl5w\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " pod="openshift-apiserver/apiserver-75bc5477df-fvl5w"
Mar 12 20:50:02.607270 master-0 kubenswrapper[7484]: E0312 20:50:02.607203 7484 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Mar 12 20:50:02.607417 master-0 kubenswrapper[7484]: E0312 20:50:02.607377 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-audit podName:06f651ec-cc35-4660-8f6a-657af4877ac0 nodeName:}" failed. No retries permitted until 2026-03-12 20:50:03.607337161 +0000 UTC m=+16.092606143 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-audit") pod "apiserver-75bc5477df-fvl5w" (UID: "06f651ec-cc35-4660-8f6a-657af4877ac0") : configmap "audit-0" not found
Mar 12 20:50:02.967585 master-0 kubenswrapper[7484]: I0312 20:50:02.967408 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76"
Mar 12 20:50:03.078256 master-0 kubenswrapper[7484]: I0312 20:50:03.078182 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" event={"ID":"980191fe-c62c-4b9e-879c-38fa8ce0a58b","Type":"ContainerStarted","Data":"9fe9854a1e57408e0f50e0954b9dd49841bab1b9d1e76d61252c031948eff8b1"}
Mar 12 20:50:03.078658 master-0 kubenswrapper[7484]: I0312 20:50:03.078607 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76"
Mar 12 20:50:03.621113 master-0 kubenswrapper[7484]: I0312 20:50:03.620683 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-audit\") pod \"apiserver-75bc5477df-fvl5w\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " pod="openshift-apiserver/apiserver-75bc5477df-fvl5w"
Mar 12 20:50:03.621113 master-0 kubenswrapper[7484]: E0312 20:50:03.620973 7484 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Mar 12 20:50:03.621462 master-0 kubenswrapper[7484]: E0312 20:50:03.621191 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-audit podName:06f651ec-cc35-4660-8f6a-657af4877ac0 nodeName:}" failed. No retries permitted until 2026-03-12 20:50:05.621170718 +0000 UTC m=+18.106439540 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-audit") pod "apiserver-75bc5477df-fvl5w" (UID: "06f651ec-cc35-4660-8f6a-657af4877ac0") : configmap "audit-0" not found
Mar 12 20:50:04.028053 master-0 kubenswrapper[7484]: I0312 20:50:04.027971 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cfe559ee-f3eb-417f-9281-9a50e9af6de3-client-ca\") pod \"controller-manager-7bdc948d9f-tqqj7\" (UID: \"cfe559ee-f3eb-417f-9281-9a50e9af6de3\") " pod="openshift-controller-manager/controller-manager-7bdc948d9f-tqqj7"
Mar 12 20:50:04.029036 master-0 kubenswrapper[7484]: E0312 20:50:04.028286 7484 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 12 20:50:04.029036 master-0 kubenswrapper[7484]: E0312 20:50:04.028439 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cfe559ee-f3eb-417f-9281-9a50e9af6de3-client-ca podName:cfe559ee-f3eb-417f-9281-9a50e9af6de3 nodeName:}" failed. No retries permitted until 2026-03-12 20:50:12.028395925 +0000 UTC m=+24.513664897 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cfe559ee-f3eb-417f-9281-9a50e9af6de3-client-ca") pod "controller-manager-7bdc948d9f-tqqj7" (UID: "cfe559ee-f3eb-417f-9281-9a50e9af6de3") : configmap "client-ca" not found Mar 12 20:50:04.638441 master-0 kubenswrapper[7484]: I0312 20:50:04.638366 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" Mar 12 20:50:04.638736 master-0 kubenswrapper[7484]: I0312 20:50:04.638454 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5" Mar 12 20:50:04.638736 master-0 kubenswrapper[7484]: I0312 20:50:04.638486 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs\") pod \"multus-admission-controller-8d675b596-98j9w\" (UID: \"f8f4400c-474c-480f-b46c-cf7c80555004\") " pod="openshift-multus/multus-admission-controller-8d675b596-98j9w" Mar 12 20:50:04.638736 master-0 kubenswrapper[7484]: I0312 20:50:04.638512 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert\") pod \"olm-operator-d64cfc9db-q9hnk\" (UID: \"07330030-487d-4fa6-b5c3-67607355bbba\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk" Mar 12 20:50:04.638736 master-0 kubenswrapper[7484]: I0312 20:50:04.638540 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl" Mar 12 20:50:04.638736 master-0 kubenswrapper[7484]: I0312 20:50:04.638564 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" Mar 12 20:50:04.638736 master-0 kubenswrapper[7484]: I0312 20:50:04.638594 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" Mar 12 20:50:04.638736 master-0 kubenswrapper[7484]: I0312 20:50:04.638647 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls\") pod \"dns-operator-589895fbb7-tvrxp\" (UID: \"855747e5-d9b4-4eef-8bc4-425d6a8e95c7\") " pod="openshift-dns-operator/dns-operator-589895fbb7-tvrxp" Mar 12 20:50:04.638736 master-0 kubenswrapper[7484]: I0312 20:50:04.638723 7484 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-hxqgw\" (UID: \"e624e623-6d59-444d-b548-165fa5fd2581\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" Mar 12 20:50:04.638993 master-0 kubenswrapper[7484]: I0312 20:50:04.638763 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-cdcc8\" (UID: \"54184647-6e9a-43f7-90b1-5d8815f8b1ab\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8" Mar 12 20:50:04.639398 master-0 kubenswrapper[7484]: E0312 20:50:04.639310 7484 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 12 20:50:04.639550 master-0 kubenswrapper[7484]: E0312 20:50:04.639513 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs podName:f8f4400c-474c-480f-b46c-cf7c80555004 nodeName:}" failed. No retries permitted until 2026-03-12 20:50:20.639460563 +0000 UTC m=+33.124729545 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs") pod "multus-admission-controller-8d675b596-98j9w" (UID: "f8f4400c-474c-480f-b46c-cf7c80555004") : secret "multus-admission-controller-secret" not found Mar 12 20:50:04.639666 master-0 kubenswrapper[7484]: E0312 20:50:04.639600 7484 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 12 20:50:04.639786 master-0 kubenswrapper[7484]: E0312 20:50:04.639727 7484 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 12 20:50:04.639850 master-0 kubenswrapper[7484]: E0312 20:50:04.639782 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert podName:54184647-6e9a-43f7-90b1-5d8815f8b1ab nodeName:}" failed. No retries permitted until 2026-03-12 20:50:20.639740432 +0000 UTC m=+33.125009274 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-cdcc8" (UID: "54184647-6e9a-43f7-90b1-5d8815f8b1ab") : secret "package-server-manager-serving-cert" not found Mar 12 20:50:04.639909 master-0 kubenswrapper[7484]: E0312 20:50:04.639876 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert podName:07330030-487d-4fa6-b5c3-67607355bbba nodeName:}" failed. No retries permitted until 2026-03-12 20:50:20.639845955 +0000 UTC m=+33.125114897 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert") pod "olm-operator-d64cfc9db-q9hnk" (UID: "07330030-487d-4fa6-b5c3-67607355bbba") : secret "olm-operator-serving-cert" not found Mar 12 20:50:04.640090 master-0 kubenswrapper[7484]: E0312 20:50:04.640047 7484 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 12 20:50:04.640126 master-0 kubenswrapper[7484]: I0312 20:50:04.640097 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs\") pod \"network-metrics-daemon-brdcd\" (UID: \"c8660437-633f-4132-8a61-fe998abb493e\") " pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 20:50:04.640154 master-0 kubenswrapper[7484]: E0312 20:50:04.640124 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics podName:e624e623-6d59-444d-b548-165fa5fd2581 nodeName:}" failed. No retries permitted until 2026-03-12 20:50:20.640106172 +0000 UTC m=+33.125375014 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-hxqgw" (UID: "e624e623-6d59-444d-b548-165fa5fd2581") : secret "marketplace-operator-metrics" not found Mar 12 20:50:04.640236 master-0 kubenswrapper[7484]: I0312 20:50:04.640204 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert\") pod \"catalog-operator-7d9c49f57b-tpvl4\" (UID: \"98d99166-c42a-4169-87e8-4209570aec50\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4" Mar 12 20:50:04.640268 master-0 kubenswrapper[7484]: E0312 20:50:04.640237 7484 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 12 20:50:04.640296 master-0 kubenswrapper[7484]: I0312 20:50:04.640268 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-j9tpt\" (UID: \"02649264-040a-41a6-9a41-8bf6416c68ff\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt" Mar 12 20:50:04.640347 master-0 kubenswrapper[7484]: E0312 20:50:04.640330 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs podName:c8660437-633f-4132-8a61-fe998abb493e nodeName:}" failed. No retries permitted until 2026-03-12 20:50:20.640298498 +0000 UTC m=+33.125567540 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs") pod "network-metrics-daemon-brdcd" (UID: "c8660437-633f-4132-8a61-fe998abb493e") : secret "metrics-daemon-secret" not found Mar 12 20:50:04.640485 master-0 kubenswrapper[7484]: E0312 20:50:04.640448 7484 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 12 20:50:04.640531 master-0 kubenswrapper[7484]: E0312 20:50:04.640508 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls podName:02649264-040a-41a6-9a41-8bf6416c68ff nodeName:}" failed. No retries permitted until 2026-03-12 20:50:20.640494303 +0000 UTC m=+33.125763145 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-j9tpt" (UID: "02649264-040a-41a6-9a41-8bf6416c68ff") : secret "cluster-monitoring-operator-tls" not found Mar 12 20:50:04.640571 master-0 kubenswrapper[7484]: E0312 20:50:04.640449 7484 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 12 20:50:04.640625 master-0 kubenswrapper[7484]: E0312 20:50:04.640599 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert podName:98d99166-c42a-4169-87e8-4209570aec50 nodeName:}" failed. No retries permitted until 2026-03-12 20:50:20.640578765 +0000 UTC m=+33.125847607 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert") pod "catalog-operator-7d9c49f57b-tpvl4" (UID: "98d99166-c42a-4169-87e8-4209570aec50") : secret "catalog-operator-serving-cert" not found Mar 12 20:50:04.645978 master-0 kubenswrapper[7484]: I0312 20:50:04.645926 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5" Mar 12 20:50:04.646072 master-0 kubenswrapper[7484]: I0312 20:50:04.646041 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert\") pod \"cluster-version-operator-745944c6b7-wddgl\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl" Mar 12 20:50:04.646473 master-0 kubenswrapper[7484]: I0312 20:50:04.646415 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" Mar 12 20:50:04.646787 master-0 kubenswrapper[7484]: I0312 20:50:04.646728 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" Mar 12 20:50:04.648107 master-0 kubenswrapper[7484]: I0312 20:50:04.648062 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" Mar 12 20:50:04.651913 master-0 kubenswrapper[7484]: I0312 20:50:04.651770 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls\") pod \"dns-operator-589895fbb7-tvrxp\" (UID: \"855747e5-d9b4-4eef-8bc4-425d6a8e95c7\") " pod="openshift-dns-operator/dns-operator-589895fbb7-tvrxp" Mar 12 20:50:04.865571 master-0 kubenswrapper[7484]: I0312 20:50:04.865502 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-589895fbb7-tvrxp" Mar 12 20:50:04.865894 master-0 kubenswrapper[7484]: I0312 20:50:04.865853 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" Mar 12 20:50:04.873882 master-0 kubenswrapper[7484]: I0312 20:50:04.867145 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5" Mar 12 20:50:04.873882 master-0 kubenswrapper[7484]: I0312 20:50:04.867796 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl" Mar 12 20:50:04.873882 master-0 kubenswrapper[7484]: I0312 20:50:04.870400 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" Mar 12 20:50:04.936659 master-0 kubenswrapper[7484]: W0312 20:50:04.936317 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a307172_f010_4bad_a3fc_31607574b069.slice/crio-a8cc5f9e5cee5d74f6994e756dde73b1668f4705c942563115821df2efd277cf WatchSource:0}: Error finding container a8cc5f9e5cee5d74f6994e756dde73b1668f4705c942563115821df2efd277cf: Status 404 returned error can't find the container with id a8cc5f9e5cee5d74f6994e756dde73b1668f4705c942563115821df2efd277cf Mar 12 20:50:05.117427 master-0 kubenswrapper[7484]: I0312 20:50:05.096881 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl" event={"ID":"1a307172-f010-4bad-a3fc-31607574b069","Type":"ContainerStarted","Data":"a8cc5f9e5cee5d74f6994e756dde73b1668f4705c942563115821df2efd277cf"} Mar 12 20:50:05.117427 master-0 kubenswrapper[7484]: I0312 20:50:05.101107 7484 generic.go:334] "Generic (PLEG): container finished" podID="226cb3a1-984f-4410-96e6-c007131dc074" containerID="01e107c0f774c1f8391b548269ef79446449d21fef49690cb86fce489a21f185" exitCode=0 Mar 12 20:50:05.117427 master-0 kubenswrapper[7484]: I0312 20:50:05.101220 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh" event={"ID":"226cb3a1-984f-4410-96e6-c007131dc074","Type":"ContainerDied","Data":"01e107c0f774c1f8391b548269ef79446449d21fef49690cb86fce489a21f185"} Mar 12 20:50:05.117427 master-0 kubenswrapper[7484]: I0312 20:50:05.102305 7484 scope.go:117] "RemoveContainer" containerID="01e107c0f774c1f8391b548269ef79446449d21fef49690cb86fce489a21f185" Mar 12 20:50:05.163380 master-0 kubenswrapper[7484]: I0312 20:50:05.162686 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-dns-operator/dns-operator-589895fbb7-tvrxp"] Mar 12 20:50:05.185916 master-0 kubenswrapper[7484]: I0312 20:50:05.185469 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9"] Mar 12 20:50:05.264319 master-0 kubenswrapper[7484]: I0312 20:50:05.264280 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-677db989d6-qpf68"] Mar 12 20:50:05.276732 master-0 kubenswrapper[7484]: W0312 20:50:05.276660 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b71f537_1cc2_4645_8e50_23941635457c.slice/crio-6919d90a2e2669ba0985487b4cab45d215f7a919ba3e052db5e778a615204f87 WatchSource:0}: Error finding container 6919d90a2e2669ba0985487b4cab45d215f7a919ba3e052db5e778a615204f87: Status 404 returned error can't find the container with id 6919d90a2e2669ba0985487b4cab45d215f7a919ba3e052db5e778a615204f87 Mar 12 20:50:05.365872 master-0 kubenswrapper[7484]: I0312 20:50:05.365040 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5"] Mar 12 20:50:05.377538 master-0 kubenswrapper[7484]: W0312 20:50:05.377468 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod900228dd_2d21_4759_87da_b027b0134ad8.slice/crio-369b6220e099e8fc73df11fb51225951b71880fdba54a4afd54d65d778f6257a WatchSource:0}: Error finding container 369b6220e099e8fc73df11fb51225951b71880fdba54a4afd54d65d778f6257a: Status 404 returned error can't find the container with id 369b6220e099e8fc73df11fb51225951b71880fdba54a4afd54d65d778f6257a Mar 12 20:50:05.698740 master-0 kubenswrapper[7484]: I0312 20:50:05.698648 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-audit\") pod \"apiserver-75bc5477df-fvl5w\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " pod="openshift-apiserver/apiserver-75bc5477df-fvl5w" Mar 12 20:50:05.699086 master-0 kubenswrapper[7484]: E0312 20:50:05.698949 7484 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 12 20:50:05.699086 master-0 kubenswrapper[7484]: E0312 20:50:05.699037 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-audit podName:06f651ec-cc35-4660-8f6a-657af4877ac0 nodeName:}" failed. No retries permitted until 2026-03-12 20:50:09.699011735 +0000 UTC m=+22.184280537 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-audit") pod "apiserver-75bc5477df-fvl5w" (UID: "06f651ec-cc35-4660-8f6a-657af4877ac0") : configmap "audit-0" not found Mar 12 20:50:05.718662 master-0 kubenswrapper[7484]: I0312 20:50:05.716305 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-75bc5477df-fvl5w"] Mar 12 20:50:05.718662 master-0 kubenswrapper[7484]: E0312 20:50:05.716690 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[audit], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-apiserver/apiserver-75bc5477df-fvl5w" podUID="06f651ec-cc35-4660-8f6a-657af4877ac0" Mar 12 20:50:05.973361 master-0 kubenswrapper[7484]: I0312 20:50:05.973218 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" Mar 12 20:50:06.108962 master-0 kubenswrapper[7484]: I0312 20:50:06.108893 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5" 
event={"ID":"900228dd-2d21-4759-87da-b027b0134ad8","Type":"ContainerStarted","Data":"369b6220e099e8fc73df11fb51225951b71880fdba54a4afd54d65d778f6257a"} Mar 12 20:50:06.113164 master-0 kubenswrapper[7484]: I0312 20:50:06.113127 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh" event={"ID":"226cb3a1-984f-4410-96e6-c007131dc074","Type":"ContainerStarted","Data":"bd647ed768dc3b1c577a2e60500ea1b4e6063ec0776cd15c9345ee26565e55c6"} Mar 12 20:50:06.115029 master-0 kubenswrapper[7484]: I0312 20:50:06.114964 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" event={"ID":"2b71f537-1cc2-4645-8e50-23941635457c","Type":"ContainerStarted","Data":"6919d90a2e2669ba0985487b4cab45d215f7a919ba3e052db5e778a615204f87"} Mar 12 20:50:06.116368 master-0 kubenswrapper[7484]: I0312 20:50:06.116325 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" event={"ID":"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9","Type":"ContainerStarted","Data":"2fe791136ae6341fcef221b6feb3d2b2b4ae3ce3632fb3ef2ce720ffd2630304"} Mar 12 20:50:06.118552 master-0 kubenswrapper[7484]: I0312 20:50:06.118509 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-589895fbb7-tvrxp" event={"ID":"855747e5-d9b4-4eef-8bc4-425d6a8e95c7","Type":"ContainerStarted","Data":"aa41b0d7c32641cd054893d0403c77199788601eccf56bdc2a5e82822618fbea"} Mar 12 20:50:06.119530 master-0 kubenswrapper[7484]: I0312 20:50:06.118590 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-75bc5477df-fvl5w" Mar 12 20:50:06.129667 master-0 kubenswrapper[7484]: I0312 20:50:06.129619 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-75bc5477df-fvl5w" Mar 12 20:50:06.309435 master-0 kubenswrapper[7484]: I0312 20:50:06.309307 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/06f651ec-cc35-4660-8f6a-657af4877ac0-encryption-config\") pod \"06f651ec-cc35-4660-8f6a-657af4877ac0\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " Mar 12 20:50:06.309435 master-0 kubenswrapper[7484]: I0312 20:50:06.309374 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/06f651ec-cc35-4660-8f6a-657af4877ac0-node-pullsecrets\") pod \"06f651ec-cc35-4660-8f6a-657af4877ac0\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " Mar 12 20:50:06.309435 master-0 kubenswrapper[7484]: I0312 20:50:06.309414 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-trusted-ca-bundle\") pod \"06f651ec-cc35-4660-8f6a-657af4877ac0\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " Mar 12 20:50:06.309435 master-0 kubenswrapper[7484]: I0312 20:50:06.309443 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/06f651ec-cc35-4660-8f6a-657af4877ac0-etcd-client\") pod \"06f651ec-cc35-4660-8f6a-657af4877ac0\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " Mar 12 20:50:06.309800 master-0 kubenswrapper[7484]: I0312 20:50:06.309498 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-etcd-serving-ca\") pod \"06f651ec-cc35-4660-8f6a-657af4877ac0\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " Mar 12 20:50:06.309800 master-0 kubenswrapper[7484]: I0312 20:50:06.309523 7484 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/06f651ec-cc35-4660-8f6a-657af4877ac0-audit-dir\") pod \"06f651ec-cc35-4660-8f6a-657af4877ac0\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " Mar 12 20:50:06.309800 master-0 kubenswrapper[7484]: I0312 20:50:06.309545 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06f651ec-cc35-4660-8f6a-657af4877ac0-serving-cert\") pod \"06f651ec-cc35-4660-8f6a-657af4877ac0\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " Mar 12 20:50:06.309800 master-0 kubenswrapper[7484]: I0312 20:50:06.309575 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g92wv\" (UniqueName: \"kubernetes.io/projected/06f651ec-cc35-4660-8f6a-657af4877ac0-kube-api-access-g92wv\") pod \"06f651ec-cc35-4660-8f6a-657af4877ac0\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " Mar 12 20:50:06.309800 master-0 kubenswrapper[7484]: I0312 20:50:06.309598 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-image-import-ca\") pod \"06f651ec-cc35-4660-8f6a-657af4877ac0\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " Mar 12 20:50:06.309800 master-0 kubenswrapper[7484]: I0312 20:50:06.309622 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-config\") pod \"06f651ec-cc35-4660-8f6a-657af4877ac0\" (UID: \"06f651ec-cc35-4660-8f6a-657af4877ac0\") " Mar 12 20:50:06.310759 master-0 kubenswrapper[7484]: I0312 20:50:06.310697 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-image-import-ca" (OuterVolumeSpecName: 
"image-import-ca") pod "06f651ec-cc35-4660-8f6a-657af4877ac0" (UID: "06f651ec-cc35-4660-8f6a-657af4877ac0"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 20:50:06.310890 master-0 kubenswrapper[7484]: I0312 20:50:06.310822 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "06f651ec-cc35-4660-8f6a-657af4877ac0" (UID: "06f651ec-cc35-4660-8f6a-657af4877ac0"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 20:50:06.310890 master-0 kubenswrapper[7484]: I0312 20:50:06.310871 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06f651ec-cc35-4660-8f6a-657af4877ac0-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "06f651ec-cc35-4660-8f6a-657af4877ac0" (UID: "06f651ec-cc35-4660-8f6a-657af4877ac0"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 20:50:06.310980 master-0 kubenswrapper[7484]: I0312 20:50:06.310920 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06f651ec-cc35-4660-8f6a-657af4877ac0-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "06f651ec-cc35-4660-8f6a-657af4877ac0" (UID: "06f651ec-cc35-4660-8f6a-657af4877ac0"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 20:50:06.311036 master-0 kubenswrapper[7484]: I0312 20:50:06.311009 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-config" (OuterVolumeSpecName: "config") pod "06f651ec-cc35-4660-8f6a-657af4877ac0" (UID: "06f651ec-cc35-4660-8f6a-657af4877ac0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 20:50:06.311415 master-0 kubenswrapper[7484]: I0312 20:50:06.311344 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "06f651ec-cc35-4660-8f6a-657af4877ac0" (UID: "06f651ec-cc35-4660-8f6a-657af4877ac0"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 20:50:06.335266 master-0 kubenswrapper[7484]: I0312 20:50:06.334747 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06f651ec-cc35-4660-8f6a-657af4877ac0-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "06f651ec-cc35-4660-8f6a-657af4877ac0" (UID: "06f651ec-cc35-4660-8f6a-657af4877ac0"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 20:50:06.335425 master-0 kubenswrapper[7484]: I0312 20:50:06.334774 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06f651ec-cc35-4660-8f6a-657af4877ac0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "06f651ec-cc35-4660-8f6a-657af4877ac0" (UID: "06f651ec-cc35-4660-8f6a-657af4877ac0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 20:50:06.335425 master-0 kubenswrapper[7484]: I0312 20:50:06.335140 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06f651ec-cc35-4660-8f6a-657af4877ac0-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "06f651ec-cc35-4660-8f6a-657af4877ac0" (UID: "06f651ec-cc35-4660-8f6a-657af4877ac0"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 20:50:06.335850 master-0 kubenswrapper[7484]: I0312 20:50:06.335773 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06f651ec-cc35-4660-8f6a-657af4877ac0-kube-api-access-g92wv" (OuterVolumeSpecName: "kube-api-access-g92wv") pod "06f651ec-cc35-4660-8f6a-657af4877ac0" (UID: "06f651ec-cc35-4660-8f6a-657af4877ac0"). InnerVolumeSpecName "kube-api-access-g92wv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 20:50:06.410956 master-0 kubenswrapper[7484]: I0312 20:50:06.410886 7484 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-etcd-serving-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 20:50:06.410956 master-0 kubenswrapper[7484]: I0312 20:50:06.410931 7484 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/06f651ec-cc35-4660-8f6a-657af4877ac0-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 20:50:06.410956 master-0 kubenswrapper[7484]: I0312 20:50:06.410943 7484 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06f651ec-cc35-4660-8f6a-657af4877ac0-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 20:50:06.410956 master-0 kubenswrapper[7484]: I0312 20:50:06.410955 7484 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g92wv\" (UniqueName: \"kubernetes.io/projected/06f651ec-cc35-4660-8f6a-657af4877ac0-kube-api-access-g92wv\") on node \"master-0\" DevicePath \"\"" Mar 12 20:50:06.410956 master-0 kubenswrapper[7484]: I0312 20:50:06.410967 7484 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-image-import-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 20:50:06.410956 master-0 
kubenswrapper[7484]: I0312 20:50:06.410977 7484 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-config\") on node \"master-0\" DevicePath \"\"" Mar 12 20:50:06.410956 master-0 kubenswrapper[7484]: I0312 20:50:06.410987 7484 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/06f651ec-cc35-4660-8f6a-657af4877ac0-encryption-config\") on node \"master-0\" DevicePath \"\"" Mar 12 20:50:06.411588 master-0 kubenswrapper[7484]: I0312 20:50:06.410999 7484 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/06f651ec-cc35-4660-8f6a-657af4877ac0-node-pullsecrets\") on node \"master-0\" DevicePath \"\"" Mar 12 20:50:06.411588 master-0 kubenswrapper[7484]: I0312 20:50:06.411010 7484 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 20:50:06.411588 master-0 kubenswrapper[7484]: I0312 20:50:06.411022 7484 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/06f651ec-cc35-4660-8f6a-657af4877ac0-etcd-client\") on node \"master-0\" DevicePath \"\"" Mar 12 20:50:07.125153 master-0 kubenswrapper[7484]: I0312 20:50:07.125106 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-75bc5477df-fvl5w" Mar 12 20:50:07.125153 master-0 kubenswrapper[7484]: I0312 20:50:07.125089 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-krpjj" event={"ID":"617f0f9c-50d5-4214-b30f-5110fd4399ec","Type":"ContainerStarted","Data":"b78f8d3de0899faf453ad10334d0dbda8ca202f31c7e14a6105f0e777b6fb32d"} Mar 12 20:50:07.179476 master-0 kubenswrapper[7484]: I0312 20:50:07.176895 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-84fb785f4-kl52q"] Mar 12 20:50:07.179476 master-0 kubenswrapper[7484]: I0312 20:50:07.179033 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:07.189249 master-0 kubenswrapper[7484]: I0312 20:50:07.184972 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 12 20:50:07.189249 master-0 kubenswrapper[7484]: I0312 20:50:07.185017 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 12 20:50:07.189249 master-0 kubenswrapper[7484]: I0312 20:50:07.185443 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 12 20:50:07.189249 master-0 kubenswrapper[7484]: I0312 20:50:07.185733 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 12 20:50:07.189249 master-0 kubenswrapper[7484]: I0312 20:50:07.185877 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 12 20:50:07.189249 master-0 kubenswrapper[7484]: I0312 20:50:07.186056 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 12 20:50:07.189249 master-0 kubenswrapper[7484]: I0312 20:50:07.188454 7484 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 12 20:50:07.189249 master-0 kubenswrapper[7484]: I0312 20:50:07.188723 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 12 20:50:07.189249 master-0 kubenswrapper[7484]: I0312 20:50:07.188835 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 12 20:50:07.198495 master-0 kubenswrapper[7484]: I0312 20:50:07.198446 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 12 20:50:07.199626 master-0 kubenswrapper[7484]: I0312 20:50:07.199583 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-75bc5477df-fvl5w"] Mar 12 20:50:07.205875 master-0 kubenswrapper[7484]: I0312 20:50:07.200628 7484 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-75bc5477df-fvl5w"] Mar 12 20:50:07.205875 master-0 kubenswrapper[7484]: I0312 20:50:07.201591 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-84fb785f4-kl52q"] Mar 12 20:50:07.224476 master-0 kubenswrapper[7484]: I0312 20:50:07.223339 7484 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/06f651ec-cc35-4660-8f6a-657af4877ac0-audit\") on node \"master-0\" DevicePath \"\"" Mar 12 20:50:07.324098 master-0 kubenswrapper[7484]: I0312 20:50:07.324039 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-node-pullsecrets\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:07.324319 master-0 kubenswrapper[7484]: I0312 20:50:07.324127 7484 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-etcd-serving-ca\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:07.324319 master-0 kubenswrapper[7484]: I0312 20:50:07.324158 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-encryption-config\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:07.324319 master-0 kubenswrapper[7484]: I0312 20:50:07.324224 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-audit\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:07.324319 master-0 kubenswrapper[7484]: I0312 20:50:07.324268 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-image-import-ca\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:07.324319 master-0 kubenswrapper[7484]: I0312 20:50:07.324290 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-trusted-ca-bundle\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " 
pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:07.324319 master-0 kubenswrapper[7484]: I0312 20:50:07.324311 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-audit-dir\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:07.324641 master-0 kubenswrapper[7484]: I0312 20:50:07.324355 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqhhz\" (UniqueName: \"kubernetes.io/projected/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-kube-api-access-qqhhz\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:07.324641 master-0 kubenswrapper[7484]: I0312 20:50:07.324450 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-etcd-client\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:07.324641 master-0 kubenswrapper[7484]: I0312 20:50:07.324483 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-config\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:07.324641 master-0 kubenswrapper[7484]: I0312 20:50:07.324538 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-serving-cert\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:07.425238 master-0 kubenswrapper[7484]: I0312 20:50:07.425121 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-config\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:07.425425 master-0 kubenswrapper[7484]: I0312 20:50:07.425378 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-serving-cert\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:07.425960 master-0 kubenswrapper[7484]: I0312 20:50:07.425880 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-node-pullsecrets\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:07.426027 master-0 kubenswrapper[7484]: I0312 20:50:07.425988 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-node-pullsecrets\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:07.426068 master-0 kubenswrapper[7484]: I0312 20:50:07.426043 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-etcd-serving-ca\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:07.426244 master-0 kubenswrapper[7484]: I0312 20:50:07.426210 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-encryption-config\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:07.426426 master-0 kubenswrapper[7484]: I0312 20:50:07.426392 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-config\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:07.426426 master-0 kubenswrapper[7484]: I0312 20:50:07.426411 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-audit\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:07.426500 master-0 kubenswrapper[7484]: I0312 20:50:07.426450 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-image-import-ca\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:07.426500 master-0 kubenswrapper[7484]: I0312 20:50:07.426474 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-trusted-ca-bundle\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:07.426561 master-0 kubenswrapper[7484]: I0312 20:50:07.426517 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-audit-dir\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:07.426996 master-0 kubenswrapper[7484]: I0312 20:50:07.426951 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-etcd-serving-ca\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:07.427104 master-0 kubenswrapper[7484]: I0312 20:50:07.427061 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqhhz\" (UniqueName: \"kubernetes.io/projected/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-kube-api-access-qqhhz\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:07.427158 master-0 kubenswrapper[7484]: I0312 20:50:07.427141 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-etcd-client\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:07.427451 master-0 kubenswrapper[7484]: I0312 20:50:07.427406 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-trusted-ca-bundle\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:07.427513 master-0 kubenswrapper[7484]: I0312 20:50:07.427465 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-audit-dir\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:07.428024 master-0 kubenswrapper[7484]: I0312 20:50:07.427971 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-audit\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:07.428613 master-0 kubenswrapper[7484]: I0312 20:50:07.428428 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-image-import-ca\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:07.429583 master-0 kubenswrapper[7484]: I0312 20:50:07.429551 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-serving-cert\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:07.431428 master-0 kubenswrapper[7484]: I0312 20:50:07.431391 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-encryption-config\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:07.432390 master-0 kubenswrapper[7484]: I0312 20:50:07.432355 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-etcd-client\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:07.452761 master-0 kubenswrapper[7484]: I0312 20:50:07.452674 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqhhz\" (UniqueName: \"kubernetes.io/projected/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-kube-api-access-qqhhz\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:07.500920 master-0 kubenswrapper[7484]: I0312 20:50:07.500862 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:07.740292 master-0 kubenswrapper[7484]: I0312 20:50:07.740219 7484 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06f651ec-cc35-4660-8f6a-657af4877ac0" path="/var/lib/kubelet/pods/06f651ec-cc35-4660-8f6a-657af4877ac0/volumes" Mar 12 20:50:08.989000 master-0 kubenswrapper[7484]: I0312 20:50:08.988925 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14b4689f-5630-461a-81a8-e8bb5a852259-serving-cert\") pod \"route-controller-manager-7f8b99b9cb-tvsj5\" (UID: \"14b4689f-5630-461a-81a8-e8bb5a852259\") " pod="openshift-route-controller-manager/route-controller-manager-7f8b99b9cb-tvsj5" Mar 12 20:50:08.989000 master-0 kubenswrapper[7484]: I0312 20:50:08.988982 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14b4689f-5630-461a-81a8-e8bb5a852259-client-ca\") pod \"route-controller-manager-7f8b99b9cb-tvsj5\" (UID: \"14b4689f-5630-461a-81a8-e8bb5a852259\") " pod="openshift-route-controller-manager/route-controller-manager-7f8b99b9cb-tvsj5" Mar 12 20:50:08.989779 master-0 kubenswrapper[7484]: E0312 20:50:08.989089 7484 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 12 20:50:08.989779 master-0 kubenswrapper[7484]: E0312 20:50:08.989135 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/14b4689f-5630-461a-81a8-e8bb5a852259-client-ca podName:14b4689f-5630-461a-81a8-e8bb5a852259 nodeName:}" failed. No retries permitted until 2026-03-12 20:50:24.989121062 +0000 UTC m=+37.474389864 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/14b4689f-5630-461a-81a8-e8bb5a852259-client-ca") pod "route-controller-manager-7f8b99b9cb-tvsj5" (UID: "14b4689f-5630-461a-81a8-e8bb5a852259") : configmap "client-ca" not found Mar 12 20:50:08.989779 master-0 kubenswrapper[7484]: E0312 20:50:08.989179 7484 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 12 20:50:08.989779 master-0 kubenswrapper[7484]: E0312 20:50:08.989249 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14b4689f-5630-461a-81a8-e8bb5a852259-serving-cert podName:14b4689f-5630-461a-81a8-e8bb5a852259 nodeName:}" failed. No retries permitted until 2026-03-12 20:50:24.989231306 +0000 UTC m=+37.474500108 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/14b4689f-5630-461a-81a8-e8bb5a852259-serving-cert") pod "route-controller-manager-7f8b99b9cb-tvsj5" (UID: "14b4689f-5630-461a-81a8-e8bb5a852259") : secret "serving-cert" not found Mar 12 20:50:10.266653 master-0 kubenswrapper[7484]: I0312 20:50:10.266145 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 12 20:50:10.267693 master-0 kubenswrapper[7484]: I0312 20:50:10.267615 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 12 20:50:10.278022 master-0 kubenswrapper[7484]: I0312 20:50:10.277377 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 12 20:50:10.301651 master-0 kubenswrapper[7484]: I0312 20:50:10.301226 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 12 20:50:10.320437 master-0 kubenswrapper[7484]: I0312 20:50:10.320374 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a35e2486-4d5e-43e5-89c0-c562002717bb-var-lock\") pod \"installer-1-master-0\" (UID: \"a35e2486-4d5e-43e5-89c0-c562002717bb\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 12 20:50:10.320677 master-0 kubenswrapper[7484]: I0312 20:50:10.320594 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a35e2486-4d5e-43e5-89c0-c562002717bb-kube-api-access\") pod \"installer-1-master-0\" (UID: \"a35e2486-4d5e-43e5-89c0-c562002717bb\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 12 20:50:10.321051 master-0 kubenswrapper[7484]: I0312 20:50:10.320990 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a35e2486-4d5e-43e5-89c0-c562002717bb-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"a35e2486-4d5e-43e5-89c0-c562002717bb\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 12 20:50:10.423988 master-0 kubenswrapper[7484]: I0312 20:50:10.423004 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a35e2486-4d5e-43e5-89c0-c562002717bb-var-lock\") pod \"installer-1-master-0\" (UID: 
\"a35e2486-4d5e-43e5-89c0-c562002717bb\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 12 20:50:10.423988 master-0 kubenswrapper[7484]: I0312 20:50:10.423127 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a35e2486-4d5e-43e5-89c0-c562002717bb-kube-api-access\") pod \"installer-1-master-0\" (UID: \"a35e2486-4d5e-43e5-89c0-c562002717bb\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 12 20:50:10.423988 master-0 kubenswrapper[7484]: I0312 20:50:10.423208 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a35e2486-4d5e-43e5-89c0-c562002717bb-var-lock\") pod \"installer-1-master-0\" (UID: \"a35e2486-4d5e-43e5-89c0-c562002717bb\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 12 20:50:10.423988 master-0 kubenswrapper[7484]: I0312 20:50:10.423256 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a35e2486-4d5e-43e5-89c0-c562002717bb-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"a35e2486-4d5e-43e5-89c0-c562002717bb\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 12 20:50:10.423988 master-0 kubenswrapper[7484]: I0312 20:50:10.423477 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a35e2486-4d5e-43e5-89c0-c562002717bb-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"a35e2486-4d5e-43e5-89c0-c562002717bb\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 12 20:50:11.117166 master-0 kubenswrapper[7484]: I0312 20:50:11.117016 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a35e2486-4d5e-43e5-89c0-c562002717bb-kube-api-access\") pod \"installer-1-master-0\" (UID: \"a35e2486-4d5e-43e5-89c0-c562002717bb\") " 
pod="openshift-kube-scheduler/installer-1-master-0" Mar 12 20:50:11.231843 master-0 kubenswrapper[7484]: I0312 20:50:11.231128 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 12 20:50:12.052355 master-0 kubenswrapper[7484]: I0312 20:50:12.052260 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cfe559ee-f3eb-417f-9281-9a50e9af6de3-client-ca\") pod \"controller-manager-7bdc948d9f-tqqj7\" (UID: \"cfe559ee-f3eb-417f-9281-9a50e9af6de3\") " pod="openshift-controller-manager/controller-manager-7bdc948d9f-tqqj7" Mar 12 20:50:12.054032 master-0 kubenswrapper[7484]: E0312 20:50:12.052444 7484 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 12 20:50:12.054032 master-0 kubenswrapper[7484]: E0312 20:50:12.052556 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cfe559ee-f3eb-417f-9281-9a50e9af6de3-client-ca podName:cfe559ee-f3eb-417f-9281-9a50e9af6de3 nodeName:}" failed. No retries permitted until 2026-03-12 20:50:28.052527017 +0000 UTC m=+40.537795849 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cfe559ee-f3eb-417f-9281-9a50e9af6de3-client-ca") pod "controller-manager-7bdc948d9f-tqqj7" (UID: "cfe559ee-f3eb-417f-9281-9a50e9af6de3") : configmap "client-ca" not found Mar 12 20:50:13.016536 master-0 kubenswrapper[7484]: I0312 20:50:13.016473 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7bdc948d9f-tqqj7"] Mar 12 20:50:13.016738 master-0 kubenswrapper[7484]: E0312 20:50:13.016708 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-7bdc948d9f-tqqj7" podUID="cfe559ee-f3eb-417f-9281-9a50e9af6de3" Mar 12 20:50:13.036117 master-0 kubenswrapper[7484]: I0312 20:50:13.035359 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f8b99b9cb-tvsj5"] Mar 12 20:50:13.036117 master-0 kubenswrapper[7484]: E0312 20:50:13.035640 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-7f8b99b9cb-tvsj5" podUID="14b4689f-5630-461a-81a8-e8bb5a852259" Mar 12 20:50:13.208080 master-0 kubenswrapper[7484]: I0312 20:50:13.207994 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f8b99b9cb-tvsj5" Mar 12 20:50:13.208977 master-0 kubenswrapper[7484]: I0312 20:50:13.208418 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bdc948d9f-tqqj7" Mar 12 20:50:13.214977 master-0 kubenswrapper[7484]: I0312 20:50:13.214946 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f8b99b9cb-tvsj5" Mar 12 20:50:13.218744 master-0 kubenswrapper[7484]: I0312 20:50:13.218701 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bdc948d9f-tqqj7" Mar 12 20:50:13.267511 master-0 kubenswrapper[7484]: I0312 20:50:13.267349 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9j4r6\" (UniqueName: \"kubernetes.io/projected/14b4689f-5630-461a-81a8-e8bb5a852259-kube-api-access-9j4r6\") pod \"14b4689f-5630-461a-81a8-e8bb5a852259\" (UID: \"14b4689f-5630-461a-81a8-e8bb5a852259\") " Mar 12 20:50:13.267511 master-0 kubenswrapper[7484]: I0312 20:50:13.267415 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14b4689f-5630-461a-81a8-e8bb5a852259-config\") pod \"14b4689f-5630-461a-81a8-e8bb5a852259\" (UID: \"14b4689f-5630-461a-81a8-e8bb5a852259\") " Mar 12 20:50:13.267511 master-0 kubenswrapper[7484]: I0312 20:50:13.267438 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh2jd\" (UniqueName: \"kubernetes.io/projected/cfe559ee-f3eb-417f-9281-9a50e9af6de3-kube-api-access-wh2jd\") pod \"cfe559ee-f3eb-417f-9281-9a50e9af6de3\" (UID: \"cfe559ee-f3eb-417f-9281-9a50e9af6de3\") " Mar 12 20:50:13.267511 master-0 kubenswrapper[7484]: I0312 20:50:13.267458 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfe559ee-f3eb-417f-9281-9a50e9af6de3-config\") pod \"cfe559ee-f3eb-417f-9281-9a50e9af6de3\" (UID: \"cfe559ee-f3eb-417f-9281-9a50e9af6de3\") " Mar 12 20:50:13.267511 master-0 kubenswrapper[7484]: I0312 20:50:13.267505 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/cfe559ee-f3eb-417f-9281-9a50e9af6de3-proxy-ca-bundles\") pod \"cfe559ee-f3eb-417f-9281-9a50e9af6de3\" (UID: \"cfe559ee-f3eb-417f-9281-9a50e9af6de3\") " Mar 12 20:50:13.267511 master-0 kubenswrapper[7484]: I0312 20:50:13.267523 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfe559ee-f3eb-417f-9281-9a50e9af6de3-serving-cert\") pod \"cfe559ee-f3eb-417f-9281-9a50e9af6de3\" (UID: \"cfe559ee-f3eb-417f-9281-9a50e9af6de3\") " Mar 12 20:50:13.268916 master-0 kubenswrapper[7484]: I0312 20:50:13.268867 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14b4689f-5630-461a-81a8-e8bb5a852259-config" (OuterVolumeSpecName: "config") pod "14b4689f-5630-461a-81a8-e8bb5a852259" (UID: "14b4689f-5630-461a-81a8-e8bb5a852259"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 20:50:13.269295 master-0 kubenswrapper[7484]: I0312 20:50:13.269221 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfe559ee-f3eb-417f-9281-9a50e9af6de3-config" (OuterVolumeSpecName: "config") pod "cfe559ee-f3eb-417f-9281-9a50e9af6de3" (UID: "cfe559ee-f3eb-417f-9281-9a50e9af6de3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 20:50:13.269394 master-0 kubenswrapper[7484]: I0312 20:50:13.269246 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfe559ee-f3eb-417f-9281-9a50e9af6de3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "cfe559ee-f3eb-417f-9281-9a50e9af6de3" (UID: "cfe559ee-f3eb-417f-9281-9a50e9af6de3"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 20:50:13.271769 master-0 kubenswrapper[7484]: I0312 20:50:13.271717 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfe559ee-f3eb-417f-9281-9a50e9af6de3-kube-api-access-wh2jd" (OuterVolumeSpecName: "kube-api-access-wh2jd") pod "cfe559ee-f3eb-417f-9281-9a50e9af6de3" (UID: "cfe559ee-f3eb-417f-9281-9a50e9af6de3"). InnerVolumeSpecName "kube-api-access-wh2jd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 20:50:13.272172 master-0 kubenswrapper[7484]: I0312 20:50:13.272142 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14b4689f-5630-461a-81a8-e8bb5a852259-kube-api-access-9j4r6" (OuterVolumeSpecName: "kube-api-access-9j4r6") pod "14b4689f-5630-461a-81a8-e8bb5a852259" (UID: "14b4689f-5630-461a-81a8-e8bb5a852259"). InnerVolumeSpecName "kube-api-access-9j4r6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 20:50:13.272294 master-0 kubenswrapper[7484]: I0312 20:50:13.272263 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfe559ee-f3eb-417f-9281-9a50e9af6de3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cfe559ee-f3eb-417f-9281-9a50e9af6de3" (UID: "cfe559ee-f3eb-417f-9281-9a50e9af6de3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 20:50:13.368594 master-0 kubenswrapper[7484]: I0312 20:50:13.368543 7484 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14b4689f-5630-461a-81a8-e8bb5a852259-config\") on node \"master-0\" DevicePath \"\"" Mar 12 20:50:13.368594 master-0 kubenswrapper[7484]: I0312 20:50:13.368583 7484 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh2jd\" (UniqueName: \"kubernetes.io/projected/cfe559ee-f3eb-417f-9281-9a50e9af6de3-kube-api-access-wh2jd\") on node \"master-0\" DevicePath \"\"" Mar 12 20:50:13.368594 master-0 kubenswrapper[7484]: I0312 20:50:13.368593 7484 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfe559ee-f3eb-417f-9281-9a50e9af6de3-config\") on node \"master-0\" DevicePath \"\"" Mar 12 20:50:13.368594 master-0 kubenswrapper[7484]: I0312 20:50:13.368601 7484 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cfe559ee-f3eb-417f-9281-9a50e9af6de3-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 12 20:50:13.368594 master-0 kubenswrapper[7484]: I0312 20:50:13.368609 7484 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfe559ee-f3eb-417f-9281-9a50e9af6de3-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 20:50:13.368594 master-0 kubenswrapper[7484]: I0312 20:50:13.368618 7484 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9j4r6\" (UniqueName: \"kubernetes.io/projected/14b4689f-5630-461a-81a8-e8bb5a852259-kube-api-access-9j4r6\") on node \"master-0\" DevicePath \"\"" Mar 12 20:50:14.214985 master-0 kubenswrapper[7484]: I0312 20:50:14.214572 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7bdc948d9f-tqqj7" Mar 12 20:50:14.215770 master-0 kubenswrapper[7484]: I0312 20:50:14.215400 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f8b99b9cb-tvsj5" Mar 12 20:50:14.253493 master-0 kubenswrapper[7484]: I0312 20:50:14.253410 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f8b99b9cb-tvsj5"] Mar 12 20:50:14.254792 master-0 kubenswrapper[7484]: I0312 20:50:14.254759 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c8884dcfd-psljl"] Mar 12 20:50:14.255392 master-0 kubenswrapper[7484]: I0312 20:50:14.255376 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c8884dcfd-psljl" Mar 12 20:50:14.255849 master-0 kubenswrapper[7484]: I0312 20:50:14.255798 7484 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f8b99b9cb-tvsj5"] Mar 12 20:50:14.262699 master-0 kubenswrapper[7484]: I0312 20:50:14.259909 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 20:50:14.262699 master-0 kubenswrapper[7484]: I0312 20:50:14.260538 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 20:50:14.262699 master-0 kubenswrapper[7484]: I0312 20:50:14.260699 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 20:50:14.262699 master-0 kubenswrapper[7484]: I0312 20:50:14.260793 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 
20:50:14.262699 master-0 kubenswrapper[7484]: I0312 20:50:14.261405 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 20:50:14.263070 master-0 kubenswrapper[7484]: I0312 20:50:14.262824 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c8884dcfd-psljl"] Mar 12 20:50:14.291842 master-0 kubenswrapper[7484]: I0312 20:50:14.288537 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7bdc948d9f-tqqj7"] Mar 12 20:50:14.298745 master-0 kubenswrapper[7484]: I0312 20:50:14.298662 7484 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7bdc948d9f-tqqj7"] Mar 12 20:50:14.392831 master-0 kubenswrapper[7484]: I0312 20:50:14.392725 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03748a30-dc0a-4804-b653-12ddc3cfb90b-config\") pod \"route-controller-manager-5c8884dcfd-psljl\" (UID: \"03748a30-dc0a-4804-b653-12ddc3cfb90b\") " pod="openshift-route-controller-manager/route-controller-manager-5c8884dcfd-psljl" Mar 12 20:50:14.393080 master-0 kubenswrapper[7484]: I0312 20:50:14.392904 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03748a30-dc0a-4804-b653-12ddc3cfb90b-serving-cert\") pod \"route-controller-manager-5c8884dcfd-psljl\" (UID: \"03748a30-dc0a-4804-b653-12ddc3cfb90b\") " pod="openshift-route-controller-manager/route-controller-manager-5c8884dcfd-psljl" Mar 12 20:50:14.393080 master-0 kubenswrapper[7484]: I0312 20:50:14.392931 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddrwj\" (UniqueName: 
\"kubernetes.io/projected/03748a30-dc0a-4804-b653-12ddc3cfb90b-kube-api-access-ddrwj\") pod \"route-controller-manager-5c8884dcfd-psljl\" (UID: \"03748a30-dc0a-4804-b653-12ddc3cfb90b\") " pod="openshift-route-controller-manager/route-controller-manager-5c8884dcfd-psljl" Mar 12 20:50:14.393080 master-0 kubenswrapper[7484]: I0312 20:50:14.393004 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03748a30-dc0a-4804-b653-12ddc3cfb90b-client-ca\") pod \"route-controller-manager-5c8884dcfd-psljl\" (UID: \"03748a30-dc0a-4804-b653-12ddc3cfb90b\") " pod="openshift-route-controller-manager/route-controller-manager-5c8884dcfd-psljl" Mar 12 20:50:14.393214 master-0 kubenswrapper[7484]: I0312 20:50:14.393087 7484 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cfe559ee-f3eb-417f-9281-9a50e9af6de3-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 20:50:14.393214 master-0 kubenswrapper[7484]: I0312 20:50:14.393123 7484 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14b4689f-5630-461a-81a8-e8bb5a852259-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 20:50:14.393214 master-0 kubenswrapper[7484]: I0312 20:50:14.393132 7484 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14b4689f-5630-461a-81a8-e8bb5a852259-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 20:50:14.494421 master-0 kubenswrapper[7484]: I0312 20:50:14.494261 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03748a30-dc0a-4804-b653-12ddc3cfb90b-client-ca\") pod \"route-controller-manager-5c8884dcfd-psljl\" (UID: \"03748a30-dc0a-4804-b653-12ddc3cfb90b\") " 
pod="openshift-route-controller-manager/route-controller-manager-5c8884dcfd-psljl" Mar 12 20:50:14.494625 master-0 kubenswrapper[7484]: I0312 20:50:14.494435 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03748a30-dc0a-4804-b653-12ddc3cfb90b-config\") pod \"route-controller-manager-5c8884dcfd-psljl\" (UID: \"03748a30-dc0a-4804-b653-12ddc3cfb90b\") " pod="openshift-route-controller-manager/route-controller-manager-5c8884dcfd-psljl" Mar 12 20:50:14.494870 master-0 kubenswrapper[7484]: I0312 20:50:14.494782 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03748a30-dc0a-4804-b653-12ddc3cfb90b-serving-cert\") pod \"route-controller-manager-5c8884dcfd-psljl\" (UID: \"03748a30-dc0a-4804-b653-12ddc3cfb90b\") " pod="openshift-route-controller-manager/route-controller-manager-5c8884dcfd-psljl" Mar 12 20:50:14.495040 master-0 kubenswrapper[7484]: I0312 20:50:14.494897 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddrwj\" (UniqueName: \"kubernetes.io/projected/03748a30-dc0a-4804-b653-12ddc3cfb90b-kube-api-access-ddrwj\") pod \"route-controller-manager-5c8884dcfd-psljl\" (UID: \"03748a30-dc0a-4804-b653-12ddc3cfb90b\") " pod="openshift-route-controller-manager/route-controller-manager-5c8884dcfd-psljl" Mar 12 20:50:14.496038 master-0 kubenswrapper[7484]: I0312 20:50:14.495998 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03748a30-dc0a-4804-b653-12ddc3cfb90b-client-ca\") pod \"route-controller-manager-5c8884dcfd-psljl\" (UID: \"03748a30-dc0a-4804-b653-12ddc3cfb90b\") " pod="openshift-route-controller-manager/route-controller-manager-5c8884dcfd-psljl" Mar 12 20:50:14.496487 master-0 kubenswrapper[7484]: I0312 20:50:14.496426 7484 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03748a30-dc0a-4804-b653-12ddc3cfb90b-config\") pod \"route-controller-manager-5c8884dcfd-psljl\" (UID: \"03748a30-dc0a-4804-b653-12ddc3cfb90b\") " pod="openshift-route-controller-manager/route-controller-manager-5c8884dcfd-psljl" Mar 12 20:50:14.502268 master-0 kubenswrapper[7484]: I0312 20:50:14.502223 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03748a30-dc0a-4804-b653-12ddc3cfb90b-serving-cert\") pod \"route-controller-manager-5c8884dcfd-psljl\" (UID: \"03748a30-dc0a-4804-b653-12ddc3cfb90b\") " pod="openshift-route-controller-manager/route-controller-manager-5c8884dcfd-psljl" Mar 12 20:50:14.524616 master-0 kubenswrapper[7484]: I0312 20:50:14.524420 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddrwj\" (UniqueName: \"kubernetes.io/projected/03748a30-dc0a-4804-b653-12ddc3cfb90b-kube-api-access-ddrwj\") pod \"route-controller-manager-5c8884dcfd-psljl\" (UID: \"03748a30-dc0a-4804-b653-12ddc3cfb90b\") " pod="openshift-route-controller-manager/route-controller-manager-5c8884dcfd-psljl" Mar 12 20:50:14.583823 master-0 kubenswrapper[7484]: I0312 20:50:14.583749 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c8884dcfd-psljl" Mar 12 20:50:14.993247 master-0 kubenswrapper[7484]: I0312 20:50:14.993019 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7946996f87-nzb7c"] Mar 12 20:50:14.993970 master-0 kubenswrapper[7484]: I0312 20:50:14.993939 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 20:50:14.999251 master-0 kubenswrapper[7484]: I0312 20:50:14.998522 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 12 20:50:14.999251 master-0 kubenswrapper[7484]: I0312 20:50:14.998801 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 12 20:50:15.003615 master-0 kubenswrapper[7484]: I0312 20:50:15.002233 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 12 20:50:15.003615 master-0 kubenswrapper[7484]: I0312 20:50:15.002851 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 12 20:50:15.004575 master-0 kubenswrapper[7484]: I0312 20:50:15.004536 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 12 20:50:15.004714 master-0 kubenswrapper[7484]: I0312 20:50:15.004673 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 12 20:50:15.004829 master-0 kubenswrapper[7484]: I0312 20:50:15.004785 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 12 20:50:15.004949 master-0 kubenswrapper[7484]: I0312 20:50:15.004917 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7946996f87-nzb7c"] Mar 12 20:50:15.005502 master-0 kubenswrapper[7484]: I0312 20:50:15.005473 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 12 20:50:15.102006 master-0 kubenswrapper[7484]: I0312 20:50:15.101938 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmcxd\" 
(UniqueName: \"kubernetes.io/projected/36bd483b-292e-4e82-99d6-daa612cd385a-kube-api-access-fmcxd\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 20:50:15.102006 master-0 kubenswrapper[7484]: I0312 20:50:15.102027 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/36bd483b-292e-4e82-99d6-daa612cd385a-audit-policies\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 20:50:15.102319 master-0 kubenswrapper[7484]: I0312 20:50:15.102103 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36bd483b-292e-4e82-99d6-daa612cd385a-serving-cert\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 20:50:15.102319 master-0 kubenswrapper[7484]: I0312 20:50:15.102124 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/36bd483b-292e-4e82-99d6-daa612cd385a-etcd-serving-ca\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 20:50:15.102319 master-0 kubenswrapper[7484]: I0312 20:50:15.102143 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/36bd483b-292e-4e82-99d6-daa612cd385a-encryption-config\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 20:50:15.102405 
master-0 kubenswrapper[7484]: I0312 20:50:15.102326 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/36bd483b-292e-4e82-99d6-daa612cd385a-etcd-client\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 20:50:15.102405 master-0 kubenswrapper[7484]: I0312 20:50:15.102385 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/36bd483b-292e-4e82-99d6-daa612cd385a-audit-dir\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 20:50:15.102504 master-0 kubenswrapper[7484]: I0312 20:50:15.102419 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36bd483b-292e-4e82-99d6-daa612cd385a-trusted-ca-bundle\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 20:50:15.204396 master-0 kubenswrapper[7484]: I0312 20:50:15.204329 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/36bd483b-292e-4e82-99d6-daa612cd385a-audit-dir\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 20:50:15.204688 master-0 kubenswrapper[7484]: I0312 20:50:15.204534 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/36bd483b-292e-4e82-99d6-daa612cd385a-audit-dir\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " 
pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 20:50:15.204688 master-0 kubenswrapper[7484]: I0312 20:50:15.204546 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36bd483b-292e-4e82-99d6-daa612cd385a-trusted-ca-bundle\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 20:50:15.204836 master-0 kubenswrapper[7484]: I0312 20:50:15.204756 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmcxd\" (UniqueName: \"kubernetes.io/projected/36bd483b-292e-4e82-99d6-daa612cd385a-kube-api-access-fmcxd\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 20:50:15.205057 master-0 kubenswrapper[7484]: I0312 20:50:15.205012 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/36bd483b-292e-4e82-99d6-daa612cd385a-audit-policies\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 20:50:15.205373 master-0 kubenswrapper[7484]: I0312 20:50:15.205224 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36bd483b-292e-4e82-99d6-daa612cd385a-serving-cert\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 20:50:15.205373 master-0 kubenswrapper[7484]: I0312 20:50:15.205312 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/36bd483b-292e-4e82-99d6-daa612cd385a-etcd-serving-ca\") pod 
\"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 20:50:15.205373 master-0 kubenswrapper[7484]: I0312 20:50:15.205361 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/36bd483b-292e-4e82-99d6-daa612cd385a-encryption-config\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 20:50:15.205531 master-0 kubenswrapper[7484]: I0312 20:50:15.205492 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/36bd483b-292e-4e82-99d6-daa612cd385a-etcd-client\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 20:50:15.206192 master-0 kubenswrapper[7484]: I0312 20:50:15.205741 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/36bd483b-292e-4e82-99d6-daa612cd385a-audit-policies\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 20:50:15.206598 master-0 kubenswrapper[7484]: I0312 20:50:15.206551 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/36bd483b-292e-4e82-99d6-daa612cd385a-etcd-serving-ca\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 20:50:15.209341 master-0 kubenswrapper[7484]: I0312 20:50:15.209298 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/36bd483b-292e-4e82-99d6-daa612cd385a-serving-cert\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 20:50:15.209865 master-0 kubenswrapper[7484]: I0312 20:50:15.209583 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/36bd483b-292e-4e82-99d6-daa612cd385a-etcd-client\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 20:50:15.209865 master-0 kubenswrapper[7484]: I0312 20:50:15.209791 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/36bd483b-292e-4e82-99d6-daa612cd385a-encryption-config\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 20:50:15.210191 master-0 kubenswrapper[7484]: I0312 20:50:15.210145 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36bd483b-292e-4e82-99d6-daa612cd385a-trusted-ca-bundle\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 20:50:15.221132 master-0 kubenswrapper[7484]: I0312 20:50:15.221084 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmcxd\" (UniqueName: \"kubernetes.io/projected/36bd483b-292e-4e82-99d6-daa612cd385a-kube-api-access-fmcxd\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 20:50:15.319374 master-0 kubenswrapper[7484]: I0312 20:50:15.319271 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 20:50:15.607729 master-0 kubenswrapper[7484]: I0312 20:50:15.607646 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c8884dcfd-psljl"] Mar 12 20:50:15.631332 master-0 kubenswrapper[7484]: I0312 20:50:15.630439 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-84fb785f4-kl52q"] Mar 12 20:50:15.677223 master-0 kubenswrapper[7484]: I0312 20:50:15.676178 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 12 20:50:15.688431 master-0 kubenswrapper[7484]: I0312 20:50:15.688389 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7946996f87-nzb7c"] Mar 12 20:50:15.741949 master-0 kubenswrapper[7484]: I0312 20:50:15.741891 7484 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14b4689f-5630-461a-81a8-e8bb5a852259" path="/var/lib/kubelet/pods/14b4689f-5630-461a-81a8-e8bb5a852259/volumes" Mar 12 20:50:15.743903 master-0 kubenswrapper[7484]: I0312 20:50:15.742263 7484 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfe559ee-f3eb-417f-9281-9a50e9af6de3" path="/var/lib/kubelet/pods/cfe559ee-f3eb-417f-9281-9a50e9af6de3/volumes" Mar 12 20:50:16.026541 master-0 kubenswrapper[7484]: I0312 20:50:16.015310 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 12 20:50:16.026541 master-0 kubenswrapper[7484]: I0312 20:50:16.016015 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 12 20:50:16.026541 master-0 kubenswrapper[7484]: I0312 20:50:16.023427 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Mar 12 20:50:16.059829 master-0 kubenswrapper[7484]: I0312 20:50:16.056240 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 12 20:50:16.122618 master-0 kubenswrapper[7484]: I0312 20:50:16.122574 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d69687f-b8a5-4643-8268-ce30df5db3bc-kube-api-access\") pod \"installer-1-master-0\" (UID: \"4d69687f-b8a5-4643-8268-ce30df5db3bc\") " pod="openshift-etcd/installer-1-master-0" Mar 12 20:50:16.122945 master-0 kubenswrapper[7484]: I0312 20:50:16.122930 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d69687f-b8a5-4643-8268-ce30df5db3bc-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"4d69687f-b8a5-4643-8268-ce30df5db3bc\") " pod="openshift-etcd/installer-1-master-0" Mar 12 20:50:16.123028 master-0 kubenswrapper[7484]: I0312 20:50:16.123017 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4d69687f-b8a5-4643-8268-ce30df5db3bc-var-lock\") pod \"installer-1-master-0\" (UID: \"4d69687f-b8a5-4643-8268-ce30df5db3bc\") " pod="openshift-etcd/installer-1-master-0" Mar 12 20:50:16.200828 master-0 kubenswrapper[7484]: I0312 20:50:16.196974 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/tuned-btxk2"] Mar 12 20:50:16.200828 master-0 kubenswrapper[7484]: I0312 20:50:16.197842 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.200828 master-0 kubenswrapper[7484]: I0312 20:50:16.198962 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw"] Mar 12 20:50:16.200828 master-0 kubenswrapper[7484]: I0312 20:50:16.199752 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 20:50:16.204554 master-0 kubenswrapper[7484]: I0312 20:50:16.204250 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 12 20:50:16.204554 master-0 kubenswrapper[7484]: I0312 20:50:16.204557 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 12 20:50:16.204730 master-0 kubenswrapper[7484]: I0312 20:50:16.204699 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 12 20:50:16.210109 master-0 kubenswrapper[7484]: I0312 20:50:16.210059 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 12 20:50:16.226831 master-0 kubenswrapper[7484]: I0312 20:50:16.225504 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d69687f-b8a5-4643-8268-ce30df5db3bc-kube-api-access\") pod \"installer-1-master-0\" (UID: \"4d69687f-b8a5-4643-8268-ce30df5db3bc\") " pod="openshift-etcd/installer-1-master-0" Mar 12 20:50:16.226831 master-0 kubenswrapper[7484]: I0312 20:50:16.225593 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d69687f-b8a5-4643-8268-ce30df5db3bc-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"4d69687f-b8a5-4643-8268-ce30df5db3bc\") " 
pod="openshift-etcd/installer-1-master-0" Mar 12 20:50:16.226831 master-0 kubenswrapper[7484]: I0312 20:50:16.225618 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4d69687f-b8a5-4643-8268-ce30df5db3bc-var-lock\") pod \"installer-1-master-0\" (UID: \"4d69687f-b8a5-4643-8268-ce30df5db3bc\") " pod="openshift-etcd/installer-1-master-0" Mar 12 20:50:16.226831 master-0 kubenswrapper[7484]: I0312 20:50:16.225736 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4d69687f-b8a5-4643-8268-ce30df5db3bc-var-lock\") pod \"installer-1-master-0\" (UID: \"4d69687f-b8a5-4643-8268-ce30df5db3bc\") " pod="openshift-etcd/installer-1-master-0" Mar 12 20:50:16.226831 master-0 kubenswrapper[7484]: I0312 20:50:16.226083 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d69687f-b8a5-4643-8268-ce30df5db3bc-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"4d69687f-b8a5-4643-8268-ce30df5db3bc\") " pod="openshift-etcd/installer-1-master-0" Mar 12 20:50:16.229248 master-0 kubenswrapper[7484]: I0312 20:50:16.229194 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw"] Mar 12 20:50:16.231855 master-0 kubenswrapper[7484]: I0312 20:50:16.231793 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5" event={"ID":"900228dd-2d21-4759-87da-b027b0134ad8","Type":"ContainerStarted","Data":"86833dd41b14e8094351920793b00866703e058d522b46fbdbf250fbcc14c834"} Mar 12 20:50:16.246155 master-0 kubenswrapper[7484]: I0312 20:50:16.246100 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" 
event={"ID":"2b71f537-1cc2-4645-8e50-23941635457c","Type":"ContainerStarted","Data":"94db3df404adc79b06f6d39bed7801ea1fff7c3b57f50edc7ba7be9ec19fc3ab"} Mar 12 20:50:16.246266 master-0 kubenswrapper[7484]: I0312 20:50:16.246159 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" event={"ID":"2b71f537-1cc2-4645-8e50-23941635457c","Type":"ContainerStarted","Data":"ae373579849ec0d4a33d66c2a3f6f43fccdff39968b29197dcdc4792d7cd63f3"} Mar 12 20:50:16.264885 master-0 kubenswrapper[7484]: I0312 20:50:16.264829 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" event={"ID":"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9","Type":"ContainerStarted","Data":"ab35500d408324bc8f259a25814698a0950deafc4c75bcf972576200d718f280"} Mar 12 20:50:16.272612 master-0 kubenswrapper[7484]: I0312 20:50:16.270929 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d69687f-b8a5-4643-8268-ce30df5db3bc-kube-api-access\") pod \"installer-1-master-0\" (UID: \"4d69687f-b8a5-4643-8268-ce30df5db3bc\") " pod="openshift-etcd/installer-1-master-0" Mar 12 20:50:16.287609 master-0 kubenswrapper[7484]: I0312 20:50:16.287567 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-589895fbb7-tvrxp" event={"ID":"855747e5-d9b4-4eef-8bc4-425d6a8e95c7","Type":"ContainerStarted","Data":"bb70d36892a5867588669a74fa85c73d08e6d420a61932f84faab17d04e5adfc"} Mar 12 20:50:16.287840 master-0 kubenswrapper[7484]: I0312 20:50:16.287826 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-589895fbb7-tvrxp" event={"ID":"855747e5-d9b4-4eef-8bc4-425d6a8e95c7","Type":"ContainerStarted","Data":"f24deffea5d8a3b5e0df29b3d2d47f4f4b1b484a04438be498d07b483fc8095a"} Mar 12 20:50:16.289979 master-0 kubenswrapper[7484]: I0312 
20:50:16.289961 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" event={"ID":"36bd483b-292e-4e82-99d6-daa612cd385a","Type":"ContainerStarted","Data":"201b5e76d89b86f520d80ea9c46f6a7725c7ca002a8f03f0377c76479fd51041"} Mar 12 20:50:16.291694 master-0 kubenswrapper[7484]: I0312 20:50:16.291679 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl" event={"ID":"1a307172-f010-4bad-a3fc-31607574b069","Type":"ContainerStarted","Data":"23ae5af3ec50031824696b7d04e8e15e4b08545207e52bcdac99d821e85a768e"} Mar 12 20:50:16.301881 master-0 kubenswrapper[7484]: I0312 20:50:16.301837 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n"] Mar 12 20:50:16.303132 master-0 kubenswrapper[7484]: I0312 20:50:16.303116 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" Mar 12 20:50:16.305110 master-0 kubenswrapper[7484]: I0312 20:50:16.305079 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 12 20:50:16.305607 master-0 kubenswrapper[7484]: I0312 20:50:16.305550 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"a35e2486-4d5e-43e5-89c0-c562002717bb","Type":"ContainerStarted","Data":"ca135dffb90b35be61bb5a8b71e0d72551616de76459ae1d27cb43dd9577ced8"} Mar 12 20:50:16.306028 master-0 kubenswrapper[7484]: I0312 20:50:16.305978 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 12 20:50:16.309308 master-0 kubenswrapper[7484]: I0312 20:50:16.309268 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-5c8884dcfd-psljl" event={"ID":"03748a30-dc0a-4804-b653-12ddc3cfb90b","Type":"ContainerStarted","Data":"89842820602b3f72aeb63fe6d750da0cc64cd69ab229df72a18b8463d012ba5f"} Mar 12 20:50:16.311478 master-0 kubenswrapper[7484]: I0312 20:50:16.311437 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 12 20:50:16.313019 master-0 kubenswrapper[7484]: I0312 20:50:16.312973 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-84fb785f4-kl52q" event={"ID":"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d","Type":"ContainerStarted","Data":"82318439026f9141cf283c68c9e568172986f95b3ac1b221e6be4eb35afea5e2"} Mar 12 20:50:16.317953 master-0 kubenswrapper[7484]: I0312 20:50:16.317747 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n"] Mar 12 20:50:16.327140 master-0 kubenswrapper[7484]: I0312 20:50:16.326779 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-tuned\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.327140 master-0 kubenswrapper[7484]: I0312 20:50:16.326849 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/cf33c432-db42-4c6d-8ee4-f089e5bf8203-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-zgjqw\" (UID: \"cf33c432-db42-4c6d-8ee4-f089e5bf8203\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 20:50:16.327140 master-0 kubenswrapper[7484]: I0312 20:50:16.326911 7484 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-kubernetes\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.327325 master-0 kubenswrapper[7484]: I0312 20:50:16.327254 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-modprobe-d\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.327412 master-0 kubenswrapper[7484]: I0312 20:50:16.327359 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-lib-modules\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.331220 master-0 kubenswrapper[7484]: I0312 20:50:16.327780 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-sys\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.331220 master-0 kubenswrapper[7484]: I0312 20:50:16.328199 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-host\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.331220 master-0 kubenswrapper[7484]: I0312 20:50:16.328322 
7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/cf33c432-db42-4c6d-8ee4-f089e5bf8203-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-zgjqw\" (UID: \"cf33c432-db42-4c6d-8ee4-f089e5bf8203\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 20:50:16.331220 master-0 kubenswrapper[7484]: I0312 20:50:16.328399 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/52839a08-0871-44d3-9d22-b2f6b4383b99-tmp\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.331220 master-0 kubenswrapper[7484]: I0312 20:50:16.328423 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/cf33c432-db42-4c6d-8ee4-f089e5bf8203-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-zgjqw\" (UID: \"cf33c432-db42-4c6d-8ee4-f089e5bf8203\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 20:50:16.331220 master-0 kubenswrapper[7484]: I0312 20:50:16.328442 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/cf33c432-db42-4c6d-8ee4-f089e5bf8203-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-zgjqw\" (UID: \"cf33c432-db42-4c6d-8ee4-f089e5bf8203\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 20:50:16.331220 master-0 kubenswrapper[7484]: I0312 20:50:16.328527 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-sysconfig\") pod \"tuned-btxk2\" (UID: 
\"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.331220 master-0 kubenswrapper[7484]: I0312 20:50:16.328554 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-systemd\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.331220 master-0 kubenswrapper[7484]: I0312 20:50:16.328572 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8hp5\" (UniqueName: \"kubernetes.io/projected/cf33c432-db42-4c6d-8ee4-f089e5bf8203-kube-api-access-x8hp5\") pod \"catalogd-controller-manager-7f8b8b6f4c-zgjqw\" (UID: \"cf33c432-db42-4c6d-8ee4-f089e5bf8203\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 20:50:16.331220 master-0 kubenswrapper[7484]: I0312 20:50:16.328592 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-sysctl-d\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.331220 master-0 kubenswrapper[7484]: I0312 20:50:16.328616 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-var-lib-kubelet\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.331220 master-0 kubenswrapper[7484]: I0312 20:50:16.328664 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-hlt7h\" (UniqueName: \"kubernetes.io/projected/52839a08-0871-44d3-9d22-b2f6b4383b99-kube-api-access-hlt7h\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.331220 master-0 kubenswrapper[7484]: I0312 20:50:16.328730 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-run\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.331220 master-0 kubenswrapper[7484]: I0312 20:50:16.328749 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/cf33c432-db42-4c6d-8ee4-f089e5bf8203-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-zgjqw\" (UID: \"cf33c432-db42-4c6d-8ee4-f089e5bf8203\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 20:50:16.331220 master-0 kubenswrapper[7484]: I0312 20:50:16.328919 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-sysctl-conf\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.370925 master-0 kubenswrapper[7484]: I0312 20:50:16.370673 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-1-master-0" podStartSLOduration=6.370654972 podStartE2EDuration="6.370654972s" podCreationTimestamp="2026-03-12 20:50:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 20:50:16.369938951 +0000 UTC 
m=+28.855207753" watchObservedRunningTime="2026-03-12 20:50:16.370654972 +0000 UTC m=+28.855923774" Mar 12 20:50:16.409294 master-0 kubenswrapper[7484]: I0312 20:50:16.409140 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 12 20:50:16.437863 master-0 kubenswrapper[7484]: I0312 20:50:16.434122 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-host\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.437863 master-0 kubenswrapper[7484]: I0312 20:50:16.434178 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/cf33c432-db42-4c6d-8ee4-f089e5bf8203-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-zgjqw\" (UID: \"cf33c432-db42-4c6d-8ee4-f089e5bf8203\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 20:50:16.437863 master-0 kubenswrapper[7484]: I0312 20:50:16.434200 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/52839a08-0871-44d3-9d22-b2f6b4383b99-tmp\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.437863 master-0 kubenswrapper[7484]: I0312 20:50:16.434225 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/8b96dd10-18a0-49f8-b488-63fc2b23da39-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-hdd4n\" (UID: \"8b96dd10-18a0-49f8-b488-63fc2b23da39\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" Mar 12 20:50:16.437863 
master-0 kubenswrapper[7484]: I0312 20:50:16.434271 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8b96dd10-18a0-49f8-b488-63fc2b23da39-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-hdd4n\" (UID: \"8b96dd10-18a0-49f8-b488-63fc2b23da39\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" Mar 12 20:50:16.437863 master-0 kubenswrapper[7484]: I0312 20:50:16.434286 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-sysconfig\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.437863 master-0 kubenswrapper[7484]: I0312 20:50:16.434303 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8hp5\" (UniqueName: \"kubernetes.io/projected/cf33c432-db42-4c6d-8ee4-f089e5bf8203-kube-api-access-x8hp5\") pod \"catalogd-controller-manager-7f8b8b6f4c-zgjqw\" (UID: \"cf33c432-db42-4c6d-8ee4-f089e5bf8203\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 20:50:16.437863 master-0 kubenswrapper[7484]: I0312 20:50:16.434320 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-run\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.437863 master-0 kubenswrapper[7484]: I0312 20:50:16.434343 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/cf33c432-db42-4c6d-8ee4-f089e5bf8203-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-zgjqw\" (UID: 
\"cf33c432-db42-4c6d-8ee4-f089e5bf8203\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 20:50:16.437863 master-0 kubenswrapper[7484]: I0312 20:50:16.434361 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-sysctl-conf\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.437863 master-0 kubenswrapper[7484]: I0312 20:50:16.434376 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-tuned\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.437863 master-0 kubenswrapper[7484]: I0312 20:50:16.434393 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/cf33c432-db42-4c6d-8ee4-f089e5bf8203-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-zgjqw\" (UID: \"cf33c432-db42-4c6d-8ee4-f089e5bf8203\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 20:50:16.437863 master-0 kubenswrapper[7484]: I0312 20:50:16.434408 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-kubernetes\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.437863 master-0 kubenswrapper[7484]: I0312 20:50:16.434435 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-sys\") pod 
\"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.437863 master-0 kubenswrapper[7484]: I0312 20:50:16.434453 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/cf33c432-db42-4c6d-8ee4-f089e5bf8203-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-zgjqw\" (UID: \"cf33c432-db42-4c6d-8ee4-f089e5bf8203\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 20:50:16.437863 master-0 kubenswrapper[7484]: I0312 20:50:16.434468 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/cf33c432-db42-4c6d-8ee4-f089e5bf8203-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-zgjqw\" (UID: \"cf33c432-db42-4c6d-8ee4-f089e5bf8203\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 20:50:16.437863 master-0 kubenswrapper[7484]: I0312 20:50:16.434498 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-systemd\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.437863 master-0 kubenswrapper[7484]: I0312 20:50:16.434512 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-sysctl-d\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.437863 master-0 kubenswrapper[7484]: I0312 20:50:16.434528 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-var-lib-kubelet\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.437863 master-0 kubenswrapper[7484]: I0312 20:50:16.434542 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlt7h\" (UniqueName: \"kubernetes.io/projected/52839a08-0871-44d3-9d22-b2f6b4383b99-kube-api-access-hlt7h\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.437863 master-0 kubenswrapper[7484]: I0312 20:50:16.434558 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhhdz\" (UniqueName: \"kubernetes.io/projected/8b96dd10-18a0-49f8-b488-63fc2b23da39-kube-api-access-nhhdz\") pod \"operator-controller-controller-manager-6598bfb6c4-hdd4n\" (UID: \"8b96dd10-18a0-49f8-b488-63fc2b23da39\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" Mar 12 20:50:16.437863 master-0 kubenswrapper[7484]: I0312 20:50:16.434581 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/8b96dd10-18a0-49f8-b488-63fc2b23da39-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-hdd4n\" (UID: \"8b96dd10-18a0-49f8-b488-63fc2b23da39\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" Mar 12 20:50:16.437863 master-0 kubenswrapper[7484]: I0312 20:50:16.434606 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8b96dd10-18a0-49f8-b488-63fc2b23da39-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-hdd4n\" (UID: 
\"8b96dd10-18a0-49f8-b488-63fc2b23da39\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" Mar 12 20:50:16.437863 master-0 kubenswrapper[7484]: I0312 20:50:16.434634 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-modprobe-d\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.437863 master-0 kubenswrapper[7484]: I0312 20:50:16.434649 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-lib-modules\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.437863 master-0 kubenswrapper[7484]: I0312 20:50:16.434974 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-lib-modules\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.437863 master-0 kubenswrapper[7484]: I0312 20:50:16.435021 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-host\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.437863 master-0 kubenswrapper[7484]: I0312 20:50:16.435106 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/cf33c432-db42-4c6d-8ee4-f089e5bf8203-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-zgjqw\" (UID: 
\"cf33c432-db42-4c6d-8ee4-f089e5bf8203\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 20:50:16.437863 master-0 kubenswrapper[7484]: I0312 20:50:16.435575 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-sys\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.437863 master-0 kubenswrapper[7484]: I0312 20:50:16.435726 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-sysconfig\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.437863 master-0 kubenswrapper[7484]: I0312 20:50:16.436286 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-run\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.437863 master-0 kubenswrapper[7484]: I0312 20:50:16.436938 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/cf33c432-db42-4c6d-8ee4-f089e5bf8203-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-zgjqw\" (UID: \"cf33c432-db42-4c6d-8ee4-f089e5bf8203\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 20:50:16.437863 master-0 kubenswrapper[7484]: I0312 20:50:16.437156 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-sysctl-conf\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " 
pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.438960 master-0 kubenswrapper[7484]: I0312 20:50:16.438652 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-var-lib-kubelet\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.438960 master-0 kubenswrapper[7484]: I0312 20:50:16.438719 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/52839a08-0871-44d3-9d22-b2f6b4383b99-tmp\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.439024 master-0 kubenswrapper[7484]: I0312 20:50:16.438970 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-systemd\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.439061 master-0 kubenswrapper[7484]: I0312 20:50:16.439030 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-sysctl-d\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.439136 master-0 kubenswrapper[7484]: I0312 20:50:16.439114 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-modprobe-d\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.439363 master-0 
kubenswrapper[7484]: I0312 20:50:16.439298 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-kubernetes\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.439610 master-0 kubenswrapper[7484]: I0312 20:50:16.439350 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/cf33c432-db42-4c6d-8ee4-f089e5bf8203-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-zgjqw\" (UID: \"cf33c432-db42-4c6d-8ee4-f089e5bf8203\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 20:50:16.444252 master-0 kubenswrapper[7484]: I0312 20:50:16.440490 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-tuned\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.449399 master-0 kubenswrapper[7484]: I0312 20:50:16.448892 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/cf33c432-db42-4c6d-8ee4-f089e5bf8203-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-zgjqw\" (UID: \"cf33c432-db42-4c6d-8ee4-f089e5bf8203\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 20:50:16.457488 master-0 kubenswrapper[7484]: I0312 20:50:16.457452 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/cf33c432-db42-4c6d-8ee4-f089e5bf8203-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-zgjqw\" (UID: \"cf33c432-db42-4c6d-8ee4-f089e5bf8203\") " 
pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 20:50:16.466881 master-0 kubenswrapper[7484]: I0312 20:50:16.459320 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8hp5\" (UniqueName: \"kubernetes.io/projected/cf33c432-db42-4c6d-8ee4-f089e5bf8203-kube-api-access-x8hp5\") pod \"catalogd-controller-manager-7f8b8b6f4c-zgjqw\" (UID: \"cf33c432-db42-4c6d-8ee4-f089e5bf8203\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 20:50:16.490376 master-0 kubenswrapper[7484]: I0312 20:50:16.488930 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlt7h\" (UniqueName: \"kubernetes.io/projected/52839a08-0871-44d3-9d22-b2f6b4383b99-kube-api-access-hlt7h\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.518311 master-0 kubenswrapper[7484]: I0312 20:50:16.517713 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 20:50:16.535572 master-0 kubenswrapper[7484]: I0312 20:50:16.535474 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/8b96dd10-18a0-49f8-b488-63fc2b23da39-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-hdd4n\" (UID: \"8b96dd10-18a0-49f8-b488-63fc2b23da39\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" Mar 12 20:50:16.535572 master-0 kubenswrapper[7484]: I0312 20:50:16.535542 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8b96dd10-18a0-49f8-b488-63fc2b23da39-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-hdd4n\" (UID: \"8b96dd10-18a0-49f8-b488-63fc2b23da39\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" Mar 12 20:50:16.535712 master-0 kubenswrapper[7484]: I0312 20:50:16.535608 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhhdz\" (UniqueName: \"kubernetes.io/projected/8b96dd10-18a0-49f8-b488-63fc2b23da39-kube-api-access-nhhdz\") pod \"operator-controller-controller-manager-6598bfb6c4-hdd4n\" (UID: \"8b96dd10-18a0-49f8-b488-63fc2b23da39\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" Mar 12 20:50:16.535712 master-0 kubenswrapper[7484]: I0312 20:50:16.535634 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/8b96dd10-18a0-49f8-b488-63fc2b23da39-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-hdd4n\" (UID: \"8b96dd10-18a0-49f8-b488-63fc2b23da39\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" Mar 12 20:50:16.535712 
master-0 kubenswrapper[7484]: I0312 20:50:16.535653 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8b96dd10-18a0-49f8-b488-63fc2b23da39-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-hdd4n\" (UID: \"8b96dd10-18a0-49f8-b488-63fc2b23da39\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" Mar 12 20:50:16.536450 master-0 kubenswrapper[7484]: I0312 20:50:16.536012 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/8b96dd10-18a0-49f8-b488-63fc2b23da39-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-hdd4n\" (UID: \"8b96dd10-18a0-49f8-b488-63fc2b23da39\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" Mar 12 20:50:16.536572 master-0 kubenswrapper[7484]: I0312 20:50:16.536543 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8b96dd10-18a0-49f8-b488-63fc2b23da39-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-hdd4n\" (UID: \"8b96dd10-18a0-49f8-b488-63fc2b23da39\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" Mar 12 20:50:16.537758 master-0 kubenswrapper[7484]: I0312 20:50:16.536751 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/8b96dd10-18a0-49f8-b488-63fc2b23da39-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-hdd4n\" (UID: \"8b96dd10-18a0-49f8-b488-63fc2b23da39\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" Mar 12 20:50:16.545799 master-0 kubenswrapper[7484]: I0312 20:50:16.545740 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/projected/8b96dd10-18a0-49f8-b488-63fc2b23da39-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-hdd4n\" (UID: \"8b96dd10-18a0-49f8-b488-63fc2b23da39\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" Mar 12 20:50:16.578873 master-0 kubenswrapper[7484]: I0312 20:50:16.578731 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 20:50:16.588188 master-0 kubenswrapper[7484]: I0312 20:50:16.588146 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhhdz\" (UniqueName: \"kubernetes.io/projected/8b96dd10-18a0-49f8-b488-63fc2b23da39-kube-api-access-nhhdz\") pod \"operator-controller-controller-manager-6598bfb6c4-hdd4n\" (UID: \"8b96dd10-18a0-49f8-b488-63fc2b23da39\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" Mar 12 20:50:16.622630 master-0 kubenswrapper[7484]: I0312 20:50:16.622517 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-pp258"] Mar 12 20:50:16.623275 master-0 kubenswrapper[7484]: I0312 20:50:16.623233 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-pp258" Mar 12 20:50:16.624693 master-0 kubenswrapper[7484]: I0312 20:50:16.624663 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 12 20:50:16.625520 master-0 kubenswrapper[7484]: I0312 20:50:16.625500 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 12 20:50:16.625578 master-0 kubenswrapper[7484]: I0312 20:50:16.625554 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 12 20:50:16.625780 master-0 kubenswrapper[7484]: I0312 20:50:16.625651 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 12 20:50:16.640861 master-0 kubenswrapper[7484]: I0312 20:50:16.640825 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" Mar 12 20:50:16.655099 master-0 kubenswrapper[7484]: I0312 20:50:16.655033 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-pp258"] Mar 12 20:50:16.655844 master-0 kubenswrapper[7484]: W0312 20:50:16.655790 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4d69687f_b8a5_4643_8268_ce30df5db3bc.slice/crio-052a8ea937b1e18a23a6811afe7fcef8bdf2f48672ff3e7a1ee17b5ba2abf923 WatchSource:0}: Error finding container 052a8ea937b1e18a23a6811afe7fcef8bdf2f48672ff3e7a1ee17b5ba2abf923: Status 404 returned error can't find the container with id 052a8ea937b1e18a23a6811afe7fcef8bdf2f48672ff3e7a1ee17b5ba2abf923 Mar 12 20:50:16.658379 master-0 kubenswrapper[7484]: I0312 20:50:16.658341 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 12 20:50:16.742797 master-0 kubenswrapper[7484]: I0312 20:50:16.742169 7484 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2bmh\" (UniqueName: \"kubernetes.io/projected/31747c5d-7e29-4a74-b8d5-3d8efa5e900b-kube-api-access-l2bmh\") pod \"dns-default-pp258\" (UID: \"31747c5d-7e29-4a74-b8d5-3d8efa5e900b\") " pod="openshift-dns/dns-default-pp258" Mar 12 20:50:16.742797 master-0 kubenswrapper[7484]: I0312 20:50:16.742235 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31747c5d-7e29-4a74-b8d5-3d8efa5e900b-config-volume\") pod \"dns-default-pp258\" (UID: \"31747c5d-7e29-4a74-b8d5-3d8efa5e900b\") " pod="openshift-dns/dns-default-pp258" Mar 12 20:50:16.742797 master-0 kubenswrapper[7484]: I0312 20:50:16.742293 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/31747c5d-7e29-4a74-b8d5-3d8efa5e900b-metrics-tls\") pod \"dns-default-pp258\" (UID: \"31747c5d-7e29-4a74-b8d5-3d8efa5e900b\") " pod="openshift-dns/dns-default-pp258" Mar 12 20:50:16.813663 master-0 kubenswrapper[7484]: I0312 20:50:16.813615 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw"] Mar 12 20:50:16.845397 master-0 kubenswrapper[7484]: I0312 20:50:16.845319 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2bmh\" (UniqueName: \"kubernetes.io/projected/31747c5d-7e29-4a74-b8d5-3d8efa5e900b-kube-api-access-l2bmh\") pod \"dns-default-pp258\" (UID: \"31747c5d-7e29-4a74-b8d5-3d8efa5e900b\") " pod="openshift-dns/dns-default-pp258" Mar 12 20:50:16.845552 master-0 kubenswrapper[7484]: I0312 20:50:16.845406 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31747c5d-7e29-4a74-b8d5-3d8efa5e900b-config-volume\") pod \"dns-default-pp258\" (UID: 
\"31747c5d-7e29-4a74-b8d5-3d8efa5e900b\") " pod="openshift-dns/dns-default-pp258" Mar 12 20:50:16.845552 master-0 kubenswrapper[7484]: I0312 20:50:16.845457 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/31747c5d-7e29-4a74-b8d5-3d8efa5e900b-metrics-tls\") pod \"dns-default-pp258\" (UID: \"31747c5d-7e29-4a74-b8d5-3d8efa5e900b\") " pod="openshift-dns/dns-default-pp258" Mar 12 20:50:16.845683 master-0 kubenswrapper[7484]: E0312 20:50:16.845617 7484 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Mar 12 20:50:16.845683 master-0 kubenswrapper[7484]: E0312 20:50:16.845679 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31747c5d-7e29-4a74-b8d5-3d8efa5e900b-metrics-tls podName:31747c5d-7e29-4a74-b8d5-3d8efa5e900b nodeName:}" failed. No retries permitted until 2026-03-12 20:50:17.345663095 +0000 UTC m=+29.830931897 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/31747c5d-7e29-4a74-b8d5-3d8efa5e900b-metrics-tls") pod "dns-default-pp258" (UID: "31747c5d-7e29-4a74-b8d5-3d8efa5e900b") : secret "dns-default-metrics-tls" not found Mar 12 20:50:16.847990 master-0 kubenswrapper[7484]: I0312 20:50:16.847934 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31747c5d-7e29-4a74-b8d5-3d8efa5e900b-config-volume\") pod \"dns-default-pp258\" (UID: \"31747c5d-7e29-4a74-b8d5-3d8efa5e900b\") " pod="openshift-dns/dns-default-pp258" Mar 12 20:50:16.893073 master-0 kubenswrapper[7484]: I0312 20:50:16.892934 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2bmh\" (UniqueName: \"kubernetes.io/projected/31747c5d-7e29-4a74-b8d5-3d8efa5e900b-kube-api-access-l2bmh\") pod \"dns-default-pp258\" (UID: \"31747c5d-7e29-4a74-b8d5-3d8efa5e900b\") " pod="openshift-dns/dns-default-pp258" Mar 12 20:50:16.915712 master-0 kubenswrapper[7484]: I0312 20:50:16.915543 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n"] Mar 12 20:50:16.964051 master-0 kubenswrapper[7484]: W0312 20:50:16.963231 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b96dd10_18a0_49f8_b488_63fc2b23da39.slice/crio-b851c1c34b6e9c4cbd3df824f0b5a05e417c5cb1b92ad2b7f01061d2a5c5d6b3 WatchSource:0}: Error finding container b851c1c34b6e9c4cbd3df824f0b5a05e417c5cb1b92ad2b7f01061d2a5c5d6b3: Status 404 returned error can't find the container with id b851c1c34b6e9c4cbd3df824f0b5a05e417c5cb1b92ad2b7f01061d2a5c5d6b3 Mar 12 20:50:17.050553 master-0 kubenswrapper[7484]: I0312 20:50:17.050153 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-9t4hh"] Mar 12 20:50:17.051097 master-0 
kubenswrapper[7484]: I0312 20:50:17.051075 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-9t4hh" Mar 12 20:50:17.127453 master-0 kubenswrapper[7484]: I0312 20:50:17.125872 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5d6659f685-v5vf6"] Mar 12 20:50:17.127453 master-0 kubenswrapper[7484]: I0312 20:50:17.126428 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d6659f685-v5vf6" Mar 12 20:50:17.129838 master-0 kubenswrapper[7484]: I0312 20:50:17.129497 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 20:50:17.129838 master-0 kubenswrapper[7484]: I0312 20:50:17.129636 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 20:50:17.130851 master-0 kubenswrapper[7484]: I0312 20:50:17.130224 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 20:50:17.130851 master-0 kubenswrapper[7484]: I0312 20:50:17.130335 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 20:50:17.131139 master-0 kubenswrapper[7484]: I0312 20:50:17.131097 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 20:50:17.135026 master-0 kubenswrapper[7484]: I0312 20:50:17.134994 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 20:50:17.140558 master-0 kubenswrapper[7484]: I0312 20:50:17.140336 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d6659f685-v5vf6"] Mar 12 20:50:17.154846 master-0 kubenswrapper[7484]: I0312 
20:50:17.154687 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/25866ff2-ce38-4bc0-83d9-0d85b8c6b0ce-hosts-file\") pod \"node-resolver-9t4hh\" (UID: \"25866ff2-ce38-4bc0-83d9-0d85b8c6b0ce\") " pod="openshift-dns/node-resolver-9t4hh" Mar 12 20:50:17.154846 master-0 kubenswrapper[7484]: I0312 20:50:17.154761 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcmzz\" (UniqueName: \"kubernetes.io/projected/25866ff2-ce38-4bc0-83d9-0d85b8c6b0ce-kube-api-access-vcmzz\") pod \"node-resolver-9t4hh\" (UID: \"25866ff2-ce38-4bc0-83d9-0d85b8c6b0ce\") " pod="openshift-dns/node-resolver-9t4hh" Mar 12 20:50:17.259711 master-0 kubenswrapper[7484]: I0312 20:50:17.255377 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f59015c-1312-4c6b-9870-de426ad52bc8-client-ca\") pod \"controller-manager-5d6659f685-v5vf6\" (UID: \"0f59015c-1312-4c6b-9870-de426ad52bc8\") " pod="openshift-controller-manager/controller-manager-5d6659f685-v5vf6" Mar 12 20:50:17.259711 master-0 kubenswrapper[7484]: I0312 20:50:17.255430 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/25866ff2-ce38-4bc0-83d9-0d85b8c6b0ce-hosts-file\") pod \"node-resolver-9t4hh\" (UID: \"25866ff2-ce38-4bc0-83d9-0d85b8c6b0ce\") " pod="openshift-dns/node-resolver-9t4hh" Mar 12 20:50:17.259711 master-0 kubenswrapper[7484]: I0312 20:50:17.255477 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f59015c-1312-4c6b-9870-de426ad52bc8-config\") pod \"controller-manager-5d6659f685-v5vf6\" (UID: \"0f59015c-1312-4c6b-9870-de426ad52bc8\") " 
pod="openshift-controller-manager/controller-manager-5d6659f685-v5vf6" Mar 12 20:50:17.259711 master-0 kubenswrapper[7484]: I0312 20:50:17.255498 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f59015c-1312-4c6b-9870-de426ad52bc8-serving-cert\") pod \"controller-manager-5d6659f685-v5vf6\" (UID: \"0f59015c-1312-4c6b-9870-de426ad52bc8\") " pod="openshift-controller-manager/controller-manager-5d6659f685-v5vf6" Mar 12 20:50:17.259711 master-0 kubenswrapper[7484]: I0312 20:50:17.255519 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcmzz\" (UniqueName: \"kubernetes.io/projected/25866ff2-ce38-4bc0-83d9-0d85b8c6b0ce-kube-api-access-vcmzz\") pod \"node-resolver-9t4hh\" (UID: \"25866ff2-ce38-4bc0-83d9-0d85b8c6b0ce\") " pod="openshift-dns/node-resolver-9t4hh" Mar 12 20:50:17.259711 master-0 kubenswrapper[7484]: I0312 20:50:17.255538 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm8lx\" (UniqueName: \"kubernetes.io/projected/0f59015c-1312-4c6b-9870-de426ad52bc8-kube-api-access-vm8lx\") pod \"controller-manager-5d6659f685-v5vf6\" (UID: \"0f59015c-1312-4c6b-9870-de426ad52bc8\") " pod="openshift-controller-manager/controller-manager-5d6659f685-v5vf6" Mar 12 20:50:17.259711 master-0 kubenswrapper[7484]: I0312 20:50:17.255629 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0f59015c-1312-4c6b-9870-de426ad52bc8-proxy-ca-bundles\") pod \"controller-manager-5d6659f685-v5vf6\" (UID: \"0f59015c-1312-4c6b-9870-de426ad52bc8\") " pod="openshift-controller-manager/controller-manager-5d6659f685-v5vf6" Mar 12 20:50:17.259711 master-0 kubenswrapper[7484]: I0312 20:50:17.255744 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"hosts-file\" (UniqueName: \"kubernetes.io/host-path/25866ff2-ce38-4bc0-83d9-0d85b8c6b0ce-hosts-file\") pod \"node-resolver-9t4hh\" (UID: \"25866ff2-ce38-4bc0-83d9-0d85b8c6b0ce\") " pod="openshift-dns/node-resolver-9t4hh" Mar 12 20:50:17.290828 master-0 kubenswrapper[7484]: I0312 20:50:17.284599 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcmzz\" (UniqueName: \"kubernetes.io/projected/25866ff2-ce38-4bc0-83d9-0d85b8c6b0ce-kube-api-access-vcmzz\") pod \"node-resolver-9t4hh\" (UID: \"25866ff2-ce38-4bc0-83d9-0d85b8c6b0ce\") " pod="openshift-dns/node-resolver-9t4hh" Mar 12 20:50:17.348522 master-0 kubenswrapper[7484]: I0312 20:50:17.348453 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"a35e2486-4d5e-43e5-89c0-c562002717bb","Type":"ContainerStarted","Data":"a6b8b068d61d9dd724915057535283b9904d114374ac0759be8070deebe9ff86"} Mar 12 20:50:17.356924 master-0 kubenswrapper[7484]: I0312 20:50:17.356873 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0f59015c-1312-4c6b-9870-de426ad52bc8-proxy-ca-bundles\") pod \"controller-manager-5d6659f685-v5vf6\" (UID: \"0f59015c-1312-4c6b-9870-de426ad52bc8\") " pod="openshift-controller-manager/controller-manager-5d6659f685-v5vf6" Mar 12 20:50:17.356924 master-0 kubenswrapper[7484]: I0312 20:50:17.356934 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f59015c-1312-4c6b-9870-de426ad52bc8-client-ca\") pod \"controller-manager-5d6659f685-v5vf6\" (UID: \"0f59015c-1312-4c6b-9870-de426ad52bc8\") " pod="openshift-controller-manager/controller-manager-5d6659f685-v5vf6" Mar 12 20:50:17.357127 master-0 kubenswrapper[7484]: I0312 20:50:17.356974 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0f59015c-1312-4c6b-9870-de426ad52bc8-config\") pod \"controller-manager-5d6659f685-v5vf6\" (UID: \"0f59015c-1312-4c6b-9870-de426ad52bc8\") " pod="openshift-controller-manager/controller-manager-5d6659f685-v5vf6" Mar 12 20:50:17.357127 master-0 kubenswrapper[7484]: I0312 20:50:17.356994 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f59015c-1312-4c6b-9870-de426ad52bc8-serving-cert\") pod \"controller-manager-5d6659f685-v5vf6\" (UID: \"0f59015c-1312-4c6b-9870-de426ad52bc8\") " pod="openshift-controller-manager/controller-manager-5d6659f685-v5vf6" Mar 12 20:50:17.357127 master-0 kubenswrapper[7484]: I0312 20:50:17.357015 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm8lx\" (UniqueName: \"kubernetes.io/projected/0f59015c-1312-4c6b-9870-de426ad52bc8-kube-api-access-vm8lx\") pod \"controller-manager-5d6659f685-v5vf6\" (UID: \"0f59015c-1312-4c6b-9870-de426ad52bc8\") " pod="openshift-controller-manager/controller-manager-5d6659f685-v5vf6" Mar 12 20:50:17.357127 master-0 kubenswrapper[7484]: I0312 20:50:17.357058 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/31747c5d-7e29-4a74-b8d5-3d8efa5e900b-metrics-tls\") pod \"dns-default-pp258\" (UID: \"31747c5d-7e29-4a74-b8d5-3d8efa5e900b\") " pod="openshift-dns/dns-default-pp258" Mar 12 20:50:17.360457 master-0 kubenswrapper[7484]: I0312 20:50:17.358562 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f59015c-1312-4c6b-9870-de426ad52bc8-config\") pod \"controller-manager-5d6659f685-v5vf6\" (UID: \"0f59015c-1312-4c6b-9870-de426ad52bc8\") " pod="openshift-controller-manager/controller-manager-5d6659f685-v5vf6" Mar 12 20:50:17.360457 master-0 kubenswrapper[7484]: I0312 20:50:17.359538 7484 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0f59015c-1312-4c6b-9870-de426ad52bc8-proxy-ca-bundles\") pod \"controller-manager-5d6659f685-v5vf6\" (UID: \"0f59015c-1312-4c6b-9870-de426ad52bc8\") " pod="openshift-controller-manager/controller-manager-5d6659f685-v5vf6" Mar 12 20:50:17.360457 master-0 kubenswrapper[7484]: I0312 20:50:17.360061 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f59015c-1312-4c6b-9870-de426ad52bc8-client-ca\") pod \"controller-manager-5d6659f685-v5vf6\" (UID: \"0f59015c-1312-4c6b-9870-de426ad52bc8\") " pod="openshift-controller-manager/controller-manager-5d6659f685-v5vf6" Mar 12 20:50:17.375835 master-0 kubenswrapper[7484]: I0312 20:50:17.361448 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" event={"ID":"8b96dd10-18a0-49f8-b488-63fc2b23da39","Type":"ContainerStarted","Data":"60173c0f9984162f24ad65c25f3ae119353e5fb646ea28da5079828f5c237197"} Mar 12 20:50:17.375835 master-0 kubenswrapper[7484]: I0312 20:50:17.361694 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" event={"ID":"8b96dd10-18a0-49f8-b488-63fc2b23da39","Type":"ContainerStarted","Data":"b851c1c34b6e9c4cbd3df824f0b5a05e417c5cb1b92ad2b7f01061d2a5c5d6b3"} Mar 12 20:50:17.375835 master-0 kubenswrapper[7484]: I0312 20:50:17.363262 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/31747c5d-7e29-4a74-b8d5-3d8efa5e900b-metrics-tls\") pod \"dns-default-pp258\" (UID: \"31747c5d-7e29-4a74-b8d5-3d8efa5e900b\") " pod="openshift-dns/dns-default-pp258" Mar 12 20:50:17.376772 master-0 kubenswrapper[7484]: I0312 20:50:17.376746 7484 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f59015c-1312-4c6b-9870-de426ad52bc8-serving-cert\") pod \"controller-manager-5d6659f685-v5vf6\" (UID: \"0f59015c-1312-4c6b-9870-de426ad52bc8\") " pod="openshift-controller-manager/controller-manager-5d6659f685-v5vf6" Mar 12 20:50:17.383830 master-0 kubenswrapper[7484]: I0312 20:50:17.378190 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-9t4hh" Mar 12 20:50:17.383830 master-0 kubenswrapper[7484]: I0312 20:50:17.380103 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-btxk2" event={"ID":"52839a08-0871-44d3-9d22-b2f6b4383b99","Type":"ContainerStarted","Data":"b8a39bb4e1d632c2cd6d87a2f95e09bc6c8580064dcdd144d12a0b18c48441ac"} Mar 12 20:50:17.383830 master-0 kubenswrapper[7484]: I0312 20:50:17.380167 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-btxk2" event={"ID":"52839a08-0871-44d3-9d22-b2f6b4383b99","Type":"ContainerStarted","Data":"1b62e4b3aff9cc1f8f3d50e3f34ed61a650dc5580fc623a8a894631886f948ab"} Mar 12 20:50:17.393058 master-0 kubenswrapper[7484]: I0312 20:50:17.387631 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm8lx\" (UniqueName: \"kubernetes.io/projected/0f59015c-1312-4c6b-9870-de426ad52bc8-kube-api-access-vm8lx\") pod \"controller-manager-5d6659f685-v5vf6\" (UID: \"0f59015c-1312-4c6b-9870-de426ad52bc8\") " pod="openshift-controller-manager/controller-manager-5d6659f685-v5vf6" Mar 12 20:50:17.393058 master-0 kubenswrapper[7484]: I0312 20:50:17.389442 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" event={"ID":"cf33c432-db42-4c6d-8ee4-f089e5bf8203","Type":"ContainerStarted","Data":"43569dfc922430e6bd267a95f8021d687d4d62fae45fe429dd06793fd1419ff6"} Mar 12 20:50:17.393058 
master-0 kubenswrapper[7484]: I0312 20:50:17.389497 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" event={"ID":"cf33c432-db42-4c6d-8ee4-f089e5bf8203","Type":"ContainerStarted","Data":"9c3da632c5f18897e9ef4fc639ad267aa15c88d97788e82ab67a1bdff6b3ccb6"} Mar 12 20:50:17.393058 master-0 kubenswrapper[7484]: I0312 20:50:17.391586 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"4d69687f-b8a5-4643-8268-ce30df5db3bc","Type":"ContainerStarted","Data":"53a1a855e95809da5db41ddc57b03bad15e98992f9948ca3ac283e20c3052783"} Mar 12 20:50:17.393058 master-0 kubenswrapper[7484]: I0312 20:50:17.391616 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"4d69687f-b8a5-4643-8268-ce30df5db3bc","Type":"ContainerStarted","Data":"052a8ea937b1e18a23a6811afe7fcef8bdf2f48672ff3e7a1ee17b5ba2abf923"} Mar 12 20:50:17.420342 master-0 kubenswrapper[7484]: I0312 20:50:17.413984 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-btxk2" podStartSLOduration=1.4139468929999999 podStartE2EDuration="1.413946893s" podCreationTimestamp="2026-03-12 20:50:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 20:50:17.408735241 +0000 UTC m=+29.894004053" watchObservedRunningTime="2026-03-12 20:50:17.413946893 +0000 UTC m=+29.899215685" Mar 12 20:50:17.461102 master-0 kubenswrapper[7484]: I0312 20:50:17.456003 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d6659f685-v5vf6" Mar 12 20:50:17.547201 master-0 kubenswrapper[7484]: I0312 20:50:17.547148 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-pp258" Mar 12 20:50:17.760846 master-0 kubenswrapper[7484]: I0312 20:50:17.755514 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-1-master-0" podStartSLOduration=1.755494366 podStartE2EDuration="1.755494366s" podCreationTimestamp="2026-03-12 20:50:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 20:50:17.430200554 +0000 UTC m=+29.915469356" watchObservedRunningTime="2026-03-12 20:50:17.755494366 +0000 UTC m=+30.240763188" Mar 12 20:50:17.760846 master-0 kubenswrapper[7484]: I0312 20:50:17.759182 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d6659f685-v5vf6"] Mar 12 20:50:17.903522 master-0 kubenswrapper[7484]: I0312 20:50:17.900425 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-pp258"] Mar 12 20:50:17.922644 master-0 kubenswrapper[7484]: W0312 20:50:17.922433 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31747c5d_7e29_4a74_b8d5_3d8efa5e900b.slice/crio-d35f6aa2489bfe5ece464bdc50b627c81cafeea69d0bf73d6d68ef8609126cf5 WatchSource:0}: Error finding container d35f6aa2489bfe5ece464bdc50b627c81cafeea69d0bf73d6d68ef8609126cf5: Status 404 returned error can't find the container with id d35f6aa2489bfe5ece464bdc50b627c81cafeea69d0bf73d6d68ef8609126cf5 Mar 12 20:50:18.396407 master-0 kubenswrapper[7484]: I0312 20:50:18.396087 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d6659f685-v5vf6" event={"ID":"0f59015c-1312-4c6b-9870-de426ad52bc8","Type":"ContainerStarted","Data":"b66ca2a58cda7fee672cfd544fbb9b288feec97fbc12fdb3c7d9f9d8bddd5735"} Mar 12 20:50:18.397872 master-0 kubenswrapper[7484]: I0312 20:50:18.397670 7484 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" event={"ID":"8b96dd10-18a0-49f8-b488-63fc2b23da39","Type":"ContainerStarted","Data":"f2511ee6a585ce311cd524c29ce1e349ba18deb64a2518fdb20cd96791df398a"} Mar 12 20:50:18.398489 master-0 kubenswrapper[7484]: I0312 20:50:18.398101 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" Mar 12 20:50:18.401446 master-0 kubenswrapper[7484]: I0312 20:50:18.401065 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-9t4hh" event={"ID":"25866ff2-ce38-4bc0-83d9-0d85b8c6b0ce","Type":"ContainerStarted","Data":"e8e7e315ba461be696ceaaa5e653dca6a79074101d4303c319b20e7628a962f0"} Mar 12 20:50:18.401446 master-0 kubenswrapper[7484]: I0312 20:50:18.401100 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-9t4hh" event={"ID":"25866ff2-ce38-4bc0-83d9-0d85b8c6b0ce","Type":"ContainerStarted","Data":"8792e1c546b62b1a483dc750f90553c923da596394a484fb6a82db67b2323633"} Mar 12 20:50:18.403159 master-0 kubenswrapper[7484]: I0312 20:50:18.403125 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" event={"ID":"cf33c432-db42-4c6d-8ee4-f089e5bf8203","Type":"ContainerStarted","Data":"5932e7f75755d53b1d311f0b9e66cf21d66d861e9615083a39ac924565528bfd"} Mar 12 20:50:18.403622 master-0 kubenswrapper[7484]: I0312 20:50:18.403586 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 20:50:18.405367 master-0 kubenswrapper[7484]: I0312 20:50:18.405336 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-pp258" 
event={"ID":"31747c5d-7e29-4a74-b8d5-3d8efa5e900b","Type":"ContainerStarted","Data":"d35f6aa2489bfe5ece464bdc50b627c81cafeea69d0bf73d6d68ef8609126cf5"} Mar 12 20:50:18.413217 master-0 kubenswrapper[7484]: I0312 20:50:18.413130 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" podStartSLOduration=2.4130801330000002 podStartE2EDuration="2.413080133s" podCreationTimestamp="2026-03-12 20:50:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 20:50:18.412047653 +0000 UTC m=+30.897316455" watchObservedRunningTime="2026-03-12 20:50:18.413080133 +0000 UTC m=+30.898348945" Mar 12 20:50:18.518273 master-0 kubenswrapper[7484]: I0312 20:50:18.518169 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" podStartSLOduration=2.51814846 podStartE2EDuration="2.51814846s" podCreationTimestamp="2026-03-12 20:50:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 20:50:18.515712599 +0000 UTC m=+31.000981391" watchObservedRunningTime="2026-03-12 20:50:18.51814846 +0000 UTC m=+31.003417262" Mar 12 20:50:18.518273 master-0 kubenswrapper[7484]: I0312 20:50:18.518341 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-9t4hh" podStartSLOduration=1.518337205 podStartE2EDuration="1.518337205s" podCreationTimestamp="2026-03-12 20:50:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 20:50:18.461392744 +0000 UTC m=+30.946661576" watchObservedRunningTime="2026-03-12 20:50:18.518337205 +0000 UTC m=+31.003606007" Mar 12 20:50:19.778254 master-0 
kubenswrapper[7484]: I0312 20:50:19.778201 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 12 20:50:19.779082 master-0 kubenswrapper[7484]: I0312 20:50:19.778421 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-1-master-0" podUID="a35e2486-4d5e-43e5-89c0-c562002717bb" containerName="installer" containerID="cri-o://a6b8b068d61d9dd724915057535283b9904d114374ac0759be8070deebe9ff86" gracePeriod=30 Mar 12 20:50:20.640507 master-0 kubenswrapper[7484]: I0312 20:50:20.640433 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-hxqgw\" (UID: \"e624e623-6d59-444d-b548-165fa5fd2581\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" Mar 12 20:50:20.640868 master-0 kubenswrapper[7484]: I0312 20:50:20.640527 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-cdcc8\" (UID: \"54184647-6e9a-43f7-90b1-5d8815f8b1ab\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8" Mar 12 20:50:20.641069 master-0 kubenswrapper[7484]: I0312 20:50:20.640960 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert\") pod \"catalog-operator-7d9c49f57b-tpvl4\" (UID: \"98d99166-c42a-4169-87e8-4209570aec50\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4" Mar 12 20:50:20.641069 master-0 kubenswrapper[7484]: I0312 20:50:20.641014 7484 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert\") pod \"olm-operator-d64cfc9db-q9hnk\" (UID: \"07330030-487d-4fa6-b5c3-67607355bbba\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk" Mar 12 20:50:20.642226 master-0 kubenswrapper[7484]: I0312 20:50:20.641228 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs\") pod \"network-metrics-daemon-brdcd\" (UID: \"c8660437-633f-4132-8a61-fe998abb493e\") " pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 20:50:20.642226 master-0 kubenswrapper[7484]: I0312 20:50:20.641311 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-j9tpt\" (UID: \"02649264-040a-41a6-9a41-8bf6416c68ff\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt" Mar 12 20:50:20.642226 master-0 kubenswrapper[7484]: I0312 20:50:20.641371 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs\") pod \"multus-admission-controller-8d675b596-98j9w\" (UID: \"f8f4400c-474c-480f-b46c-cf7c80555004\") " pod="openshift-multus/multus-admission-controller-8d675b596-98j9w" Mar 12 20:50:20.648171 master-0 kubenswrapper[7484]: I0312 20:50:20.647672 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert\") pod \"olm-operator-d64cfc9db-q9hnk\" (UID: \"07330030-487d-4fa6-b5c3-67607355bbba\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk" Mar 12 20:50:20.648171 master-0 kubenswrapper[7484]: I0312 20:50:20.647754 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert\") pod \"catalog-operator-7d9c49f57b-tpvl4\" (UID: \"98d99166-c42a-4169-87e8-4209570aec50\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4" Mar 12 20:50:20.648171 master-0 kubenswrapper[7484]: I0312 20:50:20.648134 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-j9tpt\" (UID: \"02649264-040a-41a6-9a41-8bf6416c68ff\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt" Mar 12 20:50:20.648325 master-0 kubenswrapper[7484]: I0312 20:50:20.648245 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs\") pod \"network-metrics-daemon-brdcd\" (UID: \"c8660437-633f-4132-8a61-fe998abb493e\") " pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 20:50:20.648325 master-0 kubenswrapper[7484]: I0312 20:50:20.648281 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-cdcc8\" (UID: \"54184647-6e9a-43f7-90b1-5d8815f8b1ab\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8" Mar 12 20:50:20.650187 master-0 kubenswrapper[7484]: I0312 20:50:20.650130 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" 
(UniqueName: \"kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs\") pod \"multus-admission-controller-8d675b596-98j9w\" (UID: \"f8f4400c-474c-480f-b46c-cf7c80555004\") " pod="openshift-multus/multus-admission-controller-8d675b596-98j9w" Mar 12 20:50:20.650445 master-0 kubenswrapper[7484]: I0312 20:50:20.650411 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-hxqgw\" (UID: \"e624e623-6d59-444d-b548-165fa5fd2581\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" Mar 12 20:50:20.764319 master-0 kubenswrapper[7484]: I0312 20:50:20.764257 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt" Mar 12 20:50:20.764582 master-0 kubenswrapper[7484]: I0312 20:50:20.764288 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4" Mar 12 20:50:20.764799 master-0 kubenswrapper[7484]: I0312 20:50:20.764777 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" Mar 12 20:50:20.766274 master-0 kubenswrapper[7484]: I0312 20:50:20.766193 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk" Mar 12 20:50:20.766695 master-0 kubenswrapper[7484]: I0312 20:50:20.766615 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8" Mar 12 20:50:20.789424 master-0 kubenswrapper[7484]: I0312 20:50:20.789394 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 20:50:20.802294 master-0 kubenswrapper[7484]: I0312 20:50:20.802255 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-98j9w" Mar 12 20:50:22.177886 master-0 kubenswrapper[7484]: I0312 20:50:22.176243 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 12 20:50:22.177886 master-0 kubenswrapper[7484]: I0312 20:50:22.177194 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 12 20:50:22.220797 master-0 kubenswrapper[7484]: I0312 20:50:22.188462 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 12 20:50:22.269704 master-0 kubenswrapper[7484]: I0312 20:50:22.269657 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7112e2f-17a5-4d98-b410-fb9d9461e8d2-kube-api-access\") pod \"installer-2-master-0\" (UID: \"d7112e2f-17a5-4d98-b410-fb9d9461e8d2\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 12 20:50:22.269704 master-0 kubenswrapper[7484]: I0312 20:50:22.269706 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7112e2f-17a5-4d98-b410-fb9d9461e8d2-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"d7112e2f-17a5-4d98-b410-fb9d9461e8d2\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 12 20:50:22.270091 master-0 kubenswrapper[7484]: I0312 20:50:22.269758 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d7112e2f-17a5-4d98-b410-fb9d9461e8d2-var-lock\") pod \"installer-2-master-0\" 
(UID: \"d7112e2f-17a5-4d98-b410-fb9d9461e8d2\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 12 20:50:22.371072 master-0 kubenswrapper[7484]: I0312 20:50:22.370962 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d7112e2f-17a5-4d98-b410-fb9d9461e8d2-var-lock\") pod \"installer-2-master-0\" (UID: \"d7112e2f-17a5-4d98-b410-fb9d9461e8d2\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 12 20:50:22.371314 master-0 kubenswrapper[7484]: I0312 20:50:22.371093 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d7112e2f-17a5-4d98-b410-fb9d9461e8d2-var-lock\") pod \"installer-2-master-0\" (UID: \"d7112e2f-17a5-4d98-b410-fb9d9461e8d2\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 12 20:50:22.371314 master-0 kubenswrapper[7484]: I0312 20:50:22.371127 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7112e2f-17a5-4d98-b410-fb9d9461e8d2-kube-api-access\") pod \"installer-2-master-0\" (UID: \"d7112e2f-17a5-4d98-b410-fb9d9461e8d2\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 12 20:50:22.371314 master-0 kubenswrapper[7484]: I0312 20:50:22.371157 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7112e2f-17a5-4d98-b410-fb9d9461e8d2-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"d7112e2f-17a5-4d98-b410-fb9d9461e8d2\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 12 20:50:22.371459 master-0 kubenswrapper[7484]: I0312 20:50:22.371335 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7112e2f-17a5-4d98-b410-fb9d9461e8d2-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"d7112e2f-17a5-4d98-b410-fb9d9461e8d2\") " 
pod="openshift-kube-scheduler/installer-2-master-0" Mar 12 20:50:22.388519 master-0 kubenswrapper[7484]: I0312 20:50:22.388463 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7112e2f-17a5-4d98-b410-fb9d9461e8d2-kube-api-access\") pod \"installer-2-master-0\" (UID: \"d7112e2f-17a5-4d98-b410-fb9d9461e8d2\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 12 20:50:22.541440 master-0 kubenswrapper[7484]: I0312 20:50:22.541327 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 12 20:50:22.586214 master-0 kubenswrapper[7484]: I0312 20:50:22.586128 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-h26wj" Mar 12 20:50:24.199634 master-0 kubenswrapper[7484]: I0312 20:50:24.187870 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 12 20:50:24.199634 master-0 kubenswrapper[7484]: I0312 20:50:24.198874 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-98j9w"] Mar 12 20:50:24.207847 master-0 kubenswrapper[7484]: I0312 20:50:24.203884 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt"] Mar 12 20:50:24.207847 master-0 kubenswrapper[7484]: I0312 20:50:24.203942 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw"] Mar 12 20:50:24.207847 master-0 kubenswrapper[7484]: I0312 20:50:24.205912 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk"] Mar 12 20:50:24.222621 master-0 kubenswrapper[7484]: I0312 20:50:24.221231 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8"] Mar 12 20:50:24.415792 master-0 kubenswrapper[7484]: I0312 20:50:24.415750 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4"] Mar 12 20:50:24.438390 master-0 kubenswrapper[7484]: I0312 20:50:24.437919 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4" event={"ID":"98d99166-c42a-4169-87e8-4209570aec50","Type":"ContainerStarted","Data":"a1961e84ee3c3ec3f1933eb0bcae9c2d6f72599a10fb64dc194d15bf1b838126"} Mar 12 20:50:24.440441 master-0 kubenswrapper[7484]: I0312 20:50:24.440382 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8" event={"ID":"54184647-6e9a-43f7-90b1-5d8815f8b1ab","Type":"ContainerStarted","Data":"9bca44d12fa9a760d7165a2ac9ec92b27352a71ff9e364264bb39836d32b6ac9"} Mar 12 20:50:24.440441 master-0 kubenswrapper[7484]: I0312 20:50:24.440437 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8" event={"ID":"54184647-6e9a-43f7-90b1-5d8815f8b1ab","Type":"ContainerStarted","Data":"ce789d8b3134f292701ad6a9879595b336f1a9ddf70665a346e7b380d821900d"} Mar 12 20:50:24.441376 master-0 kubenswrapper[7484]: I0312 20:50:24.441343 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk" event={"ID":"07330030-487d-4fa6-b5c3-67607355bbba","Type":"ContainerStarted","Data":"8436e30f10a58f1975835cc423f1f4b55df282dbfa2eb60a4b2dbe459e6cb442"} Mar 12 20:50:24.447726 master-0 kubenswrapper[7484]: I0312 20:50:24.447684 7484 generic.go:334] "Generic (PLEG): container finished" podID="70baf3e2-83bb-4156-afb3-30ca8e3d1d9d" containerID="63062433342e426f59b2ec0520cb717a967985a843175b969c1cc95d8f71e8d3" 
exitCode=0 Mar 12 20:50:24.447852 master-0 kubenswrapper[7484]: I0312 20:50:24.447766 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-84fb785f4-kl52q" event={"ID":"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d","Type":"ContainerDied","Data":"63062433342e426f59b2ec0520cb717a967985a843175b969c1cc95d8f71e8d3"} Mar 12 20:50:24.449220 master-0 kubenswrapper[7484]: I0312 20:50:24.449186 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt" event={"ID":"02649264-040a-41a6-9a41-8bf6416c68ff","Type":"ContainerStarted","Data":"a9ba476328193f4cef8e964926dcec3d1d9ce3f4dd043deca9d859ee90a08d2e"} Mar 12 20:50:24.454199 master-0 kubenswrapper[7484]: I0312 20:50:24.453974 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-brdcd"] Mar 12 20:50:24.454199 master-0 kubenswrapper[7484]: I0312 20:50:24.454017 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c8884dcfd-psljl" event={"ID":"03748a30-dc0a-4804-b653-12ddc3cfb90b","Type":"ContainerStarted","Data":"516e4a439d4615c02b2e1f89cc6ec93653e5c23c90d5801def6ddace2b3370c9"} Mar 12 20:50:24.454927 master-0 kubenswrapper[7484]: I0312 20:50:24.454884 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5c8884dcfd-psljl" Mar 12 20:50:24.457624 master-0 kubenswrapper[7484]: I0312 20:50:24.457579 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-98j9w" event={"ID":"f8f4400c-474c-480f-b46c-cf7c80555004","Type":"ContainerStarted","Data":"6f74a5945277c25b1d774a22e71b44578b23381c826557245d1753c0354bdea6"} Mar 12 20:50:24.458771 master-0 kubenswrapper[7484]: I0312 20:50:24.458697 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" event={"ID":"e624e623-6d59-444d-b548-165fa5fd2581","Type":"ContainerStarted","Data":"abeff81e503300fd28292fa3a775f0ca878a822311085f8ea3036c4d769c1e10"} Mar 12 20:50:24.460061 master-0 kubenswrapper[7484]: I0312 20:50:24.460034 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-pp258" event={"ID":"31747c5d-7e29-4a74-b8d5-3d8efa5e900b","Type":"ContainerStarted","Data":"d1cb5848efc8cec6ef00ee9a0c2c112c344a7cdd8f4ba7fc2057b8dad2abcf6c"} Mar 12 20:50:24.460061 master-0 kubenswrapper[7484]: I0312 20:50:24.460061 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-pp258" event={"ID":"31747c5d-7e29-4a74-b8d5-3d8efa5e900b","Type":"ContainerStarted","Data":"fb1ffe54ce081ce0d49131ae5af1e3779be357b000ef2f1eaf60019825b5c5c6"} Mar 12 20:50:24.460910 master-0 kubenswrapper[7484]: I0312 20:50:24.460889 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-pp258" Mar 12 20:50:24.461373 master-0 kubenswrapper[7484]: I0312 20:50:24.461349 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5c8884dcfd-psljl" Mar 12 20:50:24.462443 master-0 kubenswrapper[7484]: I0312 20:50:24.462394 7484 generic.go:334] "Generic (PLEG): container finished" podID="36bd483b-292e-4e82-99d6-daa612cd385a" containerID="267a64486f8cbc2e49d6948157350cf49703f8760c6b07509071b5afa54518d3" exitCode=0 Mar 12 20:50:24.462526 master-0 kubenswrapper[7484]: I0312 20:50:24.462459 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" event={"ID":"36bd483b-292e-4e82-99d6-daa612cd385a","Type":"ContainerDied","Data":"267a64486f8cbc2e49d6948157350cf49703f8760c6b07509071b5afa54518d3"} Mar 12 20:50:24.469295 master-0 kubenswrapper[7484]: I0312 20:50:24.469232 7484 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-controller-manager/controller-manager-5d6659f685-v5vf6" event={"ID":"0f59015c-1312-4c6b-9870-de426ad52bc8","Type":"ContainerStarted","Data":"5dd76bde522b9612a90b2b6c87a13b0073c9811145c431602802b1917a910952"} Mar 12 20:50:24.471323 master-0 kubenswrapper[7484]: I0312 20:50:24.469854 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5d6659f685-v5vf6" Mar 12 20:50:24.476382 master-0 kubenswrapper[7484]: I0312 20:50:24.476337 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"d7112e2f-17a5-4d98-b410-fb9d9461e8d2","Type":"ContainerStarted","Data":"82d297a9ce5daf847e4e2fbf19739e9cce03ee1eb2c97f5119a66d117ecf9649"} Mar 12 20:50:24.478896 master-0 kubenswrapper[7484]: I0312 20:50:24.478137 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5d6659f685-v5vf6" Mar 12 20:50:24.485732 master-0 kubenswrapper[7484]: I0312 20:50:24.485647 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-pp258" podStartSLOduration=2.707561601 podStartE2EDuration="8.485628008s" podCreationTimestamp="2026-03-12 20:50:16 +0000 UTC" firstStartedPulling="2026-03-12 20:50:17.92395721 +0000 UTC m=+30.409226012" lastFinishedPulling="2026-03-12 20:50:23.702023617 +0000 UTC m=+36.187292419" observedRunningTime="2026-03-12 20:50:24.484832125 +0000 UTC m=+36.970100957" watchObservedRunningTime="2026-03-12 20:50:24.485628008 +0000 UTC m=+36.970896810" Mar 12 20:50:24.514054 master-0 kubenswrapper[7484]: I0312 20:50:24.511755 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5d6659f685-v5vf6" podStartSLOduration=5.694603826 podStartE2EDuration="11.511731825s" podCreationTimestamp="2026-03-12 20:50:13 +0000 UTC" firstStartedPulling="2026-03-12 
20:50:17.791117309 +0000 UTC m=+30.276386131" lastFinishedPulling="2026-03-12 20:50:23.608245328 +0000 UTC m=+36.093514130" observedRunningTime="2026-03-12 20:50:24.505010801 +0000 UTC m=+36.990279623" watchObservedRunningTime="2026-03-12 20:50:24.511731825 +0000 UTC m=+36.997000627" Mar 12 20:50:24.543790 master-0 kubenswrapper[7484]: I0312 20:50:24.543345 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5c8884dcfd-psljl" podStartSLOduration=4.272119101 podStartE2EDuration="11.543331061s" podCreationTimestamp="2026-03-12 20:50:13 +0000 UTC" firstStartedPulling="2026-03-12 20:50:15.644284371 +0000 UTC m=+28.129553173" lastFinishedPulling="2026-03-12 20:50:22.915496291 +0000 UTC m=+35.400765133" observedRunningTime="2026-03-12 20:50:24.542183078 +0000 UTC m=+37.027451880" watchObservedRunningTime="2026-03-12 20:50:24.543331061 +0000 UTC m=+37.028599863" Mar 12 20:50:25.485476 master-0 kubenswrapper[7484]: I0312 20:50:25.484603 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" event={"ID":"36bd483b-292e-4e82-99d6-daa612cd385a","Type":"ContainerStarted","Data":"11f56d3c6b276de22b11d2c7f85f1e553db400c0cc9e255e81b629524d5a11f7"} Mar 12 20:50:25.488303 master-0 kubenswrapper[7484]: I0312 20:50:25.488141 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-brdcd" event={"ID":"c8660437-633f-4132-8a61-fe998abb493e","Type":"ContainerStarted","Data":"2367b2036b6ee449144934121f0846ae9e3677f2ee334526852b810631391c36"} Mar 12 20:50:25.490940 master-0 kubenswrapper[7484]: I0312 20:50:25.490881 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"d7112e2f-17a5-4d98-b410-fb9d9461e8d2","Type":"ContainerStarted","Data":"b9b8234c9d90d3b5fbdf126478ee7b3289630b60bd0893e5d9d337aa7564482c"} Mar 12 20:50:25.494535 master-0 
kubenswrapper[7484]: I0312 20:50:25.494200 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-84fb785f4-kl52q" event={"ID":"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d","Type":"ContainerStarted","Data":"5f29c7388b551efaef377ac71d58c4587b7aaba4316afbb780ca0c015ea5940d"} Mar 12 20:50:25.494535 master-0 kubenswrapper[7484]: I0312 20:50:25.494226 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-84fb785f4-kl52q" event={"ID":"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d","Type":"ContainerStarted","Data":"185fc818691b916494ba99b4f9b0b9a6eecf8de4568aed7987d2c55590a17b8f"} Mar 12 20:50:25.525403 master-0 kubenswrapper[7484]: I0312 20:50:25.525320 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" podStartSLOduration=4.330829458 podStartE2EDuration="11.525300783s" podCreationTimestamp="2026-03-12 20:50:14 +0000 UTC" firstStartedPulling="2026-03-12 20:50:15.72117078 +0000 UTC m=+28.206439582" lastFinishedPulling="2026-03-12 20:50:22.915642085 +0000 UTC m=+35.400910907" observedRunningTime="2026-03-12 20:50:25.506306333 +0000 UTC m=+37.991575155" watchObservedRunningTime="2026-03-12 20:50:25.525300783 +0000 UTC m=+38.010569585" Mar 12 20:50:25.525626 master-0 kubenswrapper[7484]: I0312 20:50:25.525566 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-84fb785f4-kl52q" podStartSLOduration=13.258404548 podStartE2EDuration="20.525562221s" podCreationTimestamp="2026-03-12 20:50:05 +0000 UTC" firstStartedPulling="2026-03-12 20:50:15.648875874 +0000 UTC m=+28.134144676" lastFinishedPulling="2026-03-12 20:50:22.916033547 +0000 UTC m=+35.401302349" observedRunningTime="2026-03-12 20:50:25.525261502 +0000 UTC m=+38.010530304" watchObservedRunningTime="2026-03-12 20:50:25.525562221 +0000 UTC m=+38.010831023" Mar 12 20:50:25.544751 master-0 kubenswrapper[7484]: I0312 20:50:25.544693 7484 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-2-master-0" podStartSLOduration=3.544678745 podStartE2EDuration="3.544678745s" podCreationTimestamp="2026-03-12 20:50:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 20:50:25.544389217 +0000 UTC m=+38.029658019" watchObservedRunningTime="2026-03-12 20:50:25.544678745 +0000 UTC m=+38.029947547"
Mar 12 20:50:26.584669 master-0 kubenswrapper[7484]: I0312 20:50:26.584594 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw"
Mar 12 20:50:26.652051 master-0 kubenswrapper[7484]: I0312 20:50:26.649434 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n"
Mar 12 20:50:27.502580 master-0 kubenswrapper[7484]: I0312 20:50:27.502513 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-84fb785f4-kl52q"
Mar 12 20:50:27.502944 master-0 kubenswrapper[7484]: I0312 20:50:27.502858 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-84fb785f4-kl52q"
Mar 12 20:50:27.510071 master-0 kubenswrapper[7484]: I0312 20:50:27.509782 7484 patch_prober.go:28] interesting pod/apiserver-84fb785f4-kl52q container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Mar 12 20:50:27.510071 master-0 kubenswrapper[7484]: [+]log ok
Mar 12 20:50:27.510071 master-0 kubenswrapper[7484]: [+]etcd ok
Mar 12 20:50:27.510071 master-0 kubenswrapper[7484]: [+]poststarthook/start-apiserver-admission-initializer ok
Mar 12 20:50:27.510071 master-0 kubenswrapper[7484]: [+]poststarthook/generic-apiserver-start-informers ok
Mar 12 20:50:27.510071 master-0 kubenswrapper[7484]: [+]poststarthook/max-in-flight-filter ok
Mar 12 20:50:27.510071 master-0 kubenswrapper[7484]: [+]poststarthook/storage-object-count-tracker-hook ok
Mar 12 20:50:27.510071 master-0 kubenswrapper[7484]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Mar 12 20:50:27.510071 master-0 kubenswrapper[7484]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Mar 12 20:50:27.510071 master-0 kubenswrapper[7484]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Mar 12 20:50:27.510071 master-0 kubenswrapper[7484]: [+]poststarthook/project.openshift.io-projectcache ok
Mar 12 20:50:27.510071 master-0 kubenswrapper[7484]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Mar 12 20:50:27.510071 master-0 kubenswrapper[7484]: [+]poststarthook/openshift.io-startinformers ok
Mar 12 20:50:27.510071 master-0 kubenswrapper[7484]: [+]poststarthook/openshift.io-restmapperupdater ok
Mar 12 20:50:27.510071 master-0 kubenswrapper[7484]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Mar 12 20:50:27.510071 master-0 kubenswrapper[7484]: livez check failed
Mar 12 20:50:27.510504 master-0 kubenswrapper[7484]: I0312 20:50:27.510063 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-84fb785f4-kl52q" podUID="70baf3e2-83bb-4156-afb3-30ca8e3d1d9d" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:50:28.491500 master-0 kubenswrapper[7484]: I0312 20:50:28.491222 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"]
Mar 12 20:50:28.492121 master-0 kubenswrapper[7484]: I0312 20:50:28.491752 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Mar 12 20:50:28.495116 master-0 kubenswrapper[7484]: I0312 20:50:28.494722 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 12 20:50:28.618837 master-0 kubenswrapper[7484]: I0312 20:50:28.618724 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"]
Mar 12 20:50:29.232827 master-0 kubenswrapper[7484]: I0312 20:50:29.230529 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/869e3d2a-1b5c-426f-945a-ddd44a9a5033-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"869e3d2a-1b5c-426f-945a-ddd44a9a5033\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 12 20:50:29.232827 master-0 kubenswrapper[7484]: I0312 20:50:29.230633 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/869e3d2a-1b5c-426f-945a-ddd44a9a5033-kube-api-access\") pod \"installer-1-master-0\" (UID: \"869e3d2a-1b5c-426f-945a-ddd44a9a5033\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 12 20:50:29.232827 master-0 kubenswrapper[7484]: I0312 20:50:29.230678 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/869e3d2a-1b5c-426f-945a-ddd44a9a5033-var-lock\") pod \"installer-1-master-0\" (UID: \"869e3d2a-1b5c-426f-945a-ddd44a9a5033\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 12 20:50:29.338829 master-0 kubenswrapper[7484]: I0312 20:50:29.335299 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/869e3d2a-1b5c-426f-945a-ddd44a9a5033-kube-api-access\") pod \"installer-1-master-0\" (UID: \"869e3d2a-1b5c-426f-945a-ddd44a9a5033\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 12 20:50:29.338829 master-0 kubenswrapper[7484]: I0312 20:50:29.335384 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/869e3d2a-1b5c-426f-945a-ddd44a9a5033-var-lock\") pod \"installer-1-master-0\" (UID: \"869e3d2a-1b5c-426f-945a-ddd44a9a5033\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 12 20:50:29.338829 master-0 kubenswrapper[7484]: I0312 20:50:29.335460 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/869e3d2a-1b5c-426f-945a-ddd44a9a5033-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"869e3d2a-1b5c-426f-945a-ddd44a9a5033\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 12 20:50:29.338829 master-0 kubenswrapper[7484]: I0312 20:50:29.335535 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/869e3d2a-1b5c-426f-945a-ddd44a9a5033-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"869e3d2a-1b5c-426f-945a-ddd44a9a5033\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 12 20:50:29.338829 master-0 kubenswrapper[7484]: I0312 20:50:29.335878 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/869e3d2a-1b5c-426f-945a-ddd44a9a5033-var-lock\") pod \"installer-1-master-0\" (UID: \"869e3d2a-1b5c-426f-945a-ddd44a9a5033\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 12 20:50:29.344571 master-0 kubenswrapper[7484]: I0312 20:50:29.343293 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Mar 12 20:50:29.344571 master-0 kubenswrapper[7484]: I0312 20:50:29.343987 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 12 20:50:29.348936 master-0 kubenswrapper[7484]: I0312 20:50:29.347042 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Mar 12 20:50:29.417858 master-0 kubenswrapper[7484]: I0312 20:50:29.413083 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Mar 12 20:50:29.423841 master-0 kubenswrapper[7484]: I0312 20:50:29.420072 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/869e3d2a-1b5c-426f-945a-ddd44a9a5033-kube-api-access\") pod \"installer-1-master-0\" (UID: \"869e3d2a-1b5c-426f-945a-ddd44a9a5033\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 12 20:50:29.428772 master-0 kubenswrapper[7484]: I0312 20:50:29.426799 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Mar 12 20:50:29.438297 master-0 kubenswrapper[7484]: I0312 20:50:29.435999 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5bec49ae-0c52-451f-8d8d-6e822cd335cc-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"5bec49ae-0c52-451f-8d8d-6e822cd335cc\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 12 20:50:29.438297 master-0 kubenswrapper[7484]: I0312 20:50:29.436089 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5bec49ae-0c52-451f-8d8d-6e822cd335cc-kube-api-access\") pod \"installer-1-master-0\" (UID: \"5bec49ae-0c52-451f-8d8d-6e822cd335cc\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 12 20:50:29.438297 master-0 kubenswrapper[7484]: I0312 20:50:29.436125 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5bec49ae-0c52-451f-8d8d-6e822cd335cc-var-lock\") pod \"installer-1-master-0\" (UID: \"5bec49ae-0c52-451f-8d8d-6e822cd335cc\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 12 20:50:29.537718 master-0 kubenswrapper[7484]: I0312 20:50:29.537257 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5bec49ae-0c52-451f-8d8d-6e822cd335cc-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"5bec49ae-0c52-451f-8d8d-6e822cd335cc\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 12 20:50:29.537718 master-0 kubenswrapper[7484]: I0312 20:50:29.537354 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5bec49ae-0c52-451f-8d8d-6e822cd335cc-kube-api-access\") pod \"installer-1-master-0\" (UID: \"5bec49ae-0c52-451f-8d8d-6e822cd335cc\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 12 20:50:29.537718 master-0 kubenswrapper[7484]: I0312 20:50:29.537390 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5bec49ae-0c52-451f-8d8d-6e822cd335cc-var-lock\") pod \"installer-1-master-0\" (UID: \"5bec49ae-0c52-451f-8d8d-6e822cd335cc\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 12 20:50:29.537718 master-0 kubenswrapper[7484]: I0312 20:50:29.537484 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5bec49ae-0c52-451f-8d8d-6e822cd335cc-var-lock\") pod \"installer-1-master-0\" (UID: \"5bec49ae-0c52-451f-8d8d-6e822cd335cc\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 12 20:50:29.537718 master-0 kubenswrapper[7484]: I0312 20:50:29.537528 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5bec49ae-0c52-451f-8d8d-6e822cd335cc-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"5bec49ae-0c52-451f-8d8d-6e822cd335cc\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 12 20:50:29.570968 master-0 kubenswrapper[7484]: I0312 20:50:29.568007 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5bec49ae-0c52-451f-8d8d-6e822cd335cc-kube-api-access\") pod \"installer-1-master-0\" (UID: \"5bec49ae-0c52-451f-8d8d-6e822cd335cc\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 12 20:50:29.698841 master-0 kubenswrapper[7484]: I0312 20:50:29.697302 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl"]
Mar 12 20:50:29.698841 master-0 kubenswrapper[7484]: I0312 20:50:29.697528 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl" podUID="1a307172-f010-4bad-a3fc-31607574b069" containerName="cluster-version-operator" containerID="cri-o://23ae5af3ec50031824696b7d04e8e15e4b08545207e52bcdac99d821e85a768e" gracePeriod=130
Mar 12 20:50:29.705752 master-0 kubenswrapper[7484]: I0312 20:50:29.705703 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 12 20:50:30.182453 master-0 kubenswrapper[7484]: I0312 20:50:30.182390 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 12 20:50:30.182831 master-0 kubenswrapper[7484]: I0312 20:50:30.182750 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-2-master-0" podUID="d7112e2f-17a5-4d98-b410-fb9d9461e8d2" containerName="installer" containerID="cri-o://b9b8234c9d90d3b5fbdf126478ee7b3289630b60bd0893e5d9d337aa7564482c" gracePeriod=30
Mar 12 20:50:30.324962 master-0 kubenswrapper[7484]: I0312 20:50:30.320183 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c"
Mar 12 20:50:30.324962 master-0 kubenswrapper[7484]: I0312 20:50:30.323533 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c"
Mar 12 20:50:30.337443 master-0 kubenswrapper[7484]: I0312 20:50:30.337195 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c"
Mar 12 20:50:30.552518 master-0 kubenswrapper[7484]: I0312 20:50:30.549184 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c"
Mar 12 20:50:31.473888 master-0 kubenswrapper[7484]: I0312 20:50:31.473840 7484 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl"
Mar 12 20:50:31.573641 master-0 kubenswrapper[7484]: I0312 20:50:31.570126 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_d7112e2f-17a5-4d98-b410-fb9d9461e8d2/installer/0.log"
Mar 12 20:50:31.573641 master-0 kubenswrapper[7484]: I0312 20:50:31.570170 7484 generic.go:334] "Generic (PLEG): container finished" podID="d7112e2f-17a5-4d98-b410-fb9d9461e8d2" containerID="b9b8234c9d90d3b5fbdf126478ee7b3289630b60bd0893e5d9d337aa7564482c" exitCode=1
Mar 12 20:50:31.573641 master-0 kubenswrapper[7484]: I0312 20:50:31.570223 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"d7112e2f-17a5-4d98-b410-fb9d9461e8d2","Type":"ContainerDied","Data":"b9b8234c9d90d3b5fbdf126478ee7b3289630b60bd0893e5d9d337aa7564482c"}
Mar 12 20:50:31.593240 master-0 kubenswrapper[7484]: I0312 20:50:31.589038 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d6659f685-v5vf6"]
Mar 12 20:50:31.593240 master-0 kubenswrapper[7484]: I0312 20:50:31.589273 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5d6659f685-v5vf6" podUID="0f59015c-1312-4c6b-9870-de426ad52bc8" containerName="controller-manager" containerID="cri-o://5dd76bde522b9612a90b2b6c87a13b0073c9811145c431602802b1917a910952" gracePeriod=30
Mar 12 20:50:31.606201 master-0 kubenswrapper[7484]: I0312 20:50:31.606162 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert\") pod \"1a307172-f010-4bad-a3fc-31607574b069\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") "
Mar 12 20:50:31.606389 master-0 kubenswrapper[7484]: I0312 20:50:31.606228 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1a307172-f010-4bad-a3fc-31607574b069-service-ca\") pod \"1a307172-f010-4bad-a3fc-31607574b069\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") "
Mar 12 20:50:31.606389 master-0 kubenswrapper[7484]: I0312 20:50:31.606258 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a307172-f010-4bad-a3fc-31607574b069-kube-api-access\") pod \"1a307172-f010-4bad-a3fc-31607574b069\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") "
Mar 12 20:50:31.606389 master-0 kubenswrapper[7484]: I0312 20:50:31.606279 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1a307172-f010-4bad-a3fc-31607574b069-etc-ssl-certs\") pod \"1a307172-f010-4bad-a3fc-31607574b069\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") "
Mar 12 20:50:31.606389 master-0 kubenswrapper[7484]: I0312 20:50:31.606297 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1a307172-f010-4bad-a3fc-31607574b069-etc-cvo-updatepayloads\") pod \"1a307172-f010-4bad-a3fc-31607574b069\" (UID: \"1a307172-f010-4bad-a3fc-31607574b069\") "
Mar 12 20:50:31.606599 master-0 kubenswrapper[7484]: I0312 20:50:31.606560 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a307172-f010-4bad-a3fc-31607574b069-etc-cvo-updatepayloads" (OuterVolumeSpecName: "etc-cvo-updatepayloads") pod "1a307172-f010-4bad-a3fc-31607574b069" (UID: "1a307172-f010-4bad-a3fc-31607574b069"). InnerVolumeSpecName "etc-cvo-updatepayloads". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 20:50:31.611113 master-0 kubenswrapper[7484]: I0312 20:50:31.606935 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a307172-f010-4bad-a3fc-31607574b069-etc-ssl-certs" (OuterVolumeSpecName: "etc-ssl-certs") pod "1a307172-f010-4bad-a3fc-31607574b069" (UID: "1a307172-f010-4bad-a3fc-31607574b069"). InnerVolumeSpecName "etc-ssl-certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 20:50:31.625983 master-0 kubenswrapper[7484]: I0312 20:50:31.618110 7484 generic.go:334] "Generic (PLEG): container finished" podID="1a307172-f010-4bad-a3fc-31607574b069" containerID="23ae5af3ec50031824696b7d04e8e15e4b08545207e52bcdac99d821e85a768e" exitCode=0
Mar 12 20:50:31.625983 master-0 kubenswrapper[7484]: I0312 20:50:31.618201 7484 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl"
Mar 12 20:50:31.625983 master-0 kubenswrapper[7484]: I0312 20:50:31.618327 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl" event={"ID":"1a307172-f010-4bad-a3fc-31607574b069","Type":"ContainerDied","Data":"23ae5af3ec50031824696b7d04e8e15e4b08545207e52bcdac99d821e85a768e"}
Mar 12 20:50:31.625983 master-0 kubenswrapper[7484]: I0312 20:50:31.618373 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl" event={"ID":"1a307172-f010-4bad-a3fc-31607574b069","Type":"ContainerDied","Data":"a8cc5f9e5cee5d74f6994e756dde73b1668f4705c942563115821df2efd277cf"}
Mar 12 20:50:31.625983 master-0 kubenswrapper[7484]: I0312 20:50:31.618401 7484 scope.go:117] "RemoveContainer" containerID="23ae5af3ec50031824696b7d04e8e15e4b08545207e52bcdac99d821e85a768e"
Mar 12 20:50:31.625983 master-0 kubenswrapper[7484]: I0312 20:50:31.618980 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a307172-f010-4bad-a3fc-31607574b069-service-ca" (OuterVolumeSpecName: "service-ca") pod "1a307172-f010-4bad-a3fc-31607574b069" (UID: "1a307172-f010-4bad-a3fc-31607574b069"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 20:50:31.692455 master-0 kubenswrapper[7484]: I0312 20:50:31.682699 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a307172-f010-4bad-a3fc-31607574b069-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1a307172-f010-4bad-a3fc-31607574b069" (UID: "1a307172-f010-4bad-a3fc-31607574b069"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 20:50:31.692455 master-0 kubenswrapper[7484]: I0312 20:50:31.683244 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1a307172-f010-4bad-a3fc-31607574b069" (UID: "1a307172-f010-4bad-a3fc-31607574b069"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 20:50:31.692455 master-0 kubenswrapper[7484]: I0312 20:50:31.684030 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c8884dcfd-psljl"]
Mar 12 20:50:31.692455 master-0 kubenswrapper[7484]: I0312 20:50:31.692219 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5c8884dcfd-psljl" podUID="03748a30-dc0a-4804-b653-12ddc3cfb90b" containerName="route-controller-manager" containerID="cri-o://516e4a439d4615c02b2e1f89cc6ec93653e5c23c90d5801def6ddace2b3370c9" gracePeriod=30
Mar 12 20:50:31.714143 master-0 kubenswrapper[7484]: I0312 20:50:31.707894 7484 scope.go:117] "RemoveContainer" containerID="23ae5af3ec50031824696b7d04e8e15e4b08545207e52bcdac99d821e85a768e"
Mar 12 20:50:31.714143 master-0 kubenswrapper[7484]: E0312 20:50:31.708248 7484 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23ae5af3ec50031824696b7d04e8e15e4b08545207e52bcdac99d821e85a768e\": container with ID starting with 23ae5af3ec50031824696b7d04e8e15e4b08545207e52bcdac99d821e85a768e not found: ID does not exist" containerID="23ae5af3ec50031824696b7d04e8e15e4b08545207e52bcdac99d821e85a768e"
Mar 12 20:50:31.714143 master-0 kubenswrapper[7484]: I0312 20:50:31.708273 7484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23ae5af3ec50031824696b7d04e8e15e4b08545207e52bcdac99d821e85a768e"} err="failed to get container status \"23ae5af3ec50031824696b7d04e8e15e4b08545207e52bcdac99d821e85a768e\": rpc error: code = NotFound desc = could not find container \"23ae5af3ec50031824696b7d04e8e15e4b08545207e52bcdac99d821e85a768e\": container with ID starting with 23ae5af3ec50031824696b7d04e8e15e4b08545207e52bcdac99d821e85a768e not found: ID does not exist"
Mar 12 20:50:31.715620 master-0 kubenswrapper[7484]: I0312 20:50:31.715579 7484 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a307172-f010-4bad-a3fc-31607574b069-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 12 20:50:31.715729 master-0 kubenswrapper[7484]: I0312 20:50:31.715712 7484 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1a307172-f010-4bad-a3fc-31607574b069-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 12 20:50:31.715729 master-0 kubenswrapper[7484]: I0312 20:50:31.715727 7484 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a307172-f010-4bad-a3fc-31607574b069-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 12 20:50:31.715825 master-0 kubenswrapper[7484]: I0312 20:50:31.715737 7484 reconciler_common.go:293] "Volume detached for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1a307172-f010-4bad-a3fc-31607574b069-etc-ssl-certs\") on node \"master-0\" DevicePath \"\""
Mar 12 20:50:31.715825 master-0 kubenswrapper[7484]: I0312 20:50:31.715766 7484 reconciler_common.go:293] "Volume detached for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1a307172-f010-4bad-a3fc-31607574b069-etc-cvo-updatepayloads\") on node \"master-0\" DevicePath \"\""
Mar 12 20:50:31.864818 master-0 kubenswrapper[7484]: I0312 20:50:31.864446 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_d7112e2f-17a5-4d98-b410-fb9d9461e8d2/installer/0.log"
Mar 12 20:50:31.864818 master-0 kubenswrapper[7484]: I0312 20:50:31.864504 7484 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Mar 12 20:50:31.937847 master-0 kubenswrapper[7484]: I0312 20:50:31.937778 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl"]
Mar 12 20:50:31.944551 master-0 kubenswrapper[7484]: I0312 20:50:31.944088 7484 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-version/cluster-version-operator-745944c6b7-wddgl"]
Mar 12 20:50:31.995211 master-0 kubenswrapper[7484]: I0312 20:50:31.994076 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-8c9c967c7-g4bkd"]
Mar 12 20:50:31.995211 master-0 kubenswrapper[7484]: E0312 20:50:31.994450 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a307172-f010-4bad-a3fc-31607574b069" containerName="cluster-version-operator"
Mar 12 20:50:31.995211 master-0 kubenswrapper[7484]: I0312 20:50:31.994466 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a307172-f010-4bad-a3fc-31607574b069" containerName="cluster-version-operator"
Mar 12 20:50:31.995211 master-0 kubenswrapper[7484]: E0312 20:50:31.994478 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7112e2f-17a5-4d98-b410-fb9d9461e8d2" containerName="installer"
Mar 12 20:50:31.995211 master-0 kubenswrapper[7484]: I0312 20:50:31.994486 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7112e2f-17a5-4d98-b410-fb9d9461e8d2" containerName="installer"
Mar 12 20:50:31.995211 master-0 kubenswrapper[7484]: I0312 20:50:31.994680 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a307172-f010-4bad-a3fc-31607574b069" containerName="cluster-version-operator"
Mar 12 20:50:31.995211 master-0 kubenswrapper[7484]: I0312 20:50:31.994702 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7112e2f-17a5-4d98-b410-fb9d9461e8d2" containerName="installer"
Mar 12 20:50:31.997672 master-0 kubenswrapper[7484]: I0312 20:50:31.997635 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-g4bkd"
Mar 12 20:50:32.017869 master-0 kubenswrapper[7484]: I0312 20:50:32.017823 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 12 20:50:32.019355 master-0 kubenswrapper[7484]: I0312 20:50:32.018501 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 12 20:50:32.019355 master-0 kubenswrapper[7484]: I0312 20:50:32.019065 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d7112e2f-17a5-4d98-b410-fb9d9461e8d2-var-lock\") pod \"d7112e2f-17a5-4d98-b410-fb9d9461e8d2\" (UID: \"d7112e2f-17a5-4d98-b410-fb9d9461e8d2\") "
Mar 12 20:50:32.019355 master-0 kubenswrapper[7484]: I0312 20:50:32.019173 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7112e2f-17a5-4d98-b410-fb9d9461e8d2-kube-api-access\") pod \"d7112e2f-17a5-4d98-b410-fb9d9461e8d2\" (UID: \"d7112e2f-17a5-4d98-b410-fb9d9461e8d2\") "
Mar 12 20:50:32.019355 master-0 kubenswrapper[7484]: I0312 20:50:32.019243 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7112e2f-17a5-4d98-b410-fb9d9461e8d2-kubelet-dir\") pod \"d7112e2f-17a5-4d98-b410-fb9d9461e8d2\" (UID: \"d7112e2f-17a5-4d98-b410-fb9d9461e8d2\") "
Mar 12 20:50:32.019508 master-0 kubenswrapper[7484]: I0312 20:50:32.019246 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 12 20:50:32.019587 master-0 kubenswrapper[7484]: I0312 20:50:32.019334 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7112e2f-17a5-4d98-b410-fb9d9461e8d2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d7112e2f-17a5-4d98-b410-fb9d9461e8d2" (UID: "d7112e2f-17a5-4d98-b410-fb9d9461e8d2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 20:50:32.019694 master-0 kubenswrapper[7484]: I0312 20:50:32.019293 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7112e2f-17a5-4d98-b410-fb9d9461e8d2-var-lock" (OuterVolumeSpecName: "var-lock") pod "d7112e2f-17a5-4d98-b410-fb9d9461e8d2" (UID: "d7112e2f-17a5-4d98-b410-fb9d9461e8d2"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 20:50:32.039527 master-0 kubenswrapper[7484]: I0312 20:50:32.039391 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Mar 12 20:50:32.039527 master-0 kubenswrapper[7484]: I0312 20:50:32.039388 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7112e2f-17a5-4d98-b410-fb9d9461e8d2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7112e2f-17a5-4d98-b410-fb9d9461e8d2" (UID: "d7112e2f-17a5-4d98-b410-fb9d9461e8d2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 20:50:32.039827 master-0 kubenswrapper[7484]: I0312 20:50:32.039782 7484 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7112e2f-17a5-4d98-b410-fb9d9461e8d2-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 20:50:32.039890 master-0 kubenswrapper[7484]: I0312 20:50:32.039843 7484 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d7112e2f-17a5-4d98-b410-fb9d9461e8d2-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 12 20:50:32.039890 master-0 kubenswrapper[7484]: I0312 20:50:32.039855 7484 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7112e2f-17a5-4d98-b410-fb9d9461e8d2-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 12 20:50:32.113170 master-0 kubenswrapper[7484]: I0312 20:50:32.113119 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"]
Mar 12 20:50:32.143399 master-0 kubenswrapper[7484]: I0312 20:50:32.143237 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/83368183-0368-44b1-9387-eed32b211988-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-g4bkd\" (UID: \"83368183-0368-44b1-9387-eed32b211988\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-g4bkd"
Mar 12 20:50:32.143399 master-0 kubenswrapper[7484]: I0312 20:50:32.143299 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/83368183-0368-44b1-9387-eed32b211988-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-g4bkd\" (UID: \"83368183-0368-44b1-9387-eed32b211988\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-g4bkd"
Mar 12 20:50:32.143399 master-0 kubenswrapper[7484]: I0312 20:50:32.143340 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/83368183-0368-44b1-9387-eed32b211988-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-g4bkd\" (UID: \"83368183-0368-44b1-9387-eed32b211988\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-g4bkd"
Mar 12 20:50:32.143399 master-0 kubenswrapper[7484]: I0312 20:50:32.143366 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83368183-0368-44b1-9387-eed32b211988-service-ca\") pod \"cluster-version-operator-8c9c967c7-g4bkd\" (UID: \"83368183-0368-44b1-9387-eed32b211988\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-g4bkd"
Mar 12 20:50:32.143399 master-0 kubenswrapper[7484]: I0312 20:50:32.143391 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83368183-0368-44b1-9387-eed32b211988-serving-cert\") pod \"cluster-version-operator-8c9c967c7-g4bkd\" (UID: \"83368183-0368-44b1-9387-eed32b211988\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-g4bkd"
Mar 12 20:50:32.149218 master-0 kubenswrapper[7484]: I0312 20:50:32.149173 7484 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d6659f685-v5vf6"
Mar 12 20:50:32.181443 master-0 kubenswrapper[7484]: I0312 20:50:32.181142 7484 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c8884dcfd-psljl"
Mar 12 20:50:32.245314 master-0 kubenswrapper[7484]: I0312 20:50:32.245273 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f59015c-1312-4c6b-9870-de426ad52bc8-client-ca\") pod \"0f59015c-1312-4c6b-9870-de426ad52bc8\" (UID: \"0f59015c-1312-4c6b-9870-de426ad52bc8\") "
Mar 12 20:50:32.245776 master-0 kubenswrapper[7484]: I0312 20:50:32.245363 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f59015c-1312-4c6b-9870-de426ad52bc8-config\") pod \"0f59015c-1312-4c6b-9870-de426ad52bc8\" (UID: \"0f59015c-1312-4c6b-9870-de426ad52bc8\") "
Mar 12 20:50:32.245776 master-0 kubenswrapper[7484]: I0312 20:50:32.245406 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm8lx\" (UniqueName: \"kubernetes.io/projected/0f59015c-1312-4c6b-9870-de426ad52bc8-kube-api-access-vm8lx\") pod \"0f59015c-1312-4c6b-9870-de426ad52bc8\" (UID: \"0f59015c-1312-4c6b-9870-de426ad52bc8\") "
Mar 12 20:50:32.245776 master-0 kubenswrapper[7484]: I0312 20:50:32.245433 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f59015c-1312-4c6b-9870-de426ad52bc8-serving-cert\") pod \"0f59015c-1312-4c6b-9870-de426ad52bc8\" (UID: \"0f59015c-1312-4c6b-9870-de426ad52bc8\") "
Mar 12 20:50:32.246434 master-0 kubenswrapper[7484]: I0312 20:50:32.246090 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f59015c-1312-4c6b-9870-de426ad52bc8-client-ca" (OuterVolumeSpecName: "client-ca") pod "0f59015c-1312-4c6b-9870-de426ad52bc8" (UID: "0f59015c-1312-4c6b-9870-de426ad52bc8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 20:50:32.246554 master-0 kubenswrapper[7484]: I0312 20:50:32.246531 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f59015c-1312-4c6b-9870-de426ad52bc8-config" (OuterVolumeSpecName: "config") pod "0f59015c-1312-4c6b-9870-de426ad52bc8" (UID: "0f59015c-1312-4c6b-9870-de426ad52bc8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 20:50:32.246640 master-0 kubenswrapper[7484]: I0312 20:50:32.246595 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0f59015c-1312-4c6b-9870-de426ad52bc8-proxy-ca-bundles\") pod \"0f59015c-1312-4c6b-9870-de426ad52bc8\" (UID: \"0f59015c-1312-4c6b-9870-de426ad52bc8\") "
Mar 12 20:50:32.248303 master-0 kubenswrapper[7484]: I0312 20:50:32.248253 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f59015c-1312-4c6b-9870-de426ad52bc8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0f59015c-1312-4c6b-9870-de426ad52bc8" (UID: "0f59015c-1312-4c6b-9870-de426ad52bc8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 20:50:32.252516 master-0 kubenswrapper[7484]: I0312 20:50:32.249966 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/83368183-0368-44b1-9387-eed32b211988-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-g4bkd\" (UID: \"83368183-0368-44b1-9387-eed32b211988\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-g4bkd"
Mar 12 20:50:32.252516 master-0 kubenswrapper[7484]: I0312 20:50:32.250055 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/83368183-0368-44b1-9387-eed32b211988-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-g4bkd\" (UID: \"83368183-0368-44b1-9387-eed32b211988\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-g4bkd"
Mar 12 20:50:32.252516 master-0 kubenswrapper[7484]: I0312 20:50:32.250093 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83368183-0368-44b1-9387-eed32b211988-service-ca\") pod \"cluster-version-operator-8c9c967c7-g4bkd\" (UID: \"83368183-0368-44b1-9387-eed32b211988\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-g4bkd"
Mar 12 20:50:32.252516 master-0 kubenswrapper[7484]: I0312 20:50:32.250135 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83368183-0368-44b1-9387-eed32b211988-serving-cert\") pod \"cluster-version-operator-8c9c967c7-g4bkd\" (UID: \"83368183-0368-44b1-9387-eed32b211988\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-g4bkd"
Mar 12 20:50:32.252516 master-0 kubenswrapper[7484]: I0312 20:50:32.250159 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\"
(UniqueName: \"kubernetes.io/projected/83368183-0368-44b1-9387-eed32b211988-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-g4bkd\" (UID: \"83368183-0368-44b1-9387-eed32b211988\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-g4bkd" Mar 12 20:50:32.252516 master-0 kubenswrapper[7484]: I0312 20:50:32.250236 7484 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0f59015c-1312-4c6b-9870-de426ad52bc8-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 12 20:50:32.252516 master-0 kubenswrapper[7484]: I0312 20:50:32.250248 7484 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0f59015c-1312-4c6b-9870-de426ad52bc8-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 20:50:32.252516 master-0 kubenswrapper[7484]: I0312 20:50:32.250258 7484 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f59015c-1312-4c6b-9870-de426ad52bc8-config\") on node \"master-0\" DevicePath \"\"" Mar 12 20:50:32.252516 master-0 kubenswrapper[7484]: I0312 20:50:32.250551 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/83368183-0368-44b1-9387-eed32b211988-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-g4bkd\" (UID: \"83368183-0368-44b1-9387-eed32b211988\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-g4bkd" Mar 12 20:50:32.252516 master-0 kubenswrapper[7484]: I0312 20:50:32.250585 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/83368183-0368-44b1-9387-eed32b211988-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-g4bkd\" (UID: \"83368183-0368-44b1-9387-eed32b211988\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-g4bkd" Mar 12 
20:50:32.252516 master-0 kubenswrapper[7484]: I0312 20:50:32.252450 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f59015c-1312-4c6b-9870-de426ad52bc8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0f59015c-1312-4c6b-9870-de426ad52bc8" (UID: "0f59015c-1312-4c6b-9870-de426ad52bc8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 20:50:32.254744 master-0 kubenswrapper[7484]: I0312 20:50:32.254702 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83368183-0368-44b1-9387-eed32b211988-serving-cert\") pod \"cluster-version-operator-8c9c967c7-g4bkd\" (UID: \"83368183-0368-44b1-9387-eed32b211988\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-g4bkd" Mar 12 20:50:32.262930 master-0 kubenswrapper[7484]: I0312 20:50:32.256097 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f59015c-1312-4c6b-9870-de426ad52bc8-kube-api-access-vm8lx" (OuterVolumeSpecName: "kube-api-access-vm8lx") pod "0f59015c-1312-4c6b-9870-de426ad52bc8" (UID: "0f59015c-1312-4c6b-9870-de426ad52bc8"). InnerVolumeSpecName "kube-api-access-vm8lx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 20:50:32.272584 master-0 kubenswrapper[7484]: I0312 20:50:32.272510 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83368183-0368-44b1-9387-eed32b211988-service-ca\") pod \"cluster-version-operator-8c9c967c7-g4bkd\" (UID: \"83368183-0368-44b1-9387-eed32b211988\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-g4bkd" Mar 12 20:50:32.274332 master-0 kubenswrapper[7484]: I0312 20:50:32.274301 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/83368183-0368-44b1-9387-eed32b211988-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-g4bkd\" (UID: \"83368183-0368-44b1-9387-eed32b211988\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-g4bkd" Mar 12 20:50:32.351051 master-0 kubenswrapper[7484]: I0312 20:50:32.351001 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddrwj\" (UniqueName: \"kubernetes.io/projected/03748a30-dc0a-4804-b653-12ddc3cfb90b-kube-api-access-ddrwj\") pod \"03748a30-dc0a-4804-b653-12ddc3cfb90b\" (UID: \"03748a30-dc0a-4804-b653-12ddc3cfb90b\") " Mar 12 20:50:32.351275 master-0 kubenswrapper[7484]: I0312 20:50:32.351064 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03748a30-dc0a-4804-b653-12ddc3cfb90b-serving-cert\") pod \"03748a30-dc0a-4804-b653-12ddc3cfb90b\" (UID: \"03748a30-dc0a-4804-b653-12ddc3cfb90b\") " Mar 12 20:50:32.351275 master-0 kubenswrapper[7484]: I0312 20:50:32.351092 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03748a30-dc0a-4804-b653-12ddc3cfb90b-config\") pod \"03748a30-dc0a-4804-b653-12ddc3cfb90b\" (UID: 
\"03748a30-dc0a-4804-b653-12ddc3cfb90b\") " Mar 12 20:50:32.351275 master-0 kubenswrapper[7484]: I0312 20:50:32.351134 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03748a30-dc0a-4804-b653-12ddc3cfb90b-client-ca\") pod \"03748a30-dc0a-4804-b653-12ddc3cfb90b\" (UID: \"03748a30-dc0a-4804-b653-12ddc3cfb90b\") " Mar 12 20:50:32.352355 master-0 kubenswrapper[7484]: I0312 20:50:32.351336 7484 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm8lx\" (UniqueName: \"kubernetes.io/projected/0f59015c-1312-4c6b-9870-de426ad52bc8-kube-api-access-vm8lx\") on node \"master-0\" DevicePath \"\"" Mar 12 20:50:32.352355 master-0 kubenswrapper[7484]: I0312 20:50:32.351347 7484 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f59015c-1312-4c6b-9870-de426ad52bc8-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 20:50:32.352355 master-0 kubenswrapper[7484]: I0312 20:50:32.351686 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03748a30-dc0a-4804-b653-12ddc3cfb90b-client-ca" (OuterVolumeSpecName: "client-ca") pod "03748a30-dc0a-4804-b653-12ddc3cfb90b" (UID: "03748a30-dc0a-4804-b653-12ddc3cfb90b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 20:50:32.359402 master-0 kubenswrapper[7484]: I0312 20:50:32.359345 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03748a30-dc0a-4804-b653-12ddc3cfb90b-config" (OuterVolumeSpecName: "config") pod "03748a30-dc0a-4804-b653-12ddc3cfb90b" (UID: "03748a30-dc0a-4804-b653-12ddc3cfb90b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 20:50:32.359595 master-0 kubenswrapper[7484]: I0312 20:50:32.359565 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-g4bkd" Mar 12 20:50:32.365120 master-0 kubenswrapper[7484]: I0312 20:50:32.362073 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03748a30-dc0a-4804-b653-12ddc3cfb90b-kube-api-access-ddrwj" (OuterVolumeSpecName: "kube-api-access-ddrwj") pod "03748a30-dc0a-4804-b653-12ddc3cfb90b" (UID: "03748a30-dc0a-4804-b653-12ddc3cfb90b"). InnerVolumeSpecName "kube-api-access-ddrwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 20:50:32.365120 master-0 kubenswrapper[7484]: I0312 20:50:32.364124 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03748a30-dc0a-4804-b653-12ddc3cfb90b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "03748a30-dc0a-4804-b653-12ddc3cfb90b" (UID: "03748a30-dc0a-4804-b653-12ddc3cfb90b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 20:50:32.454394 master-0 kubenswrapper[7484]: I0312 20:50:32.454324 7484 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03748a30-dc0a-4804-b653-12ddc3cfb90b-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 20:50:32.454394 master-0 kubenswrapper[7484]: I0312 20:50:32.454357 7484 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddrwj\" (UniqueName: \"kubernetes.io/projected/03748a30-dc0a-4804-b653-12ddc3cfb90b-kube-api-access-ddrwj\") on node \"master-0\" DevicePath \"\"" Mar 12 20:50:32.454394 master-0 kubenswrapper[7484]: I0312 20:50:32.454367 7484 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03748a30-dc0a-4804-b653-12ddc3cfb90b-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 20:50:32.454394 master-0 kubenswrapper[7484]: I0312 20:50:32.454376 7484 reconciler_common.go:293] "Volume detached for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/03748a30-dc0a-4804-b653-12ddc3cfb90b-config\") on node \"master-0\" DevicePath \"\"" Mar 12 20:50:32.469720 master-0 kubenswrapper[7484]: W0312 20:50:32.469668 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83368183_0368_44b1_9387_eed32b211988.slice/crio-6d3cc45d111f33e3f3fcc00ad24e6a827694e4469e606ceb048673100ef08c81 WatchSource:0}: Error finding container 6d3cc45d111f33e3f3fcc00ad24e6a827694e4469e606ceb048673100ef08c81: Status 404 returned error can't find the container with id 6d3cc45d111f33e3f3fcc00ad24e6a827694e4469e606ceb048673100ef08c81 Mar 12 20:50:32.511925 master-0 kubenswrapper[7484]: I0312 20:50:32.511414 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:32.521524 master-0 kubenswrapper[7484]: I0312 20:50:32.521250 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 20:50:32.633383 master-0 kubenswrapper[7484]: I0312 20:50:32.632129 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt" event={"ID":"02649264-040a-41a6-9a41-8bf6416c68ff","Type":"ContainerStarted","Data":"2b4db6bfae7d3a6dc44d3409d9f6ab9a2cebbffc5b0c457c2c52619a0694cf6d"} Mar 12 20:50:32.633912 master-0 kubenswrapper[7484]: I0312 20:50:32.633846 7484 generic.go:334] "Generic (PLEG): container finished" podID="0f59015c-1312-4c6b-9870-de426ad52bc8" containerID="5dd76bde522b9612a90b2b6c87a13b0073c9811145c431602802b1917a910952" exitCode=0 Mar 12 20:50:32.633912 master-0 kubenswrapper[7484]: I0312 20:50:32.633898 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d6659f685-v5vf6" 
event={"ID":"0f59015c-1312-4c6b-9870-de426ad52bc8","Type":"ContainerDied","Data":"5dd76bde522b9612a90b2b6c87a13b0073c9811145c431602802b1917a910952"} Mar 12 20:50:32.633982 master-0 kubenswrapper[7484]: I0312 20:50:32.633920 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d6659f685-v5vf6" event={"ID":"0f59015c-1312-4c6b-9870-de426ad52bc8","Type":"ContainerDied","Data":"b66ca2a58cda7fee672cfd544fbb9b288feec97fbc12fdb3c7d9f9d8bddd5735"} Mar 12 20:50:32.633982 master-0 kubenswrapper[7484]: I0312 20:50:32.633942 7484 scope.go:117] "RemoveContainer" containerID="5dd76bde522b9612a90b2b6c87a13b0073c9811145c431602802b1917a910952" Mar 12 20:50:32.638471 master-0 kubenswrapper[7484]: I0312 20:50:32.634047 7484 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d6659f685-v5vf6" Mar 12 20:50:32.638471 master-0 kubenswrapper[7484]: I0312 20:50:32.638260 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-brdcd" event={"ID":"c8660437-633f-4132-8a61-fe998abb493e","Type":"ContainerStarted","Data":"1df82d04267fbf3effc4dc2adafee15220cca32ae5c253d92c04cd4cb612adbb"} Mar 12 20:50:32.641642 master-0 kubenswrapper[7484]: I0312 20:50:32.641603 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" event={"ID":"e624e623-6d59-444d-b548-165fa5fd2581","Type":"ContainerStarted","Data":"2d7932f9200cfcc46a818b87f2e758dc323d7be1734436d6a1a8927b3aea1adf"} Mar 12 20:50:32.642127 master-0 kubenswrapper[7484]: I0312 20:50:32.641909 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" Mar 12 20:50:32.645436 master-0 kubenswrapper[7484]: I0312 20:50:32.645414 7484 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_d7112e2f-17a5-4d98-b410-fb9d9461e8d2/installer/0.log" Mar 12 20:50:32.645872 master-0 kubenswrapper[7484]: I0312 20:50:32.645526 7484 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-hxqgw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.9:8080/healthz\": dial tcp 10.128.0.9:8080: connect: connection refused" start-of-body= Mar 12 20:50:32.645872 master-0 kubenswrapper[7484]: I0312 20:50:32.645577 7484 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" podUID="e624e623-6d59-444d-b548-165fa5fd2581" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.9:8080/healthz\": dial tcp 10.128.0.9:8080: connect: connection refused" Mar 12 20:50:32.645872 master-0 kubenswrapper[7484]: I0312 20:50:32.645536 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"d7112e2f-17a5-4d98-b410-fb9d9461e8d2","Type":"ContainerDied","Data":"82d297a9ce5daf847e4e2fbf19739e9cce03ee1eb2c97f5119a66d117ecf9649"} Mar 12 20:50:32.645872 master-0 kubenswrapper[7484]: I0312 20:50:32.645540 7484 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 12 20:50:32.665102 master-0 kubenswrapper[7484]: I0312 20:50:32.660289 7484 generic.go:334] "Generic (PLEG): container finished" podID="03748a30-dc0a-4804-b653-12ddc3cfb90b" containerID="516e4a439d4615c02b2e1f89cc6ec93653e5c23c90d5801def6ddace2b3370c9" exitCode=0 Mar 12 20:50:32.665102 master-0 kubenswrapper[7484]: I0312 20:50:32.660340 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c8884dcfd-psljl" event={"ID":"03748a30-dc0a-4804-b653-12ddc3cfb90b","Type":"ContainerDied","Data":"516e4a439d4615c02b2e1f89cc6ec93653e5c23c90d5801def6ddace2b3370c9"} Mar 12 20:50:32.665102 master-0 kubenswrapper[7484]: I0312 20:50:32.660427 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c8884dcfd-psljl" event={"ID":"03748a30-dc0a-4804-b653-12ddc3cfb90b","Type":"ContainerDied","Data":"89842820602b3f72aeb63fe6d750da0cc64cd69ab229df72a18b8463d012ba5f"} Mar 12 20:50:32.665102 master-0 kubenswrapper[7484]: I0312 20:50:32.660476 7484 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c8884dcfd-psljl" Mar 12 20:50:32.681389 master-0 kubenswrapper[7484]: I0312 20:50:32.681216 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-98j9w" event={"ID":"f8f4400c-474c-480f-b46c-cf7c80555004","Type":"ContainerStarted","Data":"5d43c250b5491225f8ee7e26898d34d724cb99521d528bed5880450148f60c8b"} Mar 12 20:50:32.681389 master-0 kubenswrapper[7484]: I0312 20:50:32.681258 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-98j9w" event={"ID":"f8f4400c-474c-480f-b46c-cf7c80555004","Type":"ContainerStarted","Data":"f354e2ce5026487f56a9c2480c5f171a3fa137d3fef2ad82947d875089621462"} Mar 12 20:50:32.726440 master-0 kubenswrapper[7484]: I0312 20:50:32.724155 7484 scope.go:117] "RemoveContainer" containerID="5dd76bde522b9612a90b2b6c87a13b0073c9811145c431602802b1917a910952" Mar 12 20:50:32.726440 master-0 kubenswrapper[7484]: I0312 20:50:32.724197 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"5bec49ae-0c52-451f-8d8d-6e822cd335cc","Type":"ContainerStarted","Data":"2896c83cb0813f6e8f8445e2f7c57b60f7ca523d7afd776f565c9b3ac5269151"} Mar 12 20:50:32.726440 master-0 kubenswrapper[7484]: I0312 20:50:32.724247 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"5bec49ae-0c52-451f-8d8d-6e822cd335cc","Type":"ContainerStarted","Data":"98878f1c22a55e47341e985f394158eb059ac971b614446c313279ea87ff3ce0"} Mar 12 20:50:32.737443 master-0 kubenswrapper[7484]: E0312 20:50:32.731998 7484 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dd76bde522b9612a90b2b6c87a13b0073c9811145c431602802b1917a910952\": container with ID starting with 
5dd76bde522b9612a90b2b6c87a13b0073c9811145c431602802b1917a910952 not found: ID does not exist" containerID="5dd76bde522b9612a90b2b6c87a13b0073c9811145c431602802b1917a910952" Mar 12 20:50:32.737443 master-0 kubenswrapper[7484]: I0312 20:50:32.732053 7484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dd76bde522b9612a90b2b6c87a13b0073c9811145c431602802b1917a910952"} err="failed to get container status \"5dd76bde522b9612a90b2b6c87a13b0073c9811145c431602802b1917a910952\": rpc error: code = NotFound desc = could not find container \"5dd76bde522b9612a90b2b6c87a13b0073c9811145c431602802b1917a910952\": container with ID starting with 5dd76bde522b9612a90b2b6c87a13b0073c9811145c431602802b1917a910952 not found: ID does not exist" Mar 12 20:50:32.737443 master-0 kubenswrapper[7484]: I0312 20:50:32.732082 7484 scope.go:117] "RemoveContainer" containerID="b9b8234c9d90d3b5fbdf126478ee7b3289630b60bd0893e5d9d337aa7564482c" Mar 12 20:50:32.737443 master-0 kubenswrapper[7484]: I0312 20:50:32.736181 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"869e3d2a-1b5c-426f-945a-ddd44a9a5033","Type":"ContainerStarted","Data":"57edb20a691b07071028f2edb064ac37f76c164057bb37d7d87a25a08a74d8a6"} Mar 12 20:50:32.747139 master-0 kubenswrapper[7484]: I0312 20:50:32.747068 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-g4bkd" event={"ID":"83368183-0368-44b1-9387-eed32b211988","Type":"ContainerStarted","Data":"6d3cc45d111f33e3f3fcc00ad24e6a827694e4469e606ceb048673100ef08c81"} Mar 12 20:50:32.761506 master-0 kubenswrapper[7484]: I0312 20:50:32.761360 7484 scope.go:117] "RemoveContainer" containerID="516e4a439d4615c02b2e1f89cc6ec93653e5c23c90d5801def6ddace2b3370c9" Mar 12 20:50:32.802658 master-0 kubenswrapper[7484]: I0312 20:50:32.802436 7484 scope.go:117] "RemoveContainer" 
containerID="516e4a439d4615c02b2e1f89cc6ec93653e5c23c90d5801def6ddace2b3370c9" Mar 12 20:50:32.805293 master-0 kubenswrapper[7484]: I0312 20:50:32.805238 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 12 20:50:32.805889 master-0 kubenswrapper[7484]: E0312 20:50:32.805866 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03748a30-dc0a-4804-b653-12ddc3cfb90b" containerName="route-controller-manager" Mar 12 20:50:32.805937 master-0 kubenswrapper[7484]: I0312 20:50:32.805920 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="03748a30-dc0a-4804-b653-12ddc3cfb90b" containerName="route-controller-manager" Mar 12 20:50:32.805976 master-0 kubenswrapper[7484]: E0312 20:50:32.805951 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f59015c-1312-4c6b-9870-de426ad52bc8" containerName="controller-manager" Mar 12 20:50:32.805976 master-0 kubenswrapper[7484]: I0312 20:50:32.805960 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f59015c-1312-4c6b-9870-de426ad52bc8" containerName="controller-manager" Mar 12 20:50:32.806224 master-0 kubenswrapper[7484]: I0312 20:50:32.806195 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="03748a30-dc0a-4804-b653-12ddc3cfb90b" containerName="route-controller-manager" Mar 12 20:50:32.806268 master-0 kubenswrapper[7484]: I0312 20:50:32.806221 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f59015c-1312-4c6b-9870-de426ad52bc8" containerName="controller-manager" Mar 12 20:50:32.806882 master-0 kubenswrapper[7484]: I0312 20:50:32.806855 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 12 20:50:32.807619 master-0 kubenswrapper[7484]: I0312 20:50:32.807589 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d6659f685-v5vf6"] Mar 12 20:50:32.816368 master-0 kubenswrapper[7484]: I0312 20:50:32.816308 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-dhrfh" Mar 12 20:50:32.824297 master-0 kubenswrapper[7484]: E0312 20:50:32.822644 7484 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"516e4a439d4615c02b2e1f89cc6ec93653e5c23c90d5801def6ddace2b3370c9\": container with ID starting with 516e4a439d4615c02b2e1f89cc6ec93653e5c23c90d5801def6ddace2b3370c9 not found: ID does not exist" containerID="516e4a439d4615c02b2e1f89cc6ec93653e5c23c90d5801def6ddace2b3370c9" Mar 12 20:50:32.824297 master-0 kubenswrapper[7484]: I0312 20:50:32.822724 7484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"516e4a439d4615c02b2e1f89cc6ec93653e5c23c90d5801def6ddace2b3370c9"} err="failed to get container status \"516e4a439d4615c02b2e1f89cc6ec93653e5c23c90d5801def6ddace2b3370c9\": rpc error: code = NotFound desc = could not find container \"516e4a439d4615c02b2e1f89cc6ec93653e5c23c90d5801def6ddace2b3370c9\": container with ID starting with 516e4a439d4615c02b2e1f89cc6ec93653e5c23c90d5801def6ddace2b3370c9 not found: ID does not exist" Mar 12 20:50:32.824297 master-0 kubenswrapper[7484]: I0312 20:50:32.822779 7484 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5d6659f685-v5vf6"] Mar 12 20:50:32.840838 master-0 kubenswrapper[7484]: I0312 20:50:32.840094 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 12 20:50:32.845471 master-0 kubenswrapper[7484]: 
I0312 20:50:32.845437 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 12 20:50:32.851426 master-0 kubenswrapper[7484]: I0312 20:50:32.851332 7484 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 12 20:50:32.851515 master-0 kubenswrapper[7484]: I0312 20:50:32.851471 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c8884dcfd-psljl"] Mar 12 20:50:32.851669 master-0 kubenswrapper[7484]: I0312 20:50:32.851631 7484 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c8884dcfd-psljl"] Mar 12 20:50:32.860736 master-0 kubenswrapper[7484]: I0312 20:50:32.860653 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-1-master-0" podStartSLOduration=3.860628503 podStartE2EDuration="3.860628503s" podCreationTimestamp="2026-03-12 20:50:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 20:50:32.859852021 +0000 UTC m=+45.345120823" watchObservedRunningTime="2026-03-12 20:50:32.860628503 +0000 UTC m=+45.345897295" Mar 12 20:50:32.963390 master-0 kubenswrapper[7484]: I0312 20:50:32.962419 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/22780dc9-2961-4b5f-aa74-d76ff4f888f6-var-lock\") pod \"installer-3-master-0\" (UID: \"22780dc9-2961-4b5f-aa74-d76ff4f888f6\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 12 20:50:32.963390 master-0 kubenswrapper[7484]: I0312 20:50:32.962492 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/22780dc9-2961-4b5f-aa74-d76ff4f888f6-kube-api-access\") pod \"installer-3-master-0\" (UID: \"22780dc9-2961-4b5f-aa74-d76ff4f888f6\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 12 20:50:32.963390 master-0 kubenswrapper[7484]: I0312 20:50:32.962532 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22780dc9-2961-4b5f-aa74-d76ff4f888f6-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"22780dc9-2961-4b5f-aa74-d76ff4f888f6\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 12 20:50:33.066348 master-0 kubenswrapper[7484]: I0312 20:50:33.065177 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/22780dc9-2961-4b5f-aa74-d76ff4f888f6-var-lock\") pod \"installer-3-master-0\" (UID: \"22780dc9-2961-4b5f-aa74-d76ff4f888f6\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 12 20:50:33.066348 master-0 kubenswrapper[7484]: I0312 20:50:33.065229 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22780dc9-2961-4b5f-aa74-d76ff4f888f6-kube-api-access\") pod \"installer-3-master-0\" (UID: \"22780dc9-2961-4b5f-aa74-d76ff4f888f6\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 12 20:50:33.066348 master-0 kubenswrapper[7484]: I0312 20:50:33.065256 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22780dc9-2961-4b5f-aa74-d76ff4f888f6-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"22780dc9-2961-4b5f-aa74-d76ff4f888f6\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 12 20:50:33.066348 master-0 kubenswrapper[7484]: I0312 20:50:33.065329 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/22780dc9-2961-4b5f-aa74-d76ff4f888f6-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"22780dc9-2961-4b5f-aa74-d76ff4f888f6\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 12 20:50:33.066348 master-0 kubenswrapper[7484]: I0312 20:50:33.065364 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/22780dc9-2961-4b5f-aa74-d76ff4f888f6-var-lock\") pod \"installer-3-master-0\" (UID: \"22780dc9-2961-4b5f-aa74-d76ff4f888f6\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 12 20:50:33.083477 master-0 kubenswrapper[7484]: I0312 20:50:33.083429 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22780dc9-2961-4b5f-aa74-d76ff4f888f6-kube-api-access\") pod \"installer-3-master-0\" (UID: \"22780dc9-2961-4b5f-aa74-d76ff4f888f6\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 12 20:50:33.145913 master-0 kubenswrapper[7484]: I0312 20:50:33.145779 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 12 20:50:33.244970 master-0 kubenswrapper[7484]: I0312 20:50:33.241842 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-657bd6d846-tffzs"] Mar 12 20:50:33.244970 master-0 kubenswrapper[7484]: I0312 20:50:33.242437 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-657bd6d846-tffzs" Mar 12 20:50:33.244970 master-0 kubenswrapper[7484]: I0312 20:50:33.243175 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86"] Mar 12 20:50:33.244970 master-0 kubenswrapper[7484]: I0312 20:50:33.243713 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" Mar 12 20:50:33.250405 master-0 kubenswrapper[7484]: I0312 20:50:33.246247 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 20:50:33.250405 master-0 kubenswrapper[7484]: I0312 20:50:33.246555 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 20:50:33.250405 master-0 kubenswrapper[7484]: I0312 20:50:33.246724 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 20:50:33.250405 master-0 kubenswrapper[7484]: I0312 20:50:33.248371 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 20:50:33.250405 master-0 kubenswrapper[7484]: I0312 20:50:33.248649 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 20:50:33.250405 master-0 kubenswrapper[7484]: I0312 20:50:33.248865 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 20:50:33.250405 master-0 kubenswrapper[7484]: I0312 20:50:33.249516 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 20:50:33.250405 master-0 kubenswrapper[7484]: I0312 20:50:33.249652 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 20:50:33.250405 master-0 kubenswrapper[7484]: I0312 20:50:33.249785 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 20:50:33.250713 master-0 kubenswrapper[7484]: I0312 20:50:33.250489 7484 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"client-ca" Mar 12 20:50:33.252405 master-0 kubenswrapper[7484]: I0312 20:50:33.252379 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 20:50:33.274358 master-0 kubenswrapper[7484]: I0312 20:50:33.274080 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-657bd6d846-tffzs"] Mar 12 20:50:33.276180 master-0 kubenswrapper[7484]: I0312 20:50:33.276146 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86"] Mar 12 20:50:33.368542 master-0 kubenswrapper[7484]: I0312 20:50:33.368488 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfkv8\" (UniqueName: \"kubernetes.io/projected/6d28f095-032b-47d4-b808-1502deeffee5-kube-api-access-bfkv8\") pod \"controller-manager-6dfdd9fb89-wjn86\" (UID: \"6d28f095-032b-47d4-b808-1502deeffee5\") " pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" Mar 12 20:50:33.368542 master-0 kubenswrapper[7484]: I0312 20:50:33.368546 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6d28f095-032b-47d4-b808-1502deeffee5-proxy-ca-bundles\") pod \"controller-manager-6dfdd9fb89-wjn86\" (UID: \"6d28f095-032b-47d4-b808-1502deeffee5\") " pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" Mar 12 20:50:33.368759 master-0 kubenswrapper[7484]: I0312 20:50:33.368570 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6ab546f-a3fa-44dc-9c83-30a376880f14-serving-cert\") pod \"route-controller-manager-657bd6d846-tffzs\" (UID: \"b6ab546f-a3fa-44dc-9c83-30a376880f14\") " 
pod="openshift-route-controller-manager/route-controller-manager-657bd6d846-tffzs" Mar 12 20:50:33.368759 master-0 kubenswrapper[7484]: I0312 20:50:33.368591 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6ab546f-a3fa-44dc-9c83-30a376880f14-config\") pod \"route-controller-manager-657bd6d846-tffzs\" (UID: \"b6ab546f-a3fa-44dc-9c83-30a376880f14\") " pod="openshift-route-controller-manager/route-controller-manager-657bd6d846-tffzs" Mar 12 20:50:33.368759 master-0 kubenswrapper[7484]: I0312 20:50:33.368607 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d28f095-032b-47d4-b808-1502deeffee5-serving-cert\") pod \"controller-manager-6dfdd9fb89-wjn86\" (UID: \"6d28f095-032b-47d4-b808-1502deeffee5\") " pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" Mar 12 20:50:33.368759 master-0 kubenswrapper[7484]: I0312 20:50:33.368624 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6ab546f-a3fa-44dc-9c83-30a376880f14-client-ca\") pod \"route-controller-manager-657bd6d846-tffzs\" (UID: \"b6ab546f-a3fa-44dc-9c83-30a376880f14\") " pod="openshift-route-controller-manager/route-controller-manager-657bd6d846-tffzs" Mar 12 20:50:33.368759 master-0 kubenswrapper[7484]: I0312 20:50:33.368637 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d28f095-032b-47d4-b808-1502deeffee5-config\") pod \"controller-manager-6dfdd9fb89-wjn86\" (UID: \"6d28f095-032b-47d4-b808-1502deeffee5\") " pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" Mar 12 20:50:33.368759 master-0 kubenswrapper[7484]: I0312 20:50:33.368663 7484 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6d28f095-032b-47d4-b808-1502deeffee5-client-ca\") pod \"controller-manager-6dfdd9fb89-wjn86\" (UID: \"6d28f095-032b-47d4-b808-1502deeffee5\") " pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" Mar 12 20:50:33.368759 master-0 kubenswrapper[7484]: I0312 20:50:33.368684 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwrjr\" (UniqueName: \"kubernetes.io/projected/b6ab546f-a3fa-44dc-9c83-30a376880f14-kube-api-access-gwrjr\") pod \"route-controller-manager-657bd6d846-tffzs\" (UID: \"b6ab546f-a3fa-44dc-9c83-30a376880f14\") " pod="openshift-route-controller-manager/route-controller-manager-657bd6d846-tffzs" Mar 12 20:50:33.470335 master-0 kubenswrapper[7484]: I0312 20:50:33.470270 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6d28f095-032b-47d4-b808-1502deeffee5-client-ca\") pod \"controller-manager-6dfdd9fb89-wjn86\" (UID: \"6d28f095-032b-47d4-b808-1502deeffee5\") " pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" Mar 12 20:50:33.470335 master-0 kubenswrapper[7484]: I0312 20:50:33.470341 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwrjr\" (UniqueName: \"kubernetes.io/projected/b6ab546f-a3fa-44dc-9c83-30a376880f14-kube-api-access-gwrjr\") pod \"route-controller-manager-657bd6d846-tffzs\" (UID: \"b6ab546f-a3fa-44dc-9c83-30a376880f14\") " pod="openshift-route-controller-manager/route-controller-manager-657bd6d846-tffzs" Mar 12 20:50:33.470600 master-0 kubenswrapper[7484]: I0312 20:50:33.470531 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfkv8\" (UniqueName: \"kubernetes.io/projected/6d28f095-032b-47d4-b808-1502deeffee5-kube-api-access-bfkv8\") pod 
\"controller-manager-6dfdd9fb89-wjn86\" (UID: \"6d28f095-032b-47d4-b808-1502deeffee5\") " pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" Mar 12 20:50:33.470648 master-0 kubenswrapper[7484]: I0312 20:50:33.470621 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6d28f095-032b-47d4-b808-1502deeffee5-proxy-ca-bundles\") pod \"controller-manager-6dfdd9fb89-wjn86\" (UID: \"6d28f095-032b-47d4-b808-1502deeffee5\") " pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" Mar 12 20:50:33.470648 master-0 kubenswrapper[7484]: I0312 20:50:33.470644 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6ab546f-a3fa-44dc-9c83-30a376880f14-serving-cert\") pod \"route-controller-manager-657bd6d846-tffzs\" (UID: \"b6ab546f-a3fa-44dc-9c83-30a376880f14\") " pod="openshift-route-controller-manager/route-controller-manager-657bd6d846-tffzs" Mar 12 20:50:33.470727 master-0 kubenswrapper[7484]: I0312 20:50:33.470683 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6ab546f-a3fa-44dc-9c83-30a376880f14-config\") pod \"route-controller-manager-657bd6d846-tffzs\" (UID: \"b6ab546f-a3fa-44dc-9c83-30a376880f14\") " pod="openshift-route-controller-manager/route-controller-manager-657bd6d846-tffzs" Mar 12 20:50:33.470727 master-0 kubenswrapper[7484]: I0312 20:50:33.470709 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d28f095-032b-47d4-b808-1502deeffee5-serving-cert\") pod \"controller-manager-6dfdd9fb89-wjn86\" (UID: \"6d28f095-032b-47d4-b808-1502deeffee5\") " pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" Mar 12 20:50:33.470837 master-0 kubenswrapper[7484]: I0312 20:50:33.470737 7484 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6ab546f-a3fa-44dc-9c83-30a376880f14-client-ca\") pod \"route-controller-manager-657bd6d846-tffzs\" (UID: \"b6ab546f-a3fa-44dc-9c83-30a376880f14\") " pod="openshift-route-controller-manager/route-controller-manager-657bd6d846-tffzs" Mar 12 20:50:33.470837 master-0 kubenswrapper[7484]: I0312 20:50:33.470752 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d28f095-032b-47d4-b808-1502deeffee5-config\") pod \"controller-manager-6dfdd9fb89-wjn86\" (UID: \"6d28f095-032b-47d4-b808-1502deeffee5\") " pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" Mar 12 20:50:33.471438 master-0 kubenswrapper[7484]: I0312 20:50:33.471402 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6d28f095-032b-47d4-b808-1502deeffee5-client-ca\") pod \"controller-manager-6dfdd9fb89-wjn86\" (UID: \"6d28f095-032b-47d4-b808-1502deeffee5\") " pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" Mar 12 20:50:33.472148 master-0 kubenswrapper[7484]: I0312 20:50:33.472116 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6d28f095-032b-47d4-b808-1502deeffee5-proxy-ca-bundles\") pod \"controller-manager-6dfdd9fb89-wjn86\" (UID: \"6d28f095-032b-47d4-b808-1502deeffee5\") " pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" Mar 12 20:50:33.472404 master-0 kubenswrapper[7484]: I0312 20:50:33.472380 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6ab546f-a3fa-44dc-9c83-30a376880f14-client-ca\") pod \"route-controller-manager-657bd6d846-tffzs\" (UID: \"b6ab546f-a3fa-44dc-9c83-30a376880f14\") " 
pod="openshift-route-controller-manager/route-controller-manager-657bd6d846-tffzs" Mar 12 20:50:33.472497 master-0 kubenswrapper[7484]: I0312 20:50:33.472405 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6ab546f-a3fa-44dc-9c83-30a376880f14-config\") pod \"route-controller-manager-657bd6d846-tffzs\" (UID: \"b6ab546f-a3fa-44dc-9c83-30a376880f14\") " pod="openshift-route-controller-manager/route-controller-manager-657bd6d846-tffzs" Mar 12 20:50:33.473066 master-0 kubenswrapper[7484]: I0312 20:50:33.473016 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d28f095-032b-47d4-b808-1502deeffee5-config\") pod \"controller-manager-6dfdd9fb89-wjn86\" (UID: \"6d28f095-032b-47d4-b808-1502deeffee5\") " pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" Mar 12 20:50:33.476183 master-0 kubenswrapper[7484]: I0312 20:50:33.476113 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6ab546f-a3fa-44dc-9c83-30a376880f14-serving-cert\") pod \"route-controller-manager-657bd6d846-tffzs\" (UID: \"b6ab546f-a3fa-44dc-9c83-30a376880f14\") " pod="openshift-route-controller-manager/route-controller-manager-657bd6d846-tffzs" Mar 12 20:50:33.476521 master-0 kubenswrapper[7484]: I0312 20:50:33.476481 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d28f095-032b-47d4-b808-1502deeffee5-serving-cert\") pod \"controller-manager-6dfdd9fb89-wjn86\" (UID: \"6d28f095-032b-47d4-b808-1502deeffee5\") " pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" Mar 12 20:50:33.584067 master-0 kubenswrapper[7484]: I0312 20:50:33.584013 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfkv8\" (UniqueName: 
\"kubernetes.io/projected/6d28f095-032b-47d4-b808-1502deeffee5-kube-api-access-bfkv8\") pod \"controller-manager-6dfdd9fb89-wjn86\" (UID: \"6d28f095-032b-47d4-b808-1502deeffee5\") " pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" Mar 12 20:50:33.584325 master-0 kubenswrapper[7484]: I0312 20:50:33.584278 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwrjr\" (UniqueName: \"kubernetes.io/projected/b6ab546f-a3fa-44dc-9c83-30a376880f14-kube-api-access-gwrjr\") pod \"route-controller-manager-657bd6d846-tffzs\" (UID: \"b6ab546f-a3fa-44dc-9c83-30a376880f14\") " pod="openshift-route-controller-manager/route-controller-manager-657bd6d846-tffzs" Mar 12 20:50:33.586695 master-0 kubenswrapper[7484]: I0312 20:50:33.586650 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-657bd6d846-tffzs" Mar 12 20:50:33.603743 master-0 kubenswrapper[7484]: I0312 20:50:33.603245 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" Mar 12 20:50:33.640016 master-0 kubenswrapper[7484]: I0312 20:50:33.639960 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 12 20:50:33.650286 master-0 kubenswrapper[7484]: W0312 20:50:33.649770 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod22780dc9_2961_4b5f_aa74_d76ff4f888f6.slice/crio-211b963c9aab036e80333f855e64a6822a01dad5e2544e01958111cc75e2717b WatchSource:0}: Error finding container 211b963c9aab036e80333f855e64a6822a01dad5e2544e01958111cc75e2717b: Status 404 returned error can't find the container with id 211b963c9aab036e80333f855e64a6822a01dad5e2544e01958111cc75e2717b Mar 12 20:50:33.763986 master-0 kubenswrapper[7484]: I0312 20:50:33.763903 7484 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03748a30-dc0a-4804-b653-12ddc3cfb90b" path="/var/lib/kubelet/pods/03748a30-dc0a-4804-b653-12ddc3cfb90b/volumes" Mar 12 20:50:33.764703 master-0 kubenswrapper[7484]: I0312 20:50:33.764662 7484 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f59015c-1312-4c6b-9870-de426ad52bc8" path="/var/lib/kubelet/pods/0f59015c-1312-4c6b-9870-de426ad52bc8/volumes" Mar 12 20:50:33.765741 master-0 kubenswrapper[7484]: I0312 20:50:33.765698 7484 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a307172-f010-4bad-a3fc-31607574b069" path="/var/lib/kubelet/pods/1a307172-f010-4bad-a3fc-31607574b069/volumes" Mar 12 20:50:33.766919 master-0 kubenswrapper[7484]: I0312 20:50:33.766879 7484 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7112e2f-17a5-4d98-b410-fb9d9461e8d2" path="/var/lib/kubelet/pods/d7112e2f-17a5-4d98-b410-fb9d9461e8d2/volumes" Mar 12 20:50:33.783779 master-0 kubenswrapper[7484]: I0312 20:50:33.783722 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-brdcd" event={"ID":"c8660437-633f-4132-8a61-fe998abb493e","Type":"ContainerStarted","Data":"356e7a3d9d8829df2080f2733eed2ef3109d8b0825fab6560f753fc5398cfe48"} Mar 12 20:50:33.790437 master-0 kubenswrapper[7484]: I0312 20:50:33.790375 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"869e3d2a-1b5c-426f-945a-ddd44a9a5033","Type":"ContainerStarted","Data":"36bfe1f3ee1124371de60181a0f2b9f61930c3b4af0a3a9413b95d937717a871"} Mar 12 20:50:33.803074 master-0 kubenswrapper[7484]: I0312 20:50:33.802655 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-g4bkd" event={"ID":"83368183-0368-44b1-9387-eed32b211988","Type":"ContainerStarted","Data":"60c092d90b91d6c8d0848adbbe0eb73f3519357eaad109095cf0374d92826012"} Mar 12 20:50:33.855925 master-0 kubenswrapper[7484]: I0312 20:50:33.855741 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-master-0" podStartSLOduration=5.855709776 podStartE2EDuration="5.855709776s" podCreationTimestamp="2026-03-12 20:50:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 20:50:33.84655189 +0000 UTC m=+46.331820692" watchObservedRunningTime="2026-03-12 20:50:33.855709776 +0000 UTC m=+46.340978578" Mar 12 20:50:33.856791 master-0 kubenswrapper[7484]: I0312 20:50:33.856756 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"22780dc9-2961-4b5f-aa74-d76ff4f888f6","Type":"ContainerStarted","Data":"211b963c9aab036e80333f855e64a6822a01dad5e2544e01958111cc75e2717b"} Mar 12 20:50:33.881206 master-0 kubenswrapper[7484]: I0312 20:50:33.881119 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-g4bkd" podStartSLOduration=2.8810771109999997 podStartE2EDuration="2.881077111s" podCreationTimestamp="2026-03-12 20:50:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 20:50:33.878147546 +0000 UTC m=+46.363416348" watchObservedRunningTime="2026-03-12 20:50:33.881077111 +0000 UTC m=+46.366345913" Mar 12 20:50:33.914505 master-0 kubenswrapper[7484]: I0312 20:50:33.908247 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" Mar 12 20:50:34.045453 master-0 kubenswrapper[7484]: I0312 20:50:34.044747 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-657bd6d846-tffzs"] Mar 12 20:50:34.055076 master-0 kubenswrapper[7484]: W0312 20:50:34.055021 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6ab546f_a3fa_44dc_9c83_30a376880f14.slice/crio-7829a5473bca9b592f3720bc91d73e59b3fdfa6a34f4ddae3d51a8c7d8ecc8ba WatchSource:0}: Error finding container 7829a5473bca9b592f3720bc91d73e59b3fdfa6a34f4ddae3d51a8c7d8ecc8ba: Status 404 returned error can't find the container with id 7829a5473bca9b592f3720bc91d73e59b3fdfa6a34f4ddae3d51a8c7d8ecc8ba Mar 12 20:50:34.161638 master-0 kubenswrapper[7484]: I0312 20:50:34.161582 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86"] Mar 12 20:50:34.178769 master-0 kubenswrapper[7484]: W0312 20:50:34.178719 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d28f095_032b_47d4_b808_1502deeffee5.slice/crio-34eb9f39a103adc95e9d813da70dc873fef8ba0c9c9b46fb5eb1ecd38c9046cb WatchSource:0}: Error finding 
container 34eb9f39a103adc95e9d813da70dc873fef8ba0c9c9b46fb5eb1ecd38c9046cb: Status 404 returned error can't find the container with id 34eb9f39a103adc95e9d813da70dc873fef8ba0c9c9b46fb5eb1ecd38c9046cb Mar 12 20:50:34.907877 master-0 kubenswrapper[7484]: I0312 20:50:34.906942 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-657bd6d846-tffzs" event={"ID":"b6ab546f-a3fa-44dc-9c83-30a376880f14","Type":"ContainerStarted","Data":"000152bdbaa6a39e3cd6f5ab2bc3ec2c13b858332e25d8ee0b163cf10cb5a429"} Mar 12 20:50:34.907877 master-0 kubenswrapper[7484]: I0312 20:50:34.906984 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-657bd6d846-tffzs" event={"ID":"b6ab546f-a3fa-44dc-9c83-30a376880f14","Type":"ContainerStarted","Data":"7829a5473bca9b592f3720bc91d73e59b3fdfa6a34f4ddae3d51a8c7d8ecc8ba"} Mar 12 20:50:34.909906 master-0 kubenswrapper[7484]: I0312 20:50:34.909731 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-657bd6d846-tffzs" Mar 12 20:50:34.914919 master-0 kubenswrapper[7484]: I0312 20:50:34.914874 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"22780dc9-2961-4b5f-aa74-d76ff4f888f6","Type":"ContainerStarted","Data":"76b9e896aebc56092a680a58657ee01297cba9b5f5bb8bfcb934b0efde1b3de4"} Mar 12 20:50:34.921843 master-0 kubenswrapper[7484]: I0312 20:50:34.921448 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" event={"ID":"6d28f095-032b-47d4-b808-1502deeffee5","Type":"ContainerStarted","Data":"90f6df2cd5378a3ebab865fb719c69e38e48496ca3cd635c80da9e8ec49ce434"} Mar 12 20:50:34.921843 master-0 kubenswrapper[7484]: I0312 20:50:34.921491 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" event={"ID":"6d28f095-032b-47d4-b808-1502deeffee5","Type":"ContainerStarted","Data":"34eb9f39a103adc95e9d813da70dc873fef8ba0c9c9b46fb5eb1ecd38c9046cb"} Mar 12 20:50:34.921843 master-0 kubenswrapper[7484]: I0312 20:50:34.921505 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" Mar 12 20:50:34.924463 master-0 kubenswrapper[7484]: I0312 20:50:34.924418 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-657bd6d846-tffzs" Mar 12 20:50:34.933426 master-0 kubenswrapper[7484]: I0312 20:50:34.933360 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" Mar 12 20:50:34.933896 master-0 kubenswrapper[7484]: I0312 20:50:34.933768 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-657bd6d846-tffzs" podStartSLOduration=3.933755874 podStartE2EDuration="3.933755874s" podCreationTimestamp="2026-03-12 20:50:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 20:50:34.928885192 +0000 UTC m=+47.414154004" watchObservedRunningTime="2026-03-12 20:50:34.933755874 +0000 UTC m=+47.419024676" Mar 12 20:50:35.005936 master-0 kubenswrapper[7484]: I0312 20:50:35.004036 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" podStartSLOduration=4.00396736 podStartE2EDuration="4.00396736s" podCreationTimestamp="2026-03-12 20:50:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 20:50:34.959983565 
+0000 UTC m=+47.445252367" watchObservedRunningTime="2026-03-12 20:50:35.00396736 +0000 UTC m=+47.489236162" Mar 12 20:50:35.033767 master-0 kubenswrapper[7484]: I0312 20:50:35.033675 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-3-master-0" podStartSLOduration=3.03365168 podStartE2EDuration="3.03365168s" podCreationTimestamp="2026-03-12 20:50:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 20:50:35.030227691 +0000 UTC m=+47.515496493" watchObservedRunningTime="2026-03-12 20:50:35.03365168 +0000 UTC m=+47.518920482" Mar 12 20:50:35.555248 master-0 kubenswrapper[7484]: I0312 20:50:35.551717 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-pp258" Mar 12 20:50:39.978439 master-0 kubenswrapper[7484]: I0312 20:50:39.978321 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 12 20:50:39.978977 master-0 kubenswrapper[7484]: I0312 20:50:39.978553 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-3-master-0" podUID="22780dc9-2961-4b5f-aa74-d76ff4f888f6" containerName="installer" containerID="cri-o://76b9e896aebc56092a680a58657ee01297cba9b5f5bb8bfcb934b0efde1b3de4" gracePeriod=30 Mar 12 20:50:40.948229 master-0 kubenswrapper[7484]: I0312 20:50:40.948092 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_22780dc9-2961-4b5f-aa74-d76ff4f888f6/installer/0.log" Mar 12 20:50:40.948229 master-0 kubenswrapper[7484]: I0312 20:50:40.948160 7484 generic.go:334] "Generic (PLEG): container finished" podID="22780dc9-2961-4b5f-aa74-d76ff4f888f6" containerID="76b9e896aebc56092a680a58657ee01297cba9b5f5bb8bfcb934b0efde1b3de4" exitCode=1 Mar 12 20:50:40.948229 master-0 kubenswrapper[7484]: 
I0312 20:50:40.948196 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"22780dc9-2961-4b5f-aa74-d76ff4f888f6","Type":"ContainerDied","Data":"76b9e896aebc56092a680a58657ee01297cba9b5f5bb8bfcb934b0efde1b3de4"} Mar 12 20:50:41.545405 master-0 kubenswrapper[7484]: I0312 20:50:41.545354 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_22780dc9-2961-4b5f-aa74-d76ff4f888f6/installer/0.log" Mar 12 20:50:41.546000 master-0 kubenswrapper[7484]: I0312 20:50:41.545457 7484 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 12 20:50:41.601145 master-0 kubenswrapper[7484]: I0312 20:50:41.601068 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22780dc9-2961-4b5f-aa74-d76ff4f888f6-kube-api-access\") pod \"22780dc9-2961-4b5f-aa74-d76ff4f888f6\" (UID: \"22780dc9-2961-4b5f-aa74-d76ff4f888f6\") " Mar 12 20:50:41.601355 master-0 kubenswrapper[7484]: I0312 20:50:41.601158 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22780dc9-2961-4b5f-aa74-d76ff4f888f6-kubelet-dir\") pod \"22780dc9-2961-4b5f-aa74-d76ff4f888f6\" (UID: \"22780dc9-2961-4b5f-aa74-d76ff4f888f6\") " Mar 12 20:50:41.601355 master-0 kubenswrapper[7484]: I0312 20:50:41.601191 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/22780dc9-2961-4b5f-aa74-d76ff4f888f6-var-lock\") pod \"22780dc9-2961-4b5f-aa74-d76ff4f888f6\" (UID: \"22780dc9-2961-4b5f-aa74-d76ff4f888f6\") " Mar 12 20:50:41.601355 master-0 kubenswrapper[7484]: I0312 20:50:41.601297 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/22780dc9-2961-4b5f-aa74-d76ff4f888f6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "22780dc9-2961-4b5f-aa74-d76ff4f888f6" (UID: "22780dc9-2961-4b5f-aa74-d76ff4f888f6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 20:50:41.601579 master-0 kubenswrapper[7484]: I0312 20:50:41.601473 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22780dc9-2961-4b5f-aa74-d76ff4f888f6-var-lock" (OuterVolumeSpecName: "var-lock") pod "22780dc9-2961-4b5f-aa74-d76ff4f888f6" (UID: "22780dc9-2961-4b5f-aa74-d76ff4f888f6"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 20:50:41.601908 master-0 kubenswrapper[7484]: I0312 20:50:41.601864 7484 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22780dc9-2961-4b5f-aa74-d76ff4f888f6-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 20:50:41.601973 master-0 kubenswrapper[7484]: I0312 20:50:41.601908 7484 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/22780dc9-2961-4b5f-aa74-d76ff4f888f6-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 20:50:41.605448 master-0 kubenswrapper[7484]: I0312 20:50:41.605377 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22780dc9-2961-4b5f-aa74-d76ff4f888f6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "22780dc9-2961-4b5f-aa74-d76ff4f888f6" (UID: "22780dc9-2961-4b5f-aa74-d76ff4f888f6"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 20:50:41.703531 master-0 kubenswrapper[7484]: I0312 20:50:41.703464 7484 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22780dc9-2961-4b5f-aa74-d76ff4f888f6-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 12 20:50:41.960000 master-0 kubenswrapper[7484]: I0312 20:50:41.959924 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_22780dc9-2961-4b5f-aa74-d76ff4f888f6/installer/0.log"
Mar 12 20:50:41.960274 master-0 kubenswrapper[7484]: I0312 20:50:41.960029 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"22780dc9-2961-4b5f-aa74-d76ff4f888f6","Type":"ContainerDied","Data":"211b963c9aab036e80333f855e64a6822a01dad5e2544e01958111cc75e2717b"}
Mar 12 20:50:41.960274 master-0 kubenswrapper[7484]: I0312 20:50:41.960088 7484 scope.go:117] "RemoveContainer" containerID="76b9e896aebc56092a680a58657ee01297cba9b5f5bb8bfcb934b0efde1b3de4"
Mar 12 20:50:41.960274 master-0 kubenswrapper[7484]: I0312 20:50:41.960132 7484 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 12 20:50:42.883242 master-0 kubenswrapper[7484]: I0312 20:50:42.883191 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 12 20:50:42.966601 master-0 kubenswrapper[7484]: I0312 20:50:42.966553 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk" event={"ID":"07330030-487d-4fa6-b5c3-67607355bbba","Type":"ContainerStarted","Data":"6c56c4db0c281b527d95916d742bbc3e553116d1e81c3dd471f1a45a35455823"}
Mar 12 20:50:42.966892 master-0 kubenswrapper[7484]: I0312 20:50:42.966874 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk"
Mar 12 20:50:42.974541 master-0 kubenswrapper[7484]: I0312 20:50:42.974485 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk"
Mar 12 20:50:43.018835 master-0 kubenswrapper[7484]: I0312 20:50:43.018750 7484 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 12 20:50:43.366670 master-0 kubenswrapper[7484]: I0312 20:50:43.366617 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"]
Mar 12 20:50:43.366917 master-0 kubenswrapper[7484]: E0312 20:50:43.366787 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22780dc9-2961-4b5f-aa74-d76ff4f888f6" containerName="installer"
Mar 12 20:50:43.366917 master-0 kubenswrapper[7484]: I0312 20:50:43.366801 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="22780dc9-2961-4b5f-aa74-d76ff4f888f6" containerName="installer"
Mar 12 20:50:43.366917 master-0 kubenswrapper[7484]: I0312 20:50:43.366913 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="22780dc9-2961-4b5f-aa74-d76ff4f888f6" containerName="installer"
Mar 12 20:50:43.367229 master-0 kubenswrapper[7484]: I0312 20:50:43.367204 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Mar 12 20:50:43.386520 master-0 kubenswrapper[7484]: I0312 20:50:43.386466 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-dhrfh"
Mar 12 20:50:43.520950 master-0 kubenswrapper[7484]: I0312 20:50:43.520888 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/954fe7f9-e138-49ab-ab8e-504b75914100-kube-api-access\") pod \"installer-4-master-0\" (UID: \"954fe7f9-e138-49ab-ab8e-504b75914100\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 12 20:50:43.521132 master-0 kubenswrapper[7484]: I0312 20:50:43.520953 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/954fe7f9-e138-49ab-ab8e-504b75914100-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"954fe7f9-e138-49ab-ab8e-504b75914100\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 12 20:50:43.521132 master-0 kubenswrapper[7484]: I0312 20:50:43.521011 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/954fe7f9-e138-49ab-ab8e-504b75914100-var-lock\") pod \"installer-4-master-0\" (UID: \"954fe7f9-e138-49ab-ab8e-504b75914100\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 12 20:50:43.547399 master-0 kubenswrapper[7484]: I0312 20:50:43.547347 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"]
Mar 12 20:50:43.623687 master-0 kubenswrapper[7484]: I0312 20:50:43.623170 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/954fe7f9-e138-49ab-ab8e-504b75914100-kube-api-access\") pod \"installer-4-master-0\" (UID: \"954fe7f9-e138-49ab-ab8e-504b75914100\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 12 20:50:43.623687 master-0 kubenswrapper[7484]: I0312 20:50:43.623236 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/954fe7f9-e138-49ab-ab8e-504b75914100-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"954fe7f9-e138-49ab-ab8e-504b75914100\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 12 20:50:43.623687 master-0 kubenswrapper[7484]: I0312 20:50:43.623275 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/954fe7f9-e138-49ab-ab8e-504b75914100-var-lock\") pod \"installer-4-master-0\" (UID: \"954fe7f9-e138-49ab-ab8e-504b75914100\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 12 20:50:43.623687 master-0 kubenswrapper[7484]: I0312 20:50:43.623402 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/954fe7f9-e138-49ab-ab8e-504b75914100-var-lock\") pod \"installer-4-master-0\" (UID: \"954fe7f9-e138-49ab-ab8e-504b75914100\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 12 20:50:43.623986 master-0 kubenswrapper[7484]: I0312 20:50:43.623792 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/954fe7f9-e138-49ab-ab8e-504b75914100-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"954fe7f9-e138-49ab-ab8e-504b75914100\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 12 20:50:43.646963 master-0 kubenswrapper[7484]: I0312 20:50:43.646925 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/954fe7f9-e138-49ab-ab8e-504b75914100-kube-api-access\") pod \"installer-4-master-0\" (UID: \"954fe7f9-e138-49ab-ab8e-504b75914100\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 12 20:50:43.689979 master-0 kubenswrapper[7484]: I0312 20:50:43.689939 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Mar 12 20:50:43.771868 master-0 kubenswrapper[7484]: I0312 20:50:43.759607 7484 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22780dc9-2961-4b5f-aa74-d76ff4f888f6" path="/var/lib/kubelet/pods/22780dc9-2961-4b5f-aa74-d76ff4f888f6/volumes"
Mar 12 20:50:43.982978 master-0 kubenswrapper[7484]: I0312 20:50:43.982913 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4" event={"ID":"98d99166-c42a-4169-87e8-4209570aec50","Type":"ContainerStarted","Data":"669075218ae9c6140a6d1a11ffe9044b67954f86f47a22c7c2a5d67c3bf0eaba"}
Mar 12 20:50:43.983464 master-0 kubenswrapper[7484]: I0312 20:50:43.983275 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4"
Mar 12 20:50:43.986999 master-0 kubenswrapper[7484]: I0312 20:50:43.986970 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8" event={"ID":"54184647-6e9a-43f7-90b1-5d8815f8b1ab","Type":"ContainerStarted","Data":"5f28ffa4da0fdb90f29b89cf60e30b4a358ce45a2cda62cc81254fb83080a074"}
Mar 12 20:50:43.987110 master-0 kubenswrapper[7484]: I0312 20:50:43.987098 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8"
Mar 12 20:50:43.988432 master-0 kubenswrapper[7484]: I0312 20:50:43.988127 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4"
Mar 12 20:50:44.568603 master-0 kubenswrapper[7484]: I0312 20:50:44.568513 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"]
Mar 12 20:50:44.833581 master-0 kubenswrapper[7484]: I0312 20:50:44.833407 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jblsg"]
Mar 12 20:50:44.834561 master-0 kubenswrapper[7484]: I0312 20:50:44.834519 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jblsg"
Mar 12 20:50:44.837087 master-0 kubenswrapper[7484]: I0312 20:50:44.837050 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-w9pdx"
Mar 12 20:50:44.847837 master-0 kubenswrapper[7484]: I0312 20:50:44.847763 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jblsg"]
Mar 12 20:50:44.971935 master-0 kubenswrapper[7484]: I0312 20:50:44.971900 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/567a9a33-1a82-4c48-b541-7e0eaae11f57-utilities\") pod \"community-operators-jblsg\" (UID: \"567a9a33-1a82-4c48-b541-7e0eaae11f57\") " pod="openshift-marketplace/community-operators-jblsg"
Mar 12 20:50:44.972122 master-0 kubenswrapper[7484]: I0312 20:50:44.971955 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzn6t\" (UniqueName: \"kubernetes.io/projected/567a9a33-1a82-4c48-b541-7e0eaae11f57-kube-api-access-nzn6t\") pod \"community-operators-jblsg\" (UID: \"567a9a33-1a82-4c48-b541-7e0eaae11f57\") " pod="openshift-marketplace/community-operators-jblsg"
Mar 12 20:50:44.972122 master-0 kubenswrapper[7484]: I0312 20:50:44.971980 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/567a9a33-1a82-4c48-b541-7e0eaae11f57-catalog-content\") pod \"community-operators-jblsg\" (UID: \"567a9a33-1a82-4c48-b541-7e0eaae11f57\") " pod="openshift-marketplace/community-operators-jblsg"
Mar 12 20:50:45.005324 master-0 kubenswrapper[7484]: I0312 20:50:45.005194 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"954fe7f9-e138-49ab-ab8e-504b75914100","Type":"ContainerStarted","Data":"53ca9cb8afb78daa40b60fb8598538d996992c55bbb55bf6668f862728b14188"}
Mar 12 20:50:45.020505 master-0 kubenswrapper[7484]: I0312 20:50:45.020455 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-94rll"]
Mar 12 20:50:45.021549 master-0 kubenswrapper[7484]: I0312 20:50:45.021531 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-94rll"
Mar 12 20:50:45.024098 master-0 kubenswrapper[7484]: I0312 20:50:45.024064 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-t5dxh"
Mar 12 20:50:45.048198 master-0 kubenswrapper[7484]: I0312 20:50:45.048129 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-94rll"]
Mar 12 20:50:45.085488 master-0 kubenswrapper[7484]: I0312 20:50:45.085379 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzn6t\" (UniqueName: \"kubernetes.io/projected/567a9a33-1a82-4c48-b541-7e0eaae11f57-kube-api-access-nzn6t\") pod \"community-operators-jblsg\" (UID: \"567a9a33-1a82-4c48-b541-7e0eaae11f57\") " pod="openshift-marketplace/community-operators-jblsg"
Mar 12 20:50:45.085727 master-0 kubenswrapper[7484]: I0312 20:50:45.085709 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/567a9a33-1a82-4c48-b541-7e0eaae11f57-catalog-content\") pod \"community-operators-jblsg\" (UID: \"567a9a33-1a82-4c48-b541-7e0eaae11f57\") " pod="openshift-marketplace/community-operators-jblsg"
Mar 12 20:50:45.085857 master-0 kubenswrapper[7484]: I0312 20:50:45.085839 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c589179-0df4-4fe8-bfdd-965c3e7652c5-utilities\") pod \"certified-operators-94rll\" (UID: \"4c589179-0df4-4fe8-bfdd-965c3e7652c5\") " pod="openshift-marketplace/certified-operators-94rll"
Mar 12 20:50:45.086107 master-0 kubenswrapper[7484]: I0312 20:50:45.086037 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c589179-0df4-4fe8-bfdd-965c3e7652c5-catalog-content\") pod \"certified-operators-94rll\" (UID: \"4c589179-0df4-4fe8-bfdd-965c3e7652c5\") " pod="openshift-marketplace/certified-operators-94rll"
Mar 12 20:50:45.086247 master-0 kubenswrapper[7484]: I0312 20:50:45.086211 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/567a9a33-1a82-4c48-b541-7e0eaae11f57-catalog-content\") pod \"community-operators-jblsg\" (UID: \"567a9a33-1a82-4c48-b541-7e0eaae11f57\") " pod="openshift-marketplace/community-operators-jblsg"
Mar 12 20:50:45.086332 master-0 kubenswrapper[7484]: I0312 20:50:45.086313 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/567a9a33-1a82-4c48-b541-7e0eaae11f57-utilities\") pod \"community-operators-jblsg\" (UID: \"567a9a33-1a82-4c48-b541-7e0eaae11f57\") " pod="openshift-marketplace/community-operators-jblsg"
Mar 12 20:50:45.086367 master-0 kubenswrapper[7484]: I0312 20:50:45.086346 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbqfz\" (UniqueName: \"kubernetes.io/projected/4c589179-0df4-4fe8-bfdd-965c3e7652c5-kube-api-access-pbqfz\") pod \"certified-operators-94rll\" (UID: \"4c589179-0df4-4fe8-bfdd-965c3e7652c5\") " pod="openshift-marketplace/certified-operators-94rll"
Mar 12 20:50:45.086939 master-0 kubenswrapper[7484]: I0312 20:50:45.086906 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/567a9a33-1a82-4c48-b541-7e0eaae11f57-utilities\") pod \"community-operators-jblsg\" (UID: \"567a9a33-1a82-4c48-b541-7e0eaae11f57\") " pod="openshift-marketplace/community-operators-jblsg"
Mar 12 20:50:45.107003 master-0 kubenswrapper[7484]: I0312 20:50:45.106952 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzn6t\" (UniqueName: \"kubernetes.io/projected/567a9a33-1a82-4c48-b541-7e0eaae11f57-kube-api-access-nzn6t\") pod \"community-operators-jblsg\" (UID: \"567a9a33-1a82-4c48-b541-7e0eaae11f57\") " pod="openshift-marketplace/community-operators-jblsg"
Mar 12 20:50:45.167407 master-0 kubenswrapper[7484]: I0312 20:50:45.167354 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jblsg"
Mar 12 20:50:45.186435 master-0 kubenswrapper[7484]: I0312 20:50:45.186384 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Mar 12 20:50:45.186669 master-0 kubenswrapper[7484]: I0312 20:50:45.186552 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-1-master-0" podUID="5bec49ae-0c52-451f-8d8d-6e822cd335cc" containerName="installer" containerID="cri-o://2896c83cb0813f6e8f8445e2f7c57b60f7ca523d7afd776f565c9b3ac5269151" gracePeriod=30
Mar 12 20:50:45.190222 master-0 kubenswrapper[7484]: I0312 20:50:45.190173 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c589179-0df4-4fe8-bfdd-965c3e7652c5-utilities\") pod \"certified-operators-94rll\" (UID: \"4c589179-0df4-4fe8-bfdd-965c3e7652c5\") " pod="openshift-marketplace/certified-operators-94rll"
Mar 12 20:50:45.190347 master-0 kubenswrapper[7484]: I0312 20:50:45.190231 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c589179-0df4-4fe8-bfdd-965c3e7652c5-catalog-content\") pod \"certified-operators-94rll\" (UID: \"4c589179-0df4-4fe8-bfdd-965c3e7652c5\") " pod="openshift-marketplace/certified-operators-94rll"
Mar 12 20:50:45.190347 master-0 kubenswrapper[7484]: I0312 20:50:45.190323 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbqfz\" (UniqueName: \"kubernetes.io/projected/4c589179-0df4-4fe8-bfdd-965c3e7652c5-kube-api-access-pbqfz\") pod \"certified-operators-94rll\" (UID: \"4c589179-0df4-4fe8-bfdd-965c3e7652c5\") " pod="openshift-marketplace/certified-operators-94rll"
Mar 12 20:50:45.190896 master-0 kubenswrapper[7484]: I0312 20:50:45.190870 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c589179-0df4-4fe8-bfdd-965c3e7652c5-utilities\") pod \"certified-operators-94rll\" (UID: \"4c589179-0df4-4fe8-bfdd-965c3e7652c5\") " pod="openshift-marketplace/certified-operators-94rll"
Mar 12 20:50:45.191306 master-0 kubenswrapper[7484]: I0312 20:50:45.191284 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c589179-0df4-4fe8-bfdd-965c3e7652c5-catalog-content\") pod \"certified-operators-94rll\" (UID: \"4c589179-0df4-4fe8-bfdd-965c3e7652c5\") " pod="openshift-marketplace/certified-operators-94rll"
Mar 12 20:50:45.205677 master-0 kubenswrapper[7484]: I0312 20:50:45.205640 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbqfz\" (UniqueName: \"kubernetes.io/projected/4c589179-0df4-4fe8-bfdd-965c3e7652c5-kube-api-access-pbqfz\") pod \"certified-operators-94rll\" (UID: \"4c589179-0df4-4fe8-bfdd-965c3e7652c5\") " pod="openshift-marketplace/certified-operators-94rll"
Mar 12 20:50:45.340925 master-0 kubenswrapper[7484]: I0312 20:50:45.339406 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-94rll"
Mar 12 20:50:45.711027 master-0 kubenswrapper[7484]: I0312 20:50:45.710895 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jblsg"]
Mar 12 20:50:45.716470 master-0 kubenswrapper[7484]: W0312 20:50:45.716414 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod567a9a33_1a82_4c48_b541_7e0eaae11f57.slice/crio-35cbca359bb8cc6540d875e41fda798cb28c0b21e42a0439c798f577e385a0d1 WatchSource:0}: Error finding container 35cbca359bb8cc6540d875e41fda798cb28c0b21e42a0439c798f577e385a0d1: Status 404 returned error can't find the container with id 35cbca359bb8cc6540d875e41fda798cb28c0b21e42a0439c798f577e385a0d1
Mar 12 20:50:45.839090 master-0 kubenswrapper[7484]: I0312 20:50:45.839052 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-94rll"]
Mar 12 20:50:45.856570 master-0 kubenswrapper[7484]: I0312 20:50:45.856203 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-659d778978-djtms"]
Mar 12 20:50:45.856946 master-0 kubenswrapper[7484]: I0312 20:50:45.856926 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-659d778978-djtms"
Mar 12 20:50:45.861934 master-0 kubenswrapper[7484]: I0312 20:50:45.861381 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-vmm2r"
Mar 12 20:50:45.861934 master-0 kubenswrapper[7484]: I0312 20:50:45.861657 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 12 20:50:45.875211 master-0 kubenswrapper[7484]: I0312 20:50:45.875157 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-659d778978-djtms"]
Mar 12 20:50:45.905264 master-0 kubenswrapper[7484]: I0312 20:50:45.904883 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm7d5\" (UniqueName: \"kubernetes.io/projected/067fdca7-c61d-470c-8421-73e0b62df3e4-kube-api-access-tm7d5\") pod \"packageserver-659d778978-djtms\" (UID: \"067fdca7-c61d-470c-8421-73e0b62df3e4\") " pod="openshift-operator-lifecycle-manager/packageserver-659d778978-djtms"
Mar 12 20:50:45.905264 master-0 kubenswrapper[7484]: I0312 20:50:45.904946 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/067fdca7-c61d-470c-8421-73e0b62df3e4-apiservice-cert\") pod \"packageserver-659d778978-djtms\" (UID: \"067fdca7-c61d-470c-8421-73e0b62df3e4\") " pod="openshift-operator-lifecycle-manager/packageserver-659d778978-djtms"
Mar 12 20:50:45.905264 master-0 kubenswrapper[7484]: I0312 20:50:45.904987 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/067fdca7-c61d-470c-8421-73e0b62df3e4-webhook-cert\") pod \"packageserver-659d778978-djtms\" (UID: \"067fdca7-c61d-470c-8421-73e0b62df3e4\") " pod="openshift-operator-lifecycle-manager/packageserver-659d778978-djtms"
Mar 12 20:50:45.905264 master-0 kubenswrapper[7484]: I0312 20:50:45.905024 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/067fdca7-c61d-470c-8421-73e0b62df3e4-tmpfs\") pod \"packageserver-659d778978-djtms\" (UID: \"067fdca7-c61d-470c-8421-73e0b62df3e4\") " pod="openshift-operator-lifecycle-manager/packageserver-659d778978-djtms"
Mar 12 20:50:46.006294 master-0 kubenswrapper[7484]: I0312 20:50:46.006238 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm7d5\" (UniqueName: \"kubernetes.io/projected/067fdca7-c61d-470c-8421-73e0b62df3e4-kube-api-access-tm7d5\") pod \"packageserver-659d778978-djtms\" (UID: \"067fdca7-c61d-470c-8421-73e0b62df3e4\") " pod="openshift-operator-lifecycle-manager/packageserver-659d778978-djtms"
Mar 12 20:50:46.006294 master-0 kubenswrapper[7484]: I0312 20:50:46.006293 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/067fdca7-c61d-470c-8421-73e0b62df3e4-apiservice-cert\") pod \"packageserver-659d778978-djtms\" (UID: \"067fdca7-c61d-470c-8421-73e0b62df3e4\") " pod="openshift-operator-lifecycle-manager/packageserver-659d778978-djtms"
Mar 12 20:50:46.006884 master-0 kubenswrapper[7484]: I0312 20:50:46.006314 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/067fdca7-c61d-470c-8421-73e0b62df3e4-webhook-cert\") pod \"packageserver-659d778978-djtms\" (UID: \"067fdca7-c61d-470c-8421-73e0b62df3e4\") " pod="openshift-operator-lifecycle-manager/packageserver-659d778978-djtms"
Mar 12 20:50:46.006884 master-0 kubenswrapper[7484]: I0312 20:50:46.006709 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/067fdca7-c61d-470c-8421-73e0b62df3e4-tmpfs\") pod \"packageserver-659d778978-djtms\" (UID: \"067fdca7-c61d-470c-8421-73e0b62df3e4\") " pod="openshift-operator-lifecycle-manager/packageserver-659d778978-djtms"
Mar 12 20:50:46.007376 master-0 kubenswrapper[7484]: I0312 20:50:46.007328 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/067fdca7-c61d-470c-8421-73e0b62df3e4-tmpfs\") pod \"packageserver-659d778978-djtms\" (UID: \"067fdca7-c61d-470c-8421-73e0b62df3e4\") " pod="openshift-operator-lifecycle-manager/packageserver-659d778978-djtms"
Mar 12 20:50:46.009472 master-0 kubenswrapper[7484]: I0312 20:50:46.009435 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/067fdca7-c61d-470c-8421-73e0b62df3e4-apiservice-cert\") pod \"packageserver-659d778978-djtms\" (UID: \"067fdca7-c61d-470c-8421-73e0b62df3e4\") " pod="openshift-operator-lifecycle-manager/packageserver-659d778978-djtms"
Mar 12 20:50:46.010134 master-0 kubenswrapper[7484]: I0312 20:50:46.010085 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/067fdca7-c61d-470c-8421-73e0b62df3e4-webhook-cert\") pod \"packageserver-659d778978-djtms\" (UID: \"067fdca7-c61d-470c-8421-73e0b62df3e4\") " pod="openshift-operator-lifecycle-manager/packageserver-659d778978-djtms"
Mar 12 20:50:46.014344 master-0 kubenswrapper[7484]: I0312 20:50:46.014307 7484 generic.go:334] "Generic (PLEG): container finished" podID="567a9a33-1a82-4c48-b541-7e0eaae11f57" containerID="5b959eb86868abbb3911c6888fbbe4637dd94eb120d52558a304ceb3cf5d43e3" exitCode=0
Mar 12 20:50:46.014430 master-0 kubenswrapper[7484]: I0312 20:50:46.014364 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jblsg" event={"ID":"567a9a33-1a82-4c48-b541-7e0eaae11f57","Type":"ContainerDied","Data":"5b959eb86868abbb3911c6888fbbe4637dd94eb120d52558a304ceb3cf5d43e3"}
Mar 12 20:50:46.014430 master-0 kubenswrapper[7484]: I0312 20:50:46.014390 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jblsg" event={"ID":"567a9a33-1a82-4c48-b541-7e0eaae11f57","Type":"ContainerStarted","Data":"35cbca359bb8cc6540d875e41fda798cb28c0b21e42a0439c798f577e385a0d1"}
Mar 12 20:50:46.016751 master-0 kubenswrapper[7484]: I0312 20:50:46.016702 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"954fe7f9-e138-49ab-ab8e-504b75914100","Type":"ContainerStarted","Data":"41e5296df7c3d4b1110f31058e02c84e5cd9852b203025b79d16be32d4b3de88"}
Mar 12 20:50:46.018981 master-0 kubenswrapper[7484]: I0312 20:50:46.018939 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-94rll" event={"ID":"4c589179-0df4-4fe8-bfdd-965c3e7652c5","Type":"ContainerStarted","Data":"12893a728732446f94ca8814579a35744128ccd4319c3c765ac2be173f953384"}
Mar 12 20:50:46.028751 master-0 kubenswrapper[7484]: I0312 20:50:46.028712 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm7d5\" (UniqueName: \"kubernetes.io/projected/067fdca7-c61d-470c-8421-73e0b62df3e4-kube-api-access-tm7d5\") pod \"packageserver-659d778978-djtms\" (UID: \"067fdca7-c61d-470c-8421-73e0b62df3e4\") " pod="openshift-operator-lifecycle-manager/packageserver-659d778978-djtms"
Mar 12 20:50:46.056880 master-0 kubenswrapper[7484]: I0312 20:50:46.056702 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-4-master-0" podStartSLOduration=3.056680366 podStartE2EDuration="3.056680366s" podCreationTimestamp="2026-03-12 20:50:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 20:50:46.053904896 +0000 UTC m=+58.539173708" watchObservedRunningTime="2026-03-12 20:50:46.056680366 +0000 UTC m=+58.541949168"
Mar 12 20:50:46.191256 master-0 kubenswrapper[7484]: I0312 20:50:46.191185 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-659d778978-djtms"
Mar 12 20:50:46.425450 master-0 kubenswrapper[7484]: I0312 20:50:46.425369 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-66qvj"]
Mar 12 20:50:46.426261 master-0 kubenswrapper[7484]: I0312 20:50:46.426229 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-66qvj"
Mar 12 20:50:46.428028 master-0 kubenswrapper[7484]: I0312 20:50:46.427982 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-pvnjq"
Mar 12 20:50:46.441360 master-0 kubenswrapper[7484]: I0312 20:50:46.441272 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-66qvj"]
Mar 12 20:50:46.515924 master-0 kubenswrapper[7484]: I0312 20:50:46.515729 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6eace9f-a52d-4570-a932-959538e1f2bc-catalog-content\") pod \"redhat-marketplace-66qvj\" (UID: \"d6eace9f-a52d-4570-a932-959538e1f2bc\") " pod="openshift-marketplace/redhat-marketplace-66qvj"
Mar 12 20:50:46.515924 master-0 kubenswrapper[7484]: I0312 20:50:46.515859 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l8qp\" (UniqueName: \"kubernetes.io/projected/d6eace9f-a52d-4570-a932-959538e1f2bc-kube-api-access-8l8qp\") pod \"redhat-marketplace-66qvj\" (UID: \"d6eace9f-a52d-4570-a932-959538e1f2bc\") " pod="openshift-marketplace/redhat-marketplace-66qvj"
Mar 12 20:50:46.515924 master-0 kubenswrapper[7484]: I0312 20:50:46.515921 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6eace9f-a52d-4570-a932-959538e1f2bc-utilities\") pod \"redhat-marketplace-66qvj\" (UID: \"d6eace9f-a52d-4570-a932-959538e1f2bc\") " pod="openshift-marketplace/redhat-marketplace-66qvj"
Mar 12 20:50:46.516295 master-0 kubenswrapper[7484]: I0312 20:50:46.515739 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6686554ddc-xzwfp"]
Mar 12 20:50:46.516507 master-0 kubenswrapper[7484]: I0312 20:50:46.516482 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-xzwfp"
Mar 12 20:50:46.519299 master-0 kubenswrapper[7484]: I0312 20:50:46.519260 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-cdrqx"
Mar 12 20:50:46.520038 master-0 kubenswrapper[7484]: I0312 20:50:46.519988 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Mar 12 20:50:46.520255 master-0 kubenswrapper[7484]: I0312 20:50:46.520222 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 12 20:50:46.521063 master-0 kubenswrapper[7484]: I0312 20:50:46.521033 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 12 20:50:46.530503 master-0 kubenswrapper[7484]: I0312 20:50:46.529552 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6686554ddc-xzwfp"]
Mar 12 20:50:46.617648 master-0 kubenswrapper[7484]: I0312 20:50:46.617596 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ddw4\" (UniqueName: \"kubernetes.io/projected/e03d34d0-f7c1-4dcf-8b84-89ad647cc10f-kube-api-access-8ddw4\") pod \"control-plane-machine-set-operator-6686554ddc-xzwfp\" (UID: \"e03d34d0-f7c1-4dcf-8b84-89ad647cc10f\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-xzwfp"
Mar 12 20:50:46.617648 master-0 kubenswrapper[7484]: I0312 20:50:46.617663 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e03d34d0-f7c1-4dcf-8b84-89ad647cc10f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-xzwfp\" (UID: \"e03d34d0-f7c1-4dcf-8b84-89ad647cc10f\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-xzwfp"
Mar 12 20:50:46.617903 master-0 kubenswrapper[7484]: I0312 20:50:46.617714 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l8qp\" (UniqueName: \"kubernetes.io/projected/d6eace9f-a52d-4570-a932-959538e1f2bc-kube-api-access-8l8qp\") pod \"redhat-marketplace-66qvj\" (UID: \"d6eace9f-a52d-4570-a932-959538e1f2bc\") " pod="openshift-marketplace/redhat-marketplace-66qvj"
Mar 12 20:50:46.617903 master-0 kubenswrapper[7484]: I0312 20:50:46.617764 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6eace9f-a52d-4570-a932-959538e1f2bc-utilities\") pod \"redhat-marketplace-66qvj\" (UID: \"d6eace9f-a52d-4570-a932-959538e1f2bc\") " pod="openshift-marketplace/redhat-marketplace-66qvj"
Mar 12 20:50:46.617903 master-0 kubenswrapper[7484]: I0312 20:50:46.617786 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6eace9f-a52d-4570-a932-959538e1f2bc-catalog-content\") pod \"redhat-marketplace-66qvj\" (UID: \"d6eace9f-a52d-4570-a932-959538e1f2bc\") " pod="openshift-marketplace/redhat-marketplace-66qvj"
Mar 12 20:50:46.618329 master-0 kubenswrapper[7484]: I0312 20:50:46.618272 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6eace9f-a52d-4570-a932-959538e1f2bc-catalog-content\") pod \"redhat-marketplace-66qvj\" (UID: \"d6eace9f-a52d-4570-a932-959538e1f2bc\") " pod="openshift-marketplace/redhat-marketplace-66qvj"
Mar 12 20:50:46.618636 master-0 kubenswrapper[7484]: I0312 20:50:46.618601 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6eace9f-a52d-4570-a932-959538e1f2bc-utilities\") pod \"redhat-marketplace-66qvj\" (UID: \"d6eace9f-a52d-4570-a932-959538e1f2bc\") " pod="openshift-marketplace/redhat-marketplace-66qvj"
Mar 12 20:50:46.637831 master-0 kubenswrapper[7484]: I0312 20:50:46.637772 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l8qp\" (UniqueName: \"kubernetes.io/projected/d6eace9f-a52d-4570-a932-959538e1f2bc-kube-api-access-8l8qp\") pod \"redhat-marketplace-66qvj\" (UID: \"d6eace9f-a52d-4570-a932-959538e1f2bc\") " pod="openshift-marketplace/redhat-marketplace-66qvj"
Mar 12 20:50:46.643438 master-0 kubenswrapper[7484]: I0312 20:50:46.643388 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-659d778978-djtms"]
Mar 12 20:50:46.651781 master-0 kubenswrapper[7484]: W0312 20:50:46.651721 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod067fdca7_c61d_470c_8421_73e0b62df3e4.slice/crio-edf68201b8db3425cf21f5fe04a38b1fb9194e82ba3d64c623597064ff3f5fa4 WatchSource:0}: Error finding container edf68201b8db3425cf21f5fe04a38b1fb9194e82ba3d64c623597064ff3f5fa4: Status 404 returned error can't find the container with id edf68201b8db3425cf21f5fe04a38b1fb9194e82ba3d64c623597064ff3f5fa4
Mar 12 20:50:46.718563 master-0 kubenswrapper[7484]: I0312 20:50:46.718473 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ddw4\" (UniqueName: \"kubernetes.io/projected/e03d34d0-f7c1-4dcf-8b84-89ad647cc10f-kube-api-access-8ddw4\") pod \"control-plane-machine-set-operator-6686554ddc-xzwfp\" (UID: \"e03d34d0-f7c1-4dcf-8b84-89ad647cc10f\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-xzwfp"
Mar 12 20:50:46.718714 master-0 kubenswrapper[7484]: I0312 20:50:46.718564 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e03d34d0-f7c1-4dcf-8b84-89ad647cc10f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-xzwfp\" (UID: \"e03d34d0-f7c1-4dcf-8b84-89ad647cc10f\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-xzwfp"
Mar 12 20:50:46.723211 master-0 kubenswrapper[7484]: I0312 20:50:46.723161 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e03d34d0-f7c1-4dcf-8b84-89ad647cc10f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-xzwfp\" (UID: \"e03d34d0-f7c1-4dcf-8b84-89ad647cc10f\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-xzwfp"
Mar 12 20:50:46.745563 master-0 kubenswrapper[7484]: I0312 20:50:46.745500 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ddw4\" (UniqueName: \"kubernetes.io/projected/e03d34d0-f7c1-4dcf-8b84-89ad647cc10f-kube-api-access-8ddw4\") pod \"control-plane-machine-set-operator-6686554ddc-xzwfp\" (UID: \"e03d34d0-f7c1-4dcf-8b84-89ad647cc10f\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-xzwfp"
Mar 12 20:50:46.749999 master-0 kubenswrapper[7484]: I0312 20:50:46.749908 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-66qvj"
Mar 12 20:50:46.852489 master-0 kubenswrapper[7484]: I0312 20:50:46.852450 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-xzwfp"
Mar 12 20:50:47.029587 master-0 kubenswrapper[7484]: I0312 20:50:47.029387 7484 generic.go:334] "Generic (PLEG): container finished" podID="4c589179-0df4-4fe8-bfdd-965c3e7652c5" containerID="2343eedc615ca5a68e9b6c26c7cebd6a505b4d3931d7695418b25f7d657329ac" exitCode=0
Mar 12 20:50:47.029587 master-0 kubenswrapper[7484]: I0312 20:50:47.029466 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-94rll" event={"ID":"4c589179-0df4-4fe8-bfdd-965c3e7652c5","Type":"ContainerDied","Data":"2343eedc615ca5a68e9b6c26c7cebd6a505b4d3931d7695418b25f7d657329ac"}
Mar 12 20:50:47.035158 master-0 kubenswrapper[7484]: I0312 20:50:47.035124 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_a35e2486-4d5e-43e5-89c0-c562002717bb/installer/0.log"
Mar 12 20:50:47.035244 master-0 kubenswrapper[7484]: I0312 20:50:47.035180 7484 generic.go:334] "Generic (PLEG): container finished" podID="a35e2486-4d5e-43e5-89c0-c562002717bb" containerID="a6b8b068d61d9dd724915057535283b9904d114374ac0759be8070deebe9ff86" exitCode=1
Mar 12 20:50:47.035244 master-0 kubenswrapper[7484]: I0312 20:50:47.035237 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0"
event={"ID":"a35e2486-4d5e-43e5-89c0-c562002717bb","Type":"ContainerDied","Data":"a6b8b068d61d9dd724915057535283b9904d114374ac0759be8070deebe9ff86"} Mar 12 20:50:47.038431 master-0 kubenswrapper[7484]: I0312 20:50:47.038402 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-659d778978-djtms" event={"ID":"067fdca7-c61d-470c-8421-73e0b62df3e4","Type":"ContainerStarted","Data":"f9732f3eb7289a87bf05e0af9fe6510252a068c32ba70d56fd5fb684deab6c9f"} Mar 12 20:50:47.038503 master-0 kubenswrapper[7484]: I0312 20:50:47.038438 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-659d778978-djtms" event={"ID":"067fdca7-c61d-470c-8421-73e0b62df3e4","Type":"ContainerStarted","Data":"edf68201b8db3425cf21f5fe04a38b1fb9194e82ba3d64c623597064ff3f5fa4"} Mar 12 20:50:47.038798 master-0 kubenswrapper[7484]: I0312 20:50:47.038766 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-659d778978-djtms" Mar 12 20:50:47.079475 master-0 kubenswrapper[7484]: I0312 20:50:47.079265 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-659d778978-djtms" podStartSLOduration=2.079244875 podStartE2EDuration="2.079244875s" podCreationTimestamp="2026-03-12 20:50:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 20:50:47.078510964 +0000 UTC m=+59.563779766" watchObservedRunningTime="2026-03-12 20:50:47.079244875 +0000 UTC m=+59.564513677" Mar 12 20:50:47.151687 master-0 kubenswrapper[7484]: I0312 20:50:47.151654 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6686554ddc-xzwfp"] Mar 12 20:50:47.159272 master-0 kubenswrapper[7484]: I0312 20:50:47.159239 7484 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_a35e2486-4d5e-43e5-89c0-c562002717bb/installer/0.log" Mar 12 20:50:47.159373 master-0 kubenswrapper[7484]: I0312 20:50:47.159321 7484 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 12 20:50:47.162416 master-0 kubenswrapper[7484]: W0312 20:50:47.162370 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode03d34d0_f7c1_4dcf_8b84_89ad647cc10f.slice/crio-5e4d5da2d0ad5dc2858d68d96b482697435e191e20036d664e457ef5572ac29e WatchSource:0}: Error finding container 5e4d5da2d0ad5dc2858d68d96b482697435e191e20036d664e457ef5572ac29e: Status 404 returned error can't find the container with id 5e4d5da2d0ad5dc2858d68d96b482697435e191e20036d664e457ef5572ac29e Mar 12 20:50:47.197190 master-0 kubenswrapper[7484]: I0312 20:50:47.197131 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-66qvj"] Mar 12 20:50:47.336047 master-0 kubenswrapper[7484]: I0312 20:50:47.335978 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a35e2486-4d5e-43e5-89c0-c562002717bb-var-lock\") pod \"a35e2486-4d5e-43e5-89c0-c562002717bb\" (UID: \"a35e2486-4d5e-43e5-89c0-c562002717bb\") " Mar 12 20:50:47.336126 master-0 kubenswrapper[7484]: I0312 20:50:47.336093 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a35e2486-4d5e-43e5-89c0-c562002717bb-var-lock" (OuterVolumeSpecName: "var-lock") pod "a35e2486-4d5e-43e5-89c0-c562002717bb" (UID: "a35e2486-4d5e-43e5-89c0-c562002717bb"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 20:50:47.336164 master-0 kubenswrapper[7484]: I0312 20:50:47.336145 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a35e2486-4d5e-43e5-89c0-c562002717bb-kubelet-dir\") pod \"a35e2486-4d5e-43e5-89c0-c562002717bb\" (UID: \"a35e2486-4d5e-43e5-89c0-c562002717bb\") " Mar 12 20:50:47.336244 master-0 kubenswrapper[7484]: I0312 20:50:47.336215 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a35e2486-4d5e-43e5-89c0-c562002717bb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a35e2486-4d5e-43e5-89c0-c562002717bb" (UID: "a35e2486-4d5e-43e5-89c0-c562002717bb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 20:50:47.336282 master-0 kubenswrapper[7484]: I0312 20:50:47.336239 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a35e2486-4d5e-43e5-89c0-c562002717bb-kube-api-access\") pod \"a35e2486-4d5e-43e5-89c0-c562002717bb\" (UID: \"a35e2486-4d5e-43e5-89c0-c562002717bb\") " Mar 12 20:50:47.336851 master-0 kubenswrapper[7484]: I0312 20:50:47.336747 7484 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a35e2486-4d5e-43e5-89c0-c562002717bb-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 20:50:47.336851 master-0 kubenswrapper[7484]: I0312 20:50:47.336774 7484 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a35e2486-4d5e-43e5-89c0-c562002717bb-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 20:50:47.341403 master-0 kubenswrapper[7484]: I0312 20:50:47.341271 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/a35e2486-4d5e-43e5-89c0-c562002717bb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a35e2486-4d5e-43e5-89c0-c562002717bb" (UID: "a35e2486-4d5e-43e5-89c0-c562002717bb"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 20:50:47.436945 master-0 kubenswrapper[7484]: I0312 20:50:47.436881 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-659d778978-djtms" Mar 12 20:50:47.437628 master-0 kubenswrapper[7484]: I0312 20:50:47.437584 7484 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a35e2486-4d5e-43e5-89c0-c562002717bb-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 20:50:48.052446 master-0 kubenswrapper[7484]: I0312 20:50:48.052207 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 12 20:50:48.054194 master-0 kubenswrapper[7484]: E0312 20:50:48.054156 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a35e2486-4d5e-43e5-89c0-c562002717bb" containerName="installer" Mar 12 20:50:48.054194 master-0 kubenswrapper[7484]: I0312 20:50:48.054186 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="a35e2486-4d5e-43e5-89c0-c562002717bb" containerName="installer" Mar 12 20:50:48.054507 master-0 kubenswrapper[7484]: I0312 20:50:48.054463 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="a35e2486-4d5e-43e5-89c0-c562002717bb" containerName="installer" Mar 12 20:50:48.057950 master-0 kubenswrapper[7484]: I0312 20:50:48.056188 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 20:50:48.057950 master-0 kubenswrapper[7484]: I0312 20:50:48.056726 7484 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 12 20:50:48.057950 master-0 kubenswrapper[7484]: I0312 20:50:48.057422 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl" containerID="cri-o://2345e4b4a496bb5d1af4b4d3dcfdac80e0d3cab03968a70bb1a28a27cbc4f272" gracePeriod=30 Mar 12 20:50:48.057950 master-0 kubenswrapper[7484]: I0312 20:50:48.057520 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd" containerID="cri-o://32b57ce4e66fc70ca937a57ebca0915b26069ef8bb25e1ae1b25bda655e0ef63" gracePeriod=30 Mar 12 20:50:48.063502 master-0 kubenswrapper[7484]: I0312 20:50:48.061441 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-xq8cf" Mar 12 20:50:48.066845 master-0 kubenswrapper[7484]: I0312 20:50:48.066789 7484 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 12 20:50:48.067147 master-0 kubenswrapper[7484]: E0312 20:50:48.067120 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd" Mar 12 20:50:48.067147 master-0 kubenswrapper[7484]: I0312 20:50:48.067139 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd" Mar 12 20:50:48.067542 master-0 kubenswrapper[7484]: E0312 20:50:48.067523 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl" Mar 12 20:50:48.067542 master-0 
kubenswrapper[7484]: I0312 20:50:48.067540 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl" Mar 12 20:50:48.067765 master-0 kubenswrapper[7484]: I0312 20:50:48.067746 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd" Mar 12 20:50:48.067825 master-0 kubenswrapper[7484]: I0312 20:50:48.067778 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl" Mar 12 20:50:48.074094 master-0 kubenswrapper[7484]: I0312 20:50:48.074031 7484 generic.go:334] "Generic (PLEG): container finished" podID="d6eace9f-a52d-4570-a932-959538e1f2bc" containerID="3f6a1c2c30754eda79aab1b24bbae4763c9876f50ed1598101e4f927c245331b" exitCode=0 Mar 12 20:50:48.109661 master-0 kubenswrapper[7484]: I0312 20:50:48.107137 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lbgrl"] Mar 12 20:50:48.109661 master-0 kubenswrapper[7484]: I0312 20:50:48.108076 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-66qvj" event={"ID":"d6eace9f-a52d-4570-a932-959538e1f2bc","Type":"ContainerDied","Data":"3f6a1c2c30754eda79aab1b24bbae4763c9876f50ed1598101e4f927c245331b"} Mar 12 20:50:48.109661 master-0 kubenswrapper[7484]: I0312 20:50:48.108119 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 12 20:50:48.109661 master-0 kubenswrapper[7484]: I0312 20:50:48.108137 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-66qvj" event={"ID":"d6eace9f-a52d-4570-a932-959538e1f2bc","Type":"ContainerStarted","Data":"898949022ca2ee68db161a1e164f2382a1563f2d65322832aa8c78dd1630a7b1"} Mar 12 20:50:48.109661 master-0 kubenswrapper[7484]: I0312 20:50:48.108152 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-xzwfp" event={"ID":"e03d34d0-f7c1-4dcf-8b84-89ad647cc10f","Type":"ContainerStarted","Data":"5e4d5da2d0ad5dc2858d68d96b482697435e191e20036d664e457ef5572ac29e"} Mar 12 20:50:48.109661 master-0 kubenswrapper[7484]: I0312 20:50:48.108476 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lbgrl" Mar 12 20:50:48.109661 master-0 kubenswrapper[7484]: I0312 20:50:48.108989 7484 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 12 20:50:48.109661 master-0 kubenswrapper[7484]: I0312 20:50:48.109043 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"a35e2486-4d5e-43e5-89c0-c562002717bb","Type":"ContainerDied","Data":"ca135dffb90b35be61bb5a8b71e0d72551616de76459ae1d27cb43dd9577ced8"} Mar 12 20:50:48.109661 master-0 kubenswrapper[7484]: I0312 20:50:48.109103 7484 scope.go:117] "RemoveContainer" containerID="a6b8b068d61d9dd724915057535283b9904d114374ac0759be8070deebe9ff86" Mar 12 20:50:48.109661 master-0 kubenswrapper[7484]: I0312 20:50:48.109121 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 12 20:50:48.113827 master-0 kubenswrapper[7484]: I0312 20:50:48.112392 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-v7qw9" Mar 12 20:50:48.205762 master-0 kubenswrapper[7484]: I0312 20:50:48.205692 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/367123ca-5a21-415c-8ac2-6d875696536b-var-lock\") pod \"installer-2-master-0\" (UID: \"367123ca-5a21-415c-8ac2-6d875696536b\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 20:50:48.205762 master-0 kubenswrapper[7484]: I0312 20:50:48.205764 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/367123ca-5a21-415c-8ac2-6d875696536b-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"367123ca-5a21-415c-8ac2-6d875696536b\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 20:50:48.205992 master-0 kubenswrapper[7484]: I0312 20:50:48.205839 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/367123ca-5a21-415c-8ac2-6d875696536b-kube-api-access\") pod \"installer-2-master-0\" (UID: \"367123ca-5a21-415c-8ac2-6d875696536b\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 20:50:48.308204 master-0 kubenswrapper[7484]: I0312 20:50:48.308055 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/367123ca-5a21-415c-8ac2-6d875696536b-kube-api-access\") pod \"installer-2-master-0\" (UID: \"367123ca-5a21-415c-8ac2-6d875696536b\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 20:50:48.308204 master-0 kubenswrapper[7484]: I0312 
20:50:48.308153 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 12 20:50:48.308204 master-0 kubenswrapper[7484]: I0312 20:50:48.308177 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-catalog-content\") pod \"redhat-operators-lbgrl\" (UID: \"2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0\") " pod="openshift-marketplace/redhat-operators-lbgrl" Mar 12 20:50:48.308204 master-0 kubenswrapper[7484]: I0312 20:50:48.308207 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rthf\" (UniqueName: \"kubernetes.io/projected/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-kube-api-access-4rthf\") pod \"redhat-operators-lbgrl\" (UID: \"2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0\") " pod="openshift-marketplace/redhat-operators-lbgrl" Mar 12 20:50:48.308470 master-0 kubenswrapper[7484]: I0312 20:50:48.308235 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 12 20:50:48.308470 master-0 kubenswrapper[7484]: I0312 20:50:48.308274 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 12 20:50:48.308470 master-0 kubenswrapper[7484]: I0312 
20:50:48.308300 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-utilities\") pod \"redhat-operators-lbgrl\" (UID: \"2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0\") " pod="openshift-marketplace/redhat-operators-lbgrl" Mar 12 20:50:48.308470 master-0 kubenswrapper[7484]: I0312 20:50:48.308325 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/367123ca-5a21-415c-8ac2-6d875696536b-var-lock\") pod \"installer-2-master-0\" (UID: \"367123ca-5a21-415c-8ac2-6d875696536b\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 20:50:48.308470 master-0 kubenswrapper[7484]: I0312 20:50:48.308371 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 12 20:50:48.308470 master-0 kubenswrapper[7484]: I0312 20:50:48.308392 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/367123ca-5a21-415c-8ac2-6d875696536b-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"367123ca-5a21-415c-8ac2-6d875696536b\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 20:50:48.308470 master-0 kubenswrapper[7484]: I0312 20:50:48.308431 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 12 20:50:48.308470 master-0 kubenswrapper[7484]: I0312 20:50:48.308457 7484 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 12 20:50:48.309634 master-0 kubenswrapper[7484]: I0312 20:50:48.309591 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/367123ca-5a21-415c-8ac2-6d875696536b-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"367123ca-5a21-415c-8ac2-6d875696536b\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 20:50:48.310022 master-0 kubenswrapper[7484]: I0312 20:50:48.309950 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/367123ca-5a21-415c-8ac2-6d875696536b-var-lock\") pod \"installer-2-master-0\" (UID: \"367123ca-5a21-415c-8ac2-6d875696536b\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 20:50:48.409916 master-0 kubenswrapper[7484]: I0312 20:50:48.409799 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 12 20:50:48.409916 master-0 kubenswrapper[7484]: I0312 20:50:48.409904 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-catalog-content\") pod \"redhat-operators-lbgrl\" (UID: \"2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0\") " pod="openshift-marketplace/redhat-operators-lbgrl" Mar 12 20:50:48.410169 master-0 kubenswrapper[7484]: I0312 20:50:48.410068 7484 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-4rthf\" (UniqueName: \"kubernetes.io/projected/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-kube-api-access-4rthf\") pod \"redhat-operators-lbgrl\" (UID: \"2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0\") " pod="openshift-marketplace/redhat-operators-lbgrl" Mar 12 20:50:48.410330 master-0 kubenswrapper[7484]: I0312 20:50:48.410277 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 12 20:50:48.410374 master-0 kubenswrapper[7484]: I0312 20:50:48.410353 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-catalog-content\") pod \"redhat-operators-lbgrl\" (UID: \"2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0\") " pod="openshift-marketplace/redhat-operators-lbgrl" Mar 12 20:50:48.410374 master-0 kubenswrapper[7484]: I0312 20:50:48.410360 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 12 20:50:48.410433 master-0 kubenswrapper[7484]: I0312 20:50:48.410356 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 12 20:50:48.410433 master-0 kubenswrapper[7484]: I0312 20:50:48.410393 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod 
\"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 12 20:50:48.410433 master-0 kubenswrapper[7484]: I0312 20:50:48.410412 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-utilities\") pod \"redhat-operators-lbgrl\" (UID: \"2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0\") " pod="openshift-marketplace/redhat-operators-lbgrl" Mar 12 20:50:48.410516 master-0 kubenswrapper[7484]: I0312 20:50:48.410467 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 12 20:50:48.410547 master-0 kubenswrapper[7484]: I0312 20:50:48.410533 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 12 20:50:48.410578 master-0 kubenswrapper[7484]: I0312 20:50:48.410561 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 12 20:50:48.410578 master-0 kubenswrapper[7484]: I0312 20:50:48.410566 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 12 20:50:48.410633 master-0 kubenswrapper[7484]: I0312 
20:50:48.410571 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 12 20:50:48.410683 master-0 kubenswrapper[7484]: I0312 20:50:48.410595 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 12 20:50:48.410683 master-0 kubenswrapper[7484]: I0312 20:50:48.410615 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 12 20:50:48.410790 master-0 kubenswrapper[7484]: I0312 20:50:48.410768 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-utilities\") pod \"redhat-operators-lbgrl\" (UID: \"2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0\") " pod="openshift-marketplace/redhat-operators-lbgrl"
Mar 12 20:50:48.433577 master-0 kubenswrapper[7484]: I0312 20:50:48.433480 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lbgrl"]
Mar 12 20:50:49.134428 master-0 kubenswrapper[7484]: I0312 20:50:49.134358 7484 generic.go:334] "Generic (PLEG): container finished" podID="4d69687f-b8a5-4643-8268-ce30df5db3bc" containerID="53a1a855e95809da5db41ddc57b03bad15e98992f9948ca3ac283e20c3052783" exitCode=0
Mar 12 20:50:49.135094 master-0 kubenswrapper[7484]: I0312 20:50:49.134438 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"4d69687f-b8a5-4643-8268-ce30df5db3bc","Type":"ContainerDied","Data":"53a1a855e95809da5db41ddc57b03bad15e98992f9948ca3ac283e20c3052783"}
Mar 12 20:50:50.446051 master-0 kubenswrapper[7484]: I0312 20:50:50.445989 7484 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Mar 12 20:50:50.642523 master-0 kubenswrapper[7484]: I0312 20:50:50.642450 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d69687f-b8a5-4643-8268-ce30df5db3bc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4d69687f-b8a5-4643-8268-ce30df5db3bc" (UID: "4d69687f-b8a5-4643-8268-ce30df5db3bc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 20:50:50.643250 master-0 kubenswrapper[7484]: I0312 20:50:50.643180 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d69687f-b8a5-4643-8268-ce30df5db3bc-kubelet-dir\") pod \"4d69687f-b8a5-4643-8268-ce30df5db3bc\" (UID: \"4d69687f-b8a5-4643-8268-ce30df5db3bc\") "
Mar 12 20:50:50.643330 master-0 kubenswrapper[7484]: I0312 20:50:50.643308 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4d69687f-b8a5-4643-8268-ce30df5db3bc-var-lock\") pod \"4d69687f-b8a5-4643-8268-ce30df5db3bc\" (UID: \"4d69687f-b8a5-4643-8268-ce30df5db3bc\") "
Mar 12 20:50:50.643572 master-0 kubenswrapper[7484]: I0312 20:50:50.643430 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d69687f-b8a5-4643-8268-ce30df5db3bc-kube-api-access\") pod \"4d69687f-b8a5-4643-8268-ce30df5db3bc\" (UID: \"4d69687f-b8a5-4643-8268-ce30df5db3bc\") "
Mar 12 20:50:50.643757 master-0 kubenswrapper[7484]: I0312 20:50:50.643713 7484 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d69687f-b8a5-4643-8268-ce30df5db3bc-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 20:50:50.644457 master-0 kubenswrapper[7484]: I0312 20:50:50.644415 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d69687f-b8a5-4643-8268-ce30df5db3bc-var-lock" (OuterVolumeSpecName: "var-lock") pod "4d69687f-b8a5-4643-8268-ce30df5db3bc" (UID: "4d69687f-b8a5-4643-8268-ce30df5db3bc"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 20:50:50.652012 master-0 kubenswrapper[7484]: I0312 20:50:50.651865 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d69687f-b8a5-4643-8268-ce30df5db3bc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4d69687f-b8a5-4643-8268-ce30df5db3bc" (UID: "4d69687f-b8a5-4643-8268-ce30df5db3bc"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 20:50:50.744457 master-0 kubenswrapper[7484]: I0312 20:50:50.744417 7484 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4d69687f-b8a5-4643-8268-ce30df5db3bc-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 12 20:50:50.744457 master-0 kubenswrapper[7484]: I0312 20:50:50.744448 7484 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d69687f-b8a5-4643-8268-ce30df5db3bc-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 12 20:50:51.153906 master-0 kubenswrapper[7484]: I0312 20:50:51.153854 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-xzwfp" event={"ID":"e03d34d0-f7c1-4dcf-8b84-89ad647cc10f","Type":"ContainerStarted","Data":"5dd1e415f7dea320798ed071f084a01d7f961a59cb235657d89f90c5a715804d"}
Mar 12 20:50:51.158435 master-0 kubenswrapper[7484]: I0312 20:50:51.158410 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"4d69687f-b8a5-4643-8268-ce30df5db3bc","Type":"ContainerDied","Data":"052a8ea937b1e18a23a6811afe7fcef8bdf2f48672ff3e7a1ee17b5ba2abf923"}
Mar 12 20:50:51.158513 master-0 kubenswrapper[7484]: I0312 20:50:51.158438 7484 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="052a8ea937b1e18a23a6811afe7fcef8bdf2f48672ff3e7a1ee17b5ba2abf923"
Mar 12 20:50:51.158513 master-0 kubenswrapper[7484]: I0312 20:50:51.158486 7484 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Mar 12 20:50:58.202524 master-0 kubenswrapper[7484]: I0312 20:50:58.202449 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-vp2hs_7623a5c6-47a9-4b75-bda8-c0a2d7c67272/openshift-controller-manager-operator/0.log"
Mar 12 20:50:58.203376 master-0 kubenswrapper[7484]: I0312 20:50:58.202558 7484 generic.go:334] "Generic (PLEG): container finished" podID="7623a5c6-47a9-4b75-bda8-c0a2d7c67272" containerID="0baf639c5d46bafa134b35ec6bda1e04194915bf6f2fc74defffc294b859ab5d" exitCode=1
Mar 12 20:50:58.203376 master-0 kubenswrapper[7484]: I0312 20:50:58.202623 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-vp2hs" event={"ID":"7623a5c6-47a9-4b75-bda8-c0a2d7c67272","Type":"ContainerDied","Data":"0baf639c5d46bafa134b35ec6bda1e04194915bf6f2fc74defffc294b859ab5d"}
Mar 12 20:50:58.203376 master-0 kubenswrapper[7484]: I0312 20:50:58.203336 7484 scope.go:117] "RemoveContainer" containerID="0baf639c5d46bafa134b35ec6bda1e04194915bf6f2fc74defffc294b859ab5d"
Mar 12 20:51:00.069606 master-0 kubenswrapper[7484]: E0312 20:51:00.069469 7484 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 20:51:01.167514 master-0 kubenswrapper[7484]: E0312 20:51:01.167449 7484 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Mar 12 20:51:01.168009 master-0 kubenswrapper[7484]: I0312 20:51:01.167961 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Mar 12 20:51:01.232232 master-0 kubenswrapper[7484]: I0312 20:51:01.232150 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"83e3c3f540d7460d25bb5d69b0cd1410029dc4e1ee6c57a3a2e2e14876dbf78a"}
Mar 12 20:51:01.235040 master-0 kubenswrapper[7484]: I0312 20:51:01.234986 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-66qvj" event={"ID":"d6eace9f-a52d-4570-a932-959538e1f2bc","Type":"ContainerStarted","Data":"37559cb1fc26e8f71d249fd47dc58f59a02dee845bd19ab0e20cc4ad87f91c1a"}
Mar 12 20:51:01.239402 master-0 kubenswrapper[7484]: I0312 20:51:01.239307 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-vp2hs_7623a5c6-47a9-4b75-bda8-c0a2d7c67272/openshift-controller-manager-operator/0.log"
Mar 12 20:51:01.239501 master-0 kubenswrapper[7484]: I0312 20:51:01.239453 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-vp2hs" event={"ID":"7623a5c6-47a9-4b75-bda8-c0a2d7c67272","Type":"ContainerStarted","Data":"1726ad62deed5adf886b68145fe6223edb7fe9f83fb593561c0b8bdb5aef13cf"}
Mar 12 20:51:01.242125 master-0 kubenswrapper[7484]: I0312 20:51:01.242067 7484 generic.go:334] "Generic (PLEG): container finished" podID="567a9a33-1a82-4c48-b541-7e0eaae11f57" containerID="ef4905400a7b4f3b7293612d78dd05ee07faf771c60f7ce597f959bf755256e4" exitCode=0
Mar 12 20:51:01.242174 master-0 kubenswrapper[7484]: I0312 20:51:01.242129 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jblsg" event={"ID":"567a9a33-1a82-4c48-b541-7e0eaae11f57","Type":"ContainerDied","Data":"ef4905400a7b4f3b7293612d78dd05ee07faf771c60f7ce597f959bf755256e4"}
Mar 12 20:51:02.248878 master-0 kubenswrapper[7484]: I0312 20:51:02.248763 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"23a10404655a12ee18bb39608a6172dc4a604cc5b8d5ad95a794929465208396"}
Mar 12 20:51:02.251188 master-0 kubenswrapper[7484]: I0312 20:51:02.251147 7484 generic.go:334] "Generic (PLEG): container finished" podID="d6eace9f-a52d-4570-a932-959538e1f2bc" containerID="37559cb1fc26e8f71d249fd47dc58f59a02dee845bd19ab0e20cc4ad87f91c1a" exitCode=0
Mar 12 20:51:02.251356 master-0 kubenswrapper[7484]: I0312 20:51:02.251220 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-66qvj" event={"ID":"d6eace9f-a52d-4570-a932-959538e1f2bc","Type":"ContainerDied","Data":"37559cb1fc26e8f71d249fd47dc58f59a02dee845bd19ab0e20cc4ad87f91c1a"}
Mar 12 20:51:02.334999 master-0 kubenswrapper[7484]: I0312 20:51:02.334840 7484 prober.go:107] "Probe failed" probeType="Liveness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Mar 12 20:51:04.168943 master-0 kubenswrapper[7484]: I0312 20:51:04.168874 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_5bec49ae-0c52-451f-8d8d-6e822cd335cc/installer/0.log"
Mar 12 20:51:04.168943 master-0 kubenswrapper[7484]: I0312 20:51:04.168944 7484 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 12 20:51:04.263569 master-0 kubenswrapper[7484]: I0312 20:51:04.263498 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_5bec49ae-0c52-451f-8d8d-6e822cd335cc/installer/0.log"
Mar 12 20:51:04.263569 master-0 kubenswrapper[7484]: I0312 20:51:04.263548 7484 generic.go:334] "Generic (PLEG): container finished" podID="5bec49ae-0c52-451f-8d8d-6e822cd335cc" containerID="2896c83cb0813f6e8f8445e2f7c57b60f7ca523d7afd776f565c9b3ac5269151" exitCode=1
Mar 12 20:51:04.263937 master-0 kubenswrapper[7484]: I0312 20:51:04.263612 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"5bec49ae-0c52-451f-8d8d-6e822cd335cc","Type":"ContainerDied","Data":"2896c83cb0813f6e8f8445e2f7c57b60f7ca523d7afd776f565c9b3ac5269151"}
Mar 12 20:51:04.263937 master-0 kubenswrapper[7484]: I0312 20:51:04.263624 7484 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 12 20:51:04.263937 master-0 kubenswrapper[7484]: I0312 20:51:04.263646 7484 scope.go:117] "RemoveContainer" containerID="2896c83cb0813f6e8f8445e2f7c57b60f7ca523d7afd776f565c9b3ac5269151"
Mar 12 20:51:04.263937 master-0 kubenswrapper[7484]: I0312 20:51:04.263635 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"5bec49ae-0c52-451f-8d8d-6e822cd335cc","Type":"ContainerDied","Data":"98878f1c22a55e47341e985f394158eb059ac971b614446c313279ea87ff3ce0"}
Mar 12 20:51:04.266448 master-0 kubenswrapper[7484]: I0312 20:51:04.266415 7484 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="803468c92847be9ff6518c968fe3dd17c7c93344e029130c1bfa6744bc5862bf" exitCode=1
Mar 12 20:51:04.266585 master-0 kubenswrapper[7484]: I0312 20:51:04.266459 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"803468c92847be9ff6518c968fe3dd17c7c93344e029130c1bfa6744bc5862bf"}
Mar 12 20:51:04.267125 master-0 kubenswrapper[7484]: I0312 20:51:04.266745 7484 scope.go:117] "RemoveContainer" containerID="803468c92847be9ff6518c968fe3dd17c7c93344e029130c1bfa6744bc5862bf"
Mar 12 20:51:04.268866 master-0 kubenswrapper[7484]: I0312 20:51:04.268833 7484 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="23a10404655a12ee18bb39608a6172dc4a604cc5b8d5ad95a794929465208396" exitCode=0
Mar 12 20:51:04.268866 master-0 kubenswrapper[7484]: I0312 20:51:04.268864 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerDied","Data":"23a10404655a12ee18bb39608a6172dc4a604cc5b8d5ad95a794929465208396"}
Mar 12 20:51:04.280882 master-0 kubenswrapper[7484]: I0312 20:51:04.280827 7484 scope.go:117] "RemoveContainer" containerID="2896c83cb0813f6e8f8445e2f7c57b60f7ca523d7afd776f565c9b3ac5269151"
Mar 12 20:51:04.281186 master-0 kubenswrapper[7484]: E0312 20:51:04.281147 7484 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2896c83cb0813f6e8f8445e2f7c57b60f7ca523d7afd776f565c9b3ac5269151\": container with ID starting with 2896c83cb0813f6e8f8445e2f7c57b60f7ca523d7afd776f565c9b3ac5269151 not found: ID does not exist" containerID="2896c83cb0813f6e8f8445e2f7c57b60f7ca523d7afd776f565c9b3ac5269151"
Mar 12 20:51:04.281283 master-0 kubenswrapper[7484]: I0312 20:51:04.281187 7484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2896c83cb0813f6e8f8445e2f7c57b60f7ca523d7afd776f565c9b3ac5269151"} err="failed to get container status \"2896c83cb0813f6e8f8445e2f7c57b60f7ca523d7afd776f565c9b3ac5269151\": rpc error: code = NotFound desc = could not find container \"2896c83cb0813f6e8f8445e2f7c57b60f7ca523d7afd776f565c9b3ac5269151\": container with ID starting with 2896c83cb0813f6e8f8445e2f7c57b60f7ca523d7afd776f565c9b3ac5269151 not found: ID does not exist"
Mar 12 20:51:04.281283 master-0 kubenswrapper[7484]: I0312 20:51:04.281213 7484 scope.go:117] "RemoveContainer" containerID="75f2edc443b69729f543241a91ed5a8e5413482100b656bdfab3d5233a2312c3"
Mar 12 20:51:04.326116 master-0 kubenswrapper[7484]: I0312 20:51:04.326041 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5bec49ae-0c52-451f-8d8d-6e822cd335cc-var-lock\") pod \"5bec49ae-0c52-451f-8d8d-6e822cd335cc\" (UID: \"5bec49ae-0c52-451f-8d8d-6e822cd335cc\") "
Mar 12 20:51:04.326116 master-0 kubenswrapper[7484]: I0312 20:51:04.326096 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5bec49ae-0c52-451f-8d8d-6e822cd335cc-kubelet-dir\") pod \"5bec49ae-0c52-451f-8d8d-6e822cd335cc\" (UID: \"5bec49ae-0c52-451f-8d8d-6e822cd335cc\") "
Mar 12 20:51:04.326476 master-0 kubenswrapper[7484]: I0312 20:51:04.326155 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5bec49ae-0c52-451f-8d8d-6e822cd335cc-kube-api-access\") pod \"5bec49ae-0c52-451f-8d8d-6e822cd335cc\" (UID: \"5bec49ae-0c52-451f-8d8d-6e822cd335cc\") "
Mar 12 20:51:04.326476 master-0 kubenswrapper[7484]: I0312 20:51:04.326193 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bec49ae-0c52-451f-8d8d-6e822cd335cc-var-lock" (OuterVolumeSpecName: "var-lock") pod "5bec49ae-0c52-451f-8d8d-6e822cd335cc" (UID: "5bec49ae-0c52-451f-8d8d-6e822cd335cc"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 20:51:04.326476 master-0 kubenswrapper[7484]: I0312 20:51:04.326235 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bec49ae-0c52-451f-8d8d-6e822cd335cc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5bec49ae-0c52-451f-8d8d-6e822cd335cc" (UID: "5bec49ae-0c52-451f-8d8d-6e822cd335cc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 20:51:04.326753 master-0 kubenswrapper[7484]: I0312 20:51:04.326707 7484 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5bec49ae-0c52-451f-8d8d-6e822cd335cc-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 12 20:51:04.326753 master-0 kubenswrapper[7484]: I0312 20:51:04.326737 7484 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5bec49ae-0c52-451f-8d8d-6e822cd335cc-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 20:51:04.329290 master-0 kubenswrapper[7484]: I0312 20:51:04.329229 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bec49ae-0c52-451f-8d8d-6e822cd335cc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5bec49ae-0c52-451f-8d8d-6e822cd335cc" (UID: "5bec49ae-0c52-451f-8d8d-6e822cd335cc"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 20:51:04.428155 master-0 kubenswrapper[7484]: I0312 20:51:04.428064 7484 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5bec49ae-0c52-451f-8d8d-6e822cd335cc-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 12 20:51:06.280979 master-0 kubenswrapper[7484]: I0312 20:51:06.280933 7484 generic.go:334] "Generic (PLEG): container finished" podID="a1a56802af72ce1aac6b5077f1695ac0" containerID="dc7d8b29ebb567785e771d22b9996a6a97141570cdafc6702bfef40b35ac45e8" exitCode=1
Mar 12 20:51:06.280979 master-0 kubenswrapper[7484]: I0312 20:51:06.280999 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerDied","Data":"dc7d8b29ebb567785e771d22b9996a6a97141570cdafc6702bfef40b35ac45e8"}
Mar 12 20:51:06.281649 master-0 kubenswrapper[7484]: I0312 20:51:06.281386 7484 scope.go:117] "RemoveContainer" containerID="dc7d8b29ebb567785e771d22b9996a6a97141570cdafc6702bfef40b35ac45e8"
Mar 12 20:51:06.283330 master-0 kubenswrapper[7484]: I0312 20:51:06.283290 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-94rll" event={"ID":"4c589179-0df4-4fe8-bfdd-965c3e7652c5","Type":"ContainerStarted","Data":"148dd2cec7b5be28f9e435862613834e20183aa464b3a40bf9588ed300d0ce75"}
Mar 12 20:51:06.285700 master-0 kubenswrapper[7484]: I0312 20:51:06.285661 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"02ac5a41fa86c8da3e61fb0bb8e9e0588bab913b801b90fc424e7e3abaf59e02"}
Mar 12 20:51:06.415889 master-0 kubenswrapper[7484]: I0312 20:51:06.415756 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 20:51:08.701471 master-0 kubenswrapper[7484]: I0312 20:51:08.701371 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 20:51:09.299796 master-0 kubenswrapper[7484]: I0312 20:51:09.299299 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"bb2ea5b36a5078a0f6bfe1f1daf8d78310cc27ab4b84afa4566e18c230d38fb8"}
Mar 12 20:51:09.300682 master-0 kubenswrapper[7484]: I0312 20:51:09.300660 7484 generic.go:334] "Generic (PLEG): container finished" podID="4c589179-0df4-4fe8-bfdd-965c3e7652c5" containerID="148dd2cec7b5be28f9e435862613834e20183aa464b3a40bf9588ed300d0ce75" exitCode=0
Mar 12 20:51:09.300747 master-0 kubenswrapper[7484]: I0312 20:51:09.300706 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-94rll" event={"ID":"4c589179-0df4-4fe8-bfdd-965c3e7652c5","Type":"ContainerDied","Data":"148dd2cec7b5be28f9e435862613834e20183aa464b3a40bf9588ed300d0ce75"}
Mar 12 20:51:09.303255 master-0 kubenswrapper[7484]: I0312 20:51:09.303218 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jblsg" event={"ID":"567a9a33-1a82-4c48-b541-7e0eaae11f57","Type":"ContainerStarted","Data":"0c5534dbb42794b6f425e8c8e0ad9ac1591e379dae676e499b252da550cb2abc"}
Mar 12 20:51:09.415744 master-0 kubenswrapper[7484]: I0312 20:51:09.415680 7484 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 12 20:51:10.070916 master-0 kubenswrapper[7484]: E0312 20:51:10.070726 7484 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 20:51:10.314048 master-0 kubenswrapper[7484]: I0312 20:51:10.313942 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-94rll" event={"ID":"4c589179-0df4-4fe8-bfdd-965c3e7652c5","Type":"ContainerStarted","Data":"d84c7fcc70e52103a86fa98f28822f43e0ef1944ace24d8c50fc60bce687ea76"}
Mar 12 20:51:11.322784 master-0 kubenswrapper[7484]: I0312 20:51:11.322706 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-66qvj" event={"ID":"d6eace9f-a52d-4570-a932-959538e1f2bc","Type":"ContainerStarted","Data":"c69f649af0c6824f2d33c2fbd681a45f59a995e1a2e538cb8c6702ef65afbbd4"}
Mar 12 20:51:13.633726 master-0 kubenswrapper[7484]: I0312 20:51:13.633637 7484 patch_prober.go:28] interesting pod/etcd-operator-5884b9cd56-xh6r9 container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body=
Mar 12 20:51:13.634641 master-0 kubenswrapper[7484]: I0312 20:51:13.633757 7484 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" podUID="5471994f-769e-4124-b7d0-01f5358fc18f" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused"
Mar 12 20:51:15.168689 master-0 kubenswrapper[7484]: I0312 20:51:15.168611 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jblsg"
Mar 12 20:51:15.168689 master-0 kubenswrapper[7484]: I0312 20:51:15.168686 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jblsg"
Mar 12 20:51:15.209235 master-0 kubenswrapper[7484]: I0312 20:51:15.209176 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jblsg"
Mar 12 20:51:15.258753 master-0 kubenswrapper[7484]: E0312 20:51:15.258711 7484 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Mar 12 20:51:15.341007 master-0 kubenswrapper[7484]: I0312 20:51:15.340939 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-94rll"
Mar 12 20:51:15.341007 master-0 kubenswrapper[7484]: I0312 20:51:15.340995 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-94rll"
Mar 12 20:51:15.350762 master-0 kubenswrapper[7484]: I0312 20:51:15.350681 7484 generic.go:334] "Generic (PLEG): container finished" podID="354f29997baa583b6238f7de9108ee10" containerID="32b57ce4e66fc70ca937a57ebca0915b26069ef8bb25e1ae1b25bda655e0ef63" exitCode=0
Mar 12 20:51:15.406592 master-0 kubenswrapper[7484]: I0312 20:51:15.406532 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-94rll"
Mar 12 20:51:15.414746 master-0 kubenswrapper[7484]: I0312 20:51:15.414648 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jblsg"
Mar 12 20:51:16.422544 master-0 kubenswrapper[7484]: I0312 20:51:16.422471 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-94rll"
Mar 12 20:51:16.750886 master-0 kubenswrapper[7484]: I0312 20:51:16.750686 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-66qvj"
Mar 12 20:51:16.751327 master-0 kubenswrapper[7484]: I0312 20:51:16.751299 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-66qvj"
Mar 12 20:51:16.800835 master-0 kubenswrapper[7484]: I0312 20:51:16.800734 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-66qvj"
Mar 12 20:51:17.430707 master-0 kubenswrapper[7484]: I0312 20:51:17.430655 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-66qvj"
Mar 12 20:51:18.204455 master-0 kubenswrapper[7484]: I0312 20:51:18.204409 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_354f29997baa583b6238f7de9108ee10/etcdctl/0.log"
Mar 12 20:51:18.204863 master-0 kubenswrapper[7484]: I0312 20:51:18.204792 7484 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Mar 12 20:51:18.226217 master-0 kubenswrapper[7484]: I0312 20:51:18.226182 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"354f29997baa583b6238f7de9108ee10\" (UID: \"354f29997baa583b6238f7de9108ee10\") "
Mar 12 20:51:18.226644 master-0 kubenswrapper[7484]: I0312 20:51:18.226347 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir" (OuterVolumeSpecName: "data-dir") pod "354f29997baa583b6238f7de9108ee10" (UID: "354f29997baa583b6238f7de9108ee10"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 20:51:18.226792 master-0 kubenswrapper[7484]: I0312 20:51:18.226717 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs" (OuterVolumeSpecName: "certs") pod "354f29997baa583b6238f7de9108ee10" (UID: "354f29997baa583b6238f7de9108ee10"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 20:51:18.226977 master-0 kubenswrapper[7484]: I0312 20:51:18.226947 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"354f29997baa583b6238f7de9108ee10\" (UID: \"354f29997baa583b6238f7de9108ee10\") "
Mar 12 20:51:18.227509 master-0 kubenswrapper[7484]: I0312 20:51:18.227477 7484 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") on node \"master-0\" DevicePath \"\""
Mar 12 20:51:18.227646 master-0 kubenswrapper[7484]: I0312 20:51:18.227625 7484 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 20:51:18.373132 master-0 kubenswrapper[7484]: I0312 20:51:18.372770 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_354f29997baa583b6238f7de9108ee10/etcdctl/0.log"
Mar 12 20:51:18.373132 master-0 kubenswrapper[7484]: I0312 20:51:18.372919 7484 generic.go:334] "Generic (PLEG): container finished" podID="354f29997baa583b6238f7de9108ee10" containerID="2345e4b4a496bb5d1af4b4d3dcfdac80e0d3cab03968a70bb1a28a27cbc4f272" exitCode=137
Mar 12 20:51:18.373132 master-0 kubenswrapper[7484]: I0312 20:51:18.373055 7484 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Mar 12 20:51:18.373132 master-0 kubenswrapper[7484]: I0312 20:51:18.373076 7484 scope.go:117] "RemoveContainer" containerID="32b57ce4e66fc70ca937a57ebca0915b26069ef8bb25e1ae1b25bda655e0ef63"
Mar 12 20:51:18.375648 master-0 kubenswrapper[7484]: I0312 20:51:18.375593 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_869e3d2a-1b5c-426f-945a-ddd44a9a5033/installer/0.log"
Mar 12 20:51:18.375776 master-0 kubenswrapper[7484]: I0312 20:51:18.375652 7484 generic.go:334] "Generic (PLEG): container finished" podID="869e3d2a-1b5c-426f-945a-ddd44a9a5033" containerID="36bfe1f3ee1124371de60181a0f2b9f61930c3b4af0a3a9413b95d937717a871" exitCode=1
Mar 12 20:51:18.375947 master-0 kubenswrapper[7484]: I0312 20:51:18.375802 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"869e3d2a-1b5c-426f-945a-ddd44a9a5033","Type":"ContainerDied","Data":"36bfe1f3ee1124371de60181a0f2b9f61930c3b4af0a3a9413b95d937717a871"}
Mar 12 20:51:18.391931 master-0 kubenswrapper[7484]: I0312 20:51:18.391884 7484 scope.go:117] "RemoveContainer" containerID="2345e4b4a496bb5d1af4b4d3dcfdac80e0d3cab03968a70bb1a28a27cbc4f272"
Mar 12 20:51:18.411350 master-0 kubenswrapper[7484]: I0312 20:51:18.411309 7484 scope.go:117] "RemoveContainer" containerID="32b57ce4e66fc70ca937a57ebca0915b26069ef8bb25e1ae1b25bda655e0ef63"
Mar 12 20:51:18.411970 master-0 kubenswrapper[7484]: E0312 20:51:18.411907 7484 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32b57ce4e66fc70ca937a57ebca0915b26069ef8bb25e1ae1b25bda655e0ef63\": container with ID starting with 32b57ce4e66fc70ca937a57ebca0915b26069ef8bb25e1ae1b25bda655e0ef63 not found: ID does not exist" containerID="32b57ce4e66fc70ca937a57ebca0915b26069ef8bb25e1ae1b25bda655e0ef63"
Mar 12 20:51:18.412071 master-0 kubenswrapper[7484]: I0312 20:51:18.411971 7484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32b57ce4e66fc70ca937a57ebca0915b26069ef8bb25e1ae1b25bda655e0ef63"} err="failed to get container status \"32b57ce4e66fc70ca937a57ebca0915b26069ef8bb25e1ae1b25bda655e0ef63\": rpc error: code = NotFound desc = could not find container \"32b57ce4e66fc70ca937a57ebca0915b26069ef8bb25e1ae1b25bda655e0ef63\": container with ID starting with 32b57ce4e66fc70ca937a57ebca0915b26069ef8bb25e1ae1b25bda655e0ef63 not found: ID does not exist"
Mar 12 20:51:18.412071 master-0 kubenswrapper[7484]: I0312 20:51:18.412004 7484 scope.go:117] "RemoveContainer" containerID="2345e4b4a496bb5d1af4b4d3dcfdac80e0d3cab03968a70bb1a28a27cbc4f272"
Mar 12 20:51:18.412494 master-0 kubenswrapper[7484]: E0312 20:51:18.412449 7484 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2345e4b4a496bb5d1af4b4d3dcfdac80e0d3cab03968a70bb1a28a27cbc4f272\": container with ID starting with 2345e4b4a496bb5d1af4b4d3dcfdac80e0d3cab03968a70bb1a28a27cbc4f272 not found: ID does not exist" containerID="2345e4b4a496bb5d1af4b4d3dcfdac80e0d3cab03968a70bb1a28a27cbc4f272"
Mar 12 20:51:18.412494 master-0 kubenswrapper[7484]: I0312 20:51:18.412478 7484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2345e4b4a496bb5d1af4b4d3dcfdac80e0d3cab03968a70bb1a28a27cbc4f272"} err="failed to get container status \"2345e4b4a496bb5d1af4b4d3dcfdac80e0d3cab03968a70bb1a28a27cbc4f272\": rpc error: code = NotFound desc = could not find container \"2345e4b4a496bb5d1af4b4d3dcfdac80e0d3cab03968a70bb1a28a27cbc4f272\": container with ID starting with 2345e4b4a496bb5d1af4b4d3dcfdac80e0d3cab03968a70bb1a28a27cbc4f272 not found: ID does not exist"
Mar 12 20:51:19.415392 master-0 kubenswrapper[7484]: I0312 20:51:19.415272 7484 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 12 20:51:19.745169 master-0 kubenswrapper[7484]: I0312 20:51:19.745081 7484 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="354f29997baa583b6238f7de9108ee10" path="/var/lib/kubelet/pods/354f29997baa583b6238f7de9108ee10/volumes"
Mar 12 20:51:19.745997 master-0 kubenswrapper[7484]: I0312 20:51:19.745946 7484 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Mar 12 20:51:19.752551 master-0 kubenswrapper[7484]: I0312 20:51:19.752498 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_869e3d2a-1b5c-426f-945a-ddd44a9a5033/installer/0.log"
Mar 12 20:51:19.752747 master-0 kubenswrapper[7484]: I0312 20:51:19.752647 7484 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Mar 12 20:51:19.849834 master-0 kubenswrapper[7484]: I0312 20:51:19.849737 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/869e3d2a-1b5c-426f-945a-ddd44a9a5033-var-lock\") pod \"869e3d2a-1b5c-426f-945a-ddd44a9a5033\" (UID: \"869e3d2a-1b5c-426f-945a-ddd44a9a5033\") "
Mar 12 20:51:19.850309 master-0 kubenswrapper[7484]: I0312 20:51:19.849970 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/869e3d2a-1b5c-426f-945a-ddd44a9a5033-var-lock" (OuterVolumeSpecName: "var-lock") pod "869e3d2a-1b5c-426f-945a-ddd44a9a5033" (UID: "869e3d2a-1b5c-426f-945a-ddd44a9a5033"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 20:51:19.850309 master-0 kubenswrapper[7484]: I0312 20:51:19.850043 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/869e3d2a-1b5c-426f-945a-ddd44a9a5033-kube-api-access\") pod \"869e3d2a-1b5c-426f-945a-ddd44a9a5033\" (UID: \"869e3d2a-1b5c-426f-945a-ddd44a9a5033\") "
Mar 12 20:51:19.850309 master-0 kubenswrapper[7484]: I0312 20:51:19.850087 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/869e3d2a-1b5c-426f-945a-ddd44a9a5033-kubelet-dir\") pod \"869e3d2a-1b5c-426f-945a-ddd44a9a5033\" (UID: \"869e3d2a-1b5c-426f-945a-ddd44a9a5033\") "
Mar 12 20:51:19.850531 master-0 kubenswrapper[7484]: I0312 20:51:19.850367 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/869e3d2a-1b5c-426f-945a-ddd44a9a5033-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "869e3d2a-1b5c-426f-945a-ddd44a9a5033" (UID: "869e3d2a-1b5c-426f-945a-ddd44a9a5033"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 20:51:19.851839 master-0 kubenswrapper[7484]: I0312 20:51:19.851768 7484 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/869e3d2a-1b5c-426f-945a-ddd44a9a5033-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 12 20:51:19.851915 master-0 kubenswrapper[7484]: I0312 20:51:19.851845 7484 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/869e3d2a-1b5c-426f-945a-ddd44a9a5033-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 20:51:19.854923 master-0 kubenswrapper[7484]: I0312 20:51:19.854855 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869e3d2a-1b5c-426f-945a-ddd44a9a5033-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "869e3d2a-1b5c-426f-945a-ddd44a9a5033" (UID: "869e3d2a-1b5c-426f-945a-ddd44a9a5033"). InnerVolumeSpecName "kube-api-access".
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 20:51:19.953691 master-0 kubenswrapper[7484]: I0312 20:51:19.953574 7484 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/869e3d2a-1b5c-426f-945a-ddd44a9a5033-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 20:51:20.071753 master-0 kubenswrapper[7484]: E0312 20:51:20.071645 7484 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 20:51:20.405518 master-0 kubenswrapper[7484]: I0312 20:51:20.405337 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_869e3d2a-1b5c-426f-945a-ddd44a9a5033/installer/0.log" Mar 12 20:51:20.405518 master-0 kubenswrapper[7484]: I0312 20:51:20.405461 7484 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 12 20:51:20.660390 master-0 kubenswrapper[7484]: E0312 20:51:20.659896 7484 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T20:51:10Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T20:51:10Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T20:51:10Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T20:51:10Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455
689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-re
lease-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"
],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e9ee63a30a9b95b5801afa36e09fc583ec2cda3c5cb3c8676e478fea016abfa1\\\"],\\\"sizeBytes\\\":470680779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda\\\"],\\\"sizeBytes\\\":468263999},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\\\"],\\\"sizeBytes\\\":465086330},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1\\\"],\\\"sizeBytes\\\":463700811},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:b714a7ada1e295b599b432f32e1fd5b74c8cdbe6fe51e95306322b25cb873914\\\"],\\\"sizeBytes\\\":458126424},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5230462066ab36e3025524e948dd33fa6f51ee29a4f91fa469bfc268568b5fd9\\\"],\\\"sizeBytes\\\":456575686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89cb093f319eaa04acfe9431b8697bffbc71ab670546f7ed257daa332165c626\\\"],\\\"sizeBytes\\\":448828105},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783\\\"],\\\"sizeBytes\\\":448041621},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf9670d0f269f8d49fd9ef4981999be195f6624a4146aa93d9201eb8acc81053\\\"],\\\"sizeBytes\\\":443271011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ceca1efee55b9fd5089428476bbc401fe73db7c0b0f5e16d4ad28ed0f0f9d43\\\"],\\\"sizeBytes\\\":438654375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ace4dcd008420277d915fe983b07bbb50fb3ab0673f28d0166424a75bc2137e7\\\"],\\\"sizeBytes\\\":411585608},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8f0fda36e9a2040dbe0537361dcd73658df4e669d846f8101a8f9f29f0be9a7\\\"],\\\"sizeBytes\\\":407347126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3\\\"],\\\"sizeBytes\\\":396521759}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 20:51:21.120028 master-0 kubenswrapper[7484]: I0312 20:51:21.119928 7484 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-9j7rx container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get 
\"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" start-of-body= Mar 12 20:51:21.120356 master-0 kubenswrapper[7484]: I0312 20:51:21.120050 7484 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" podUID="a3bebf49-1d92-4353-b84c-91ed86b7bb94" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" Mar 12 20:51:22.312665 master-0 kubenswrapper[7484]: E0312 20:51:22.312597 7484 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-2-master-0: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 20:51:22.313216 master-0 kubenswrapper[7484]: E0312 20:51:22.312723 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/367123ca-5a21-415c-8ac2-6d875696536b-kube-api-access podName:367123ca-5a21-415c-8ac2-6d875696536b nodeName:}" failed. No retries permitted until 2026-03-12 20:51:22.812694331 +0000 UTC m=+95.297963173 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/367123ca-5a21-415c-8ac2-6d875696536b-kube-api-access") pod "installer-2-master-0" (UID: "367123ca-5a21-415c-8ac2-6d875696536b") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 20:51:22.413700 master-0 kubenswrapper[7484]: E0312 20:51:22.413587 7484 projected.go:194] Error preparing data for projected volume kube-api-access-4rthf for pod openshift-marketplace/redhat-operators-lbgrl: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 20:51:22.413700 master-0 kubenswrapper[7484]: E0312 20:51:22.413700 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-kube-api-access-4rthf podName:2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0 nodeName:}" failed. No retries permitted until 2026-03-12 20:51:22.913673147 +0000 UTC m=+95.398941979 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4rthf" (UniqueName: "kubernetes.io/projected/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-kube-api-access-4rthf") pod "redhat-operators-lbgrl" (UID: "2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 20:51:22.432012 master-0 kubenswrapper[7484]: E0312 20:51:22.431851 7484 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c333326630c18 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Killing,Message:Stopping container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:50:48.05750684 +0000 UTC m=+60.542775642,LastTimestamp:2026-03-12 20:50:48.05750684 +0000 UTC m=+60.542775642,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:51:22.889641 master-0 kubenswrapper[7484]: I0312 20:51:22.889551 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/367123ca-5a21-415c-8ac2-6d875696536b-kube-api-access\") pod \"installer-2-master-0\" (UID: \"367123ca-5a21-415c-8ac2-6d875696536b\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 20:51:22.992242 master-0 kubenswrapper[7484]: I0312 20:51:22.992136 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rthf\" (UniqueName: \"kubernetes.io/projected/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-kube-api-access-4rthf\") pod 
\"redhat-operators-lbgrl\" (UID: \"2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0\") " pod="openshift-marketplace/redhat-operators-lbgrl" Mar 12 20:51:28.360663 master-0 kubenswrapper[7484]: E0312 20:51:28.360556 7484 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 12 20:51:29.415615 master-0 kubenswrapper[7484]: I0312 20:51:29.415530 7484 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 20:51:29.470951 master-0 kubenswrapper[7484]: I0312 20:51:29.470862 7484 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="48a904da460444c368cf9e0843bf61f533eb8193bac37e0aa7187d1bff30096d" exitCode=0 Mar 12 20:51:30.072297 master-0 kubenswrapper[7484]: E0312 20:51:30.072211 7484 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 20:51:30.661491 master-0 kubenswrapper[7484]: E0312 20:51:30.661329 7484 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 20:51:31.119847 master-0 kubenswrapper[7484]: I0312 20:51:31.119754 7484 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-9j7rx container/authentication-operator 
namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" start-of-body= Mar 12 20:51:31.120196 master-0 kubenswrapper[7484]: I0312 20:51:31.119871 7484 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" podUID="a3bebf49-1d92-4353-b84c-91ed86b7bb94" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" Mar 12 20:51:34.508898 master-0 kubenswrapper[7484]: I0312 20:51:34.508636 7484 generic.go:334] "Generic (PLEG): container finished" podID="a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d" containerID="a33a2903577092cf3a1f9c908ef309b6542edd2a9918f17c9c5bfb3802991a1e" exitCode=0 Mar 12 20:51:34.511578 master-0 kubenswrapper[7484]: I0312 20:51:34.511543 7484 generic.go:334] "Generic (PLEG): container finished" podID="15ebfbd8-0782-431a-88a3-83af328498d2" containerID="2e532f48874103782c7daee8f162358860ddd2173af37648f345faae82db17a2" exitCode=0 Mar 12 20:51:34.514398 master-0 kubenswrapper[7484]: I0312 20:51:34.514349 7484 generic.go:334] "Generic (PLEG): container finished" podID="07542516-49c8-4e20-9b97-798fbff850a5" containerID="31932c207919d9fa7ba649bcc3b67b43788d2b23969a14459b9233c510ac6567" exitCode=0 Mar 12 20:51:39.550851 master-0 kubenswrapper[7484]: I0312 20:51:39.550678 7484 generic.go:334] "Generic (PLEG): container finished" podID="5471994f-769e-4124-b7d0-01f5358fc18f" containerID="7ca674391c532a062d85de3aad380be9933e23e79819377498f98ef87ee56f1c" exitCode=0 Mar 12 20:51:39.784615 master-0 kubenswrapper[7484]: E0312 20:51:39.784500 7484 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod426efd5c_69e1_43e5_835a_6e1c4ef85720.slice/crio-conmon-28c691afcb8a45cb348e1216142781244b93a45eaf7cbab2716a18bf342b0dc8.scope\": RecentStats: unable to find data in memory cache]" Mar 12 20:51:40.074295 master-0 kubenswrapper[7484]: E0312 20:51:40.073945 7484 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io master-0)" Mar 12 20:51:40.074295 master-0 kubenswrapper[7484]: I0312 20:51:40.073999 7484 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 12 20:51:40.559732 master-0 kubenswrapper[7484]: I0312 20:51:40.559565 7484 generic.go:334] "Generic (PLEG): container finished" podID="4a67ecf3-823d-4948-a5cb-8bd1eb9f259c" containerID="e0a2c06e46bef70f1a83d73f16311ff0724aeeddd6bc3dab0e6a4952ddc0acb3" exitCode=0 Mar 12 20:51:40.563404 master-0 kubenswrapper[7484]: I0312 20:51:40.563361 7484 generic.go:334] "Generic (PLEG): container finished" podID="96bd86df-2101-47f5-844b-1332261c66f1" containerID="e6ccd74a2af6fdce722a0e3dca22b3f124868515fcf641e0b36f66e322f8d4c3" exitCode=0 Mar 12 20:51:40.566389 master-0 kubenswrapper[7484]: I0312 20:51:40.566360 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-48hk7_426efd5c-69e1-43e5-835a-6e1c4ef85720/approver/0.log" Mar 12 20:51:40.567213 master-0 kubenswrapper[7484]: I0312 20:51:40.567173 7484 generic.go:334] "Generic (PLEG): container finished" podID="426efd5c-69e1-43e5-835a-6e1c4ef85720" containerID="28c691afcb8a45cb348e1216142781244b93a45eaf7cbab2716a18bf342b0dc8" exitCode=1 Mar 12 20:51:40.661954 master-0 kubenswrapper[7484]: E0312 20:51:40.661845 7484 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 20:51:41.119498 master-0 kubenswrapper[7484]: I0312 20:51:41.119377 7484 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-9j7rx container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" start-of-body= Mar 12 20:51:41.119779 master-0 kubenswrapper[7484]: I0312 20:51:41.119499 7484 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" podUID="a3bebf49-1d92-4353-b84c-91ed86b7bb94" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" Mar 12 20:51:41.576461 master-0 kubenswrapper[7484]: I0312 20:51:41.576366 7484 generic.go:334] "Generic (PLEG): container finished" podID="784599a3-a2ac-46ac-a4b7-9439704646cc" containerID="ab706de1955bf19700e84d8f799385030b60c4a92c4860f12c06db2b3816fd99" exitCode=0 Mar 12 20:51:48.430779 master-0 kubenswrapper[7484]: I0312 20:51:48.430674 7484 status_manager.go:851] "Failed to get status for pod" podUID="d6eace9f-a52d-4570-a932-959538e1f2bc" pod="openshift-marketplace/redhat-marketplace-66qvj" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods redhat-marketplace-66qvj)" Mar 12 20:51:49.629139 master-0 kubenswrapper[7484]: I0312 20:51:49.629014 7484 generic.go:334] "Generic (PLEG): container finished" podID="a3bebf49-1d92-4353-b84c-91ed86b7bb94" containerID="4f12cf8d8d8d0087f11b9de5f5568886404da4081c2e2727f07a95ca8191d1c6" exitCode=0 Mar 12 20:51:50.074499 master-0 kubenswrapper[7484]: E0312 20:51:50.074371 7484 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Mar 12 20:51:50.638852 master-0 kubenswrapper[7484]: I0312 20:51:50.638730 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-62t2f_fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6/network-operator/0.log" Mar 12 20:51:50.639692 master-0 kubenswrapper[7484]: I0312 20:51:50.638858 7484 generic.go:334] "Generic (PLEG): container finished" podID="fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6" containerID="d9fa8a123cfb8c14404c75a08b2365da17bc3d4b0cf2e193ac612689b8a4fc37" exitCode=255 Mar 12 20:51:50.663247 master-0 kubenswrapper[7484]: E0312 20:51:50.663122 7484 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 20:51:53.749646 master-0 kubenswrapper[7484]: E0312 20:51:53.749553 7484 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 12 20:51:53.750919 master-0 kubenswrapper[7484]: E0312 20:51:53.749860 7484 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.017s" Mar 12 20:51:53.750919 master-0 kubenswrapper[7484]: I0312 20:51:53.749906 7484 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" Mar 12 20:51:53.750919 master-0 kubenswrapper[7484]: I0312 20:51:53.750091 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"869e3d2a-1b5c-426f-945a-ddd44a9a5033","Type":"ContainerDied","Data":"57edb20a691b07071028f2edb064ac37f76c164057bb37d7d87a25a08a74d8a6"} Mar 12 20:51:53.750919 master-0 kubenswrapper[7484]: I0312 20:51:53.750146 7484 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57edb20a691b07071028f2edb064ac37f76c164057bb37d7d87a25a08a74d8a6" Mar 12 20:51:53.750919 master-0 kubenswrapper[7484]: I0312 20:51:53.750182 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:51:53.750919 master-0 kubenswrapper[7484]: I0312 20:51:53.750450 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8" Mar 12 20:51:53.750919 master-0 kubenswrapper[7484]: I0312 20:51:53.750543 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerDied","Data":"48a904da460444c368cf9e0843bf61f533eb8193bac37e0aa7187d1bff30096d"} Mar 12 20:51:53.750919 master-0 kubenswrapper[7484]: I0312 20:51:53.750591 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-f62j6" event={"ID":"a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d","Type":"ContainerDied","Data":"a33a2903577092cf3a1f9c908ef309b6542edd2a9918f17c9c5bfb3802991a1e"} Mar 12 20:51:53.750919 master-0 kubenswrapper[7484]: I0312 20:51:53.750617 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-jwthf" event={"ID":"15ebfbd8-0782-431a-88a3-83af328498d2","Type":"ContainerDied","Data":"2e532f48874103782c7daee8f162358860ddd2173af37648f345faae82db17a2"} Mar 12 20:51:53.750919 master-0 kubenswrapper[7484]: I0312 20:51:53.750640 7484 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-qfbrj" event={"ID":"07542516-49c8-4e20-9b97-798fbff850a5","Type":"ContainerDied","Data":"31932c207919d9fa7ba649bcc3b67b43788d2b23969a14459b9233c510ac6567"} Mar 12 20:51:53.751526 master-0 kubenswrapper[7484]: I0312 20:51:53.751425 7484 scope.go:117] "RemoveContainer" containerID="31932c207919d9fa7ba649bcc3b67b43788d2b23969a14459b9233c510ac6567" Mar 12 20:51:53.754146 master-0 kubenswrapper[7484]: I0312 20:51:53.754089 7484 scope.go:117] "RemoveContainer" containerID="2e532f48874103782c7daee8f162358860ddd2173af37648f345faae82db17a2" Mar 12 20:51:53.755678 master-0 kubenswrapper[7484]: I0312 20:51:53.754734 7484 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"02ac5a41fa86c8da3e61fb0bb8e9e0588bab913b801b90fc424e7e3abaf59e02"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Mar 12 20:51:53.755678 master-0 kubenswrapper[7484]: I0312 20:51:53.754883 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" containerID="cri-o://02ac5a41fa86c8da3e61fb0bb8e9e0588bab913b801b90fc424e7e3abaf59e02" gracePeriod=30 Mar 12 20:51:53.756197 master-0 kubenswrapper[7484]: I0312 20:51:53.756069 7484 scope.go:117] "RemoveContainer" containerID="a33a2903577092cf3a1f9c908ef309b6542edd2a9918f17c9c5bfb3802991a1e" Mar 12 20:51:53.759048 master-0 kubenswrapper[7484]: I0312 20:51:53.757991 7484 scope.go:117] "RemoveContainer" containerID="4f12cf8d8d8d0087f11b9de5f5568886404da4081c2e2727f07a95ca8191d1c6" Mar 12 20:51:53.767108 master-0 kubenswrapper[7484]: I0312 20:51:53.765457 7484 mirror_client.go:130] 
"Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 12 20:51:54.683942 master-0 kubenswrapper[7484]: I0312 20:51:54.683855 7484 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="02ac5a41fa86c8da3e61fb0bb8e9e0588bab913b801b90fc424e7e3abaf59e02" exitCode=2 Mar 12 20:51:56.435172 master-0 kubenswrapper[7484]: E0312 20:51:56.434948 7484 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{redhat-marketplace-66qvj.189c333327afc1c7 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-66qvj,UID:d6eace9f-a52d-4570-a932-959538e1f2bc,APIVersion:v1,ResourceVersion:9033,FieldPath:spec.initContainers{extract-content},},Reason:Pulling,Message:Pulling image \"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:50:48.079311303 +0000 UTC m=+60.564580105,LastTimestamp:2026-03-12 20:50:48.079311303 +0000 UTC m=+60.564580105,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:51:56.893777 master-0 kubenswrapper[7484]: E0312 20:51:56.893676 7484 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-2-master-0: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 20:51:56.894131 master-0 kubenswrapper[7484]: E0312 20:51:56.893836 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/367123ca-5a21-415c-8ac2-6d875696536b-kube-api-access podName:367123ca-5a21-415c-8ac2-6d875696536b nodeName:}" failed. 
No retries permitted until 2026-03-12 20:51:57.893777876 +0000 UTC m=+130.379046718 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/367123ca-5a21-415c-8ac2-6d875696536b-kube-api-access") pod "installer-2-master-0" (UID: "367123ca-5a21-415c-8ac2-6d875696536b") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 20:51:56.995750 master-0 kubenswrapper[7484]: E0312 20:51:56.995670 7484 projected.go:194] Error preparing data for projected volume kube-api-access-4rthf for pod openshift-marketplace/redhat-operators-lbgrl: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 20:51:56.996041 master-0 kubenswrapper[7484]: E0312 20:51:56.995796 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-kube-api-access-4rthf podName:2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0 nodeName:}" failed. No retries permitted until 2026-03-12 20:51:57.995767078 +0000 UTC m=+130.481035920 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4rthf" (UniqueName: "kubernetes.io/projected/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-kube-api-access-4rthf") pod "redhat-operators-lbgrl" (UID: "2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 20:51:57.959298 master-0 kubenswrapper[7484]: I0312 20:51:57.959062 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/367123ca-5a21-415c-8ac2-6d875696536b-kube-api-access\") pod \"installer-2-master-0\" (UID: \"367123ca-5a21-415c-8ac2-6d875696536b\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 20:51:58.060273 master-0 kubenswrapper[7484]: I0312 20:51:58.060183 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rthf\" (UniqueName: \"kubernetes.io/projected/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-kube-api-access-4rthf\") pod \"redhat-operators-lbgrl\" (UID: \"2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0\") " pod="openshift-marketplace/redhat-operators-lbgrl" Mar 12 20:52:00.275464 master-0 kubenswrapper[7484]: E0312 20:52:00.275338 7484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Mar 12 20:52:00.664388 master-0 kubenswrapper[7484]: E0312 20:52:00.664016 7484 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 20:52:00.664722 master-0 kubenswrapper[7484]: E0312 20:52:00.664696 7484 
kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 20:52:03.756252 master-0 kubenswrapper[7484]: I0312 20:52:03.756160 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_954fe7f9-e138-49ab-ab8e-504b75914100/installer/0.log" Mar 12 20:52:03.757134 master-0 kubenswrapper[7484]: I0312 20:52:03.756263 7484 generic.go:334] "Generic (PLEG): container finished" podID="954fe7f9-e138-49ab-ab8e-504b75914100" containerID="41e5296df7c3d4b1110f31058e02c84e5cd9852b203025b79d16be32d4b3de88" exitCode=1 Mar 12 20:52:06.767513 master-0 kubenswrapper[7484]: E0312 20:52:06.767312 7484 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 12 20:52:07.786778 master-0 kubenswrapper[7484]: I0312 20:52:07.786704 7484 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="d87061e77c3511fa3d10d439abd7fc19b87e09c759be9ed2d0d6d0851d1c2c5d" exitCode=0 Mar 12 20:52:10.677484 master-0 kubenswrapper[7484]: E0312 20:52:10.677331 7484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Mar 12 20:52:13.632921 master-0 kubenswrapper[7484]: I0312 20:52:13.632760 7484 patch_prober.go:28] interesting pod/etcd-operator-5884b9cd56-xh6r9 container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Mar 12 20:52:13.633584 master-0 kubenswrapper[7484]: I0312 20:52:13.632914 7484 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" podUID="5471994f-769e-4124-b7d0-01f5358fc18f" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Mar 12 20:52:16.849931 master-0 kubenswrapper[7484]: I0312 20:52:16.849800 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-qpf68_2b71f537-1cc2-4645-8e50-23941635457c/ingress-operator/0.log" Mar 12 20:52:16.850784 master-0 kubenswrapper[7484]: I0312 20:52:16.849925 7484 generic.go:334] "Generic (PLEG): container finished" podID="2b71f537-1cc2-4645-8e50-23941635457c" containerID="ae373579849ec0d4a33d66c2a3f6f43fccdff39968b29197dcdc4792d7cd63f3" exitCode=1 Mar 12 20:52:20.726098 master-0 kubenswrapper[7484]: E0312 20:52:20.725539 7484 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T20:52:10Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T20:52:10Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T20:52:10Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T20:52:10Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1fce8b5c6b0206ecb4ddc7de47062bed853b88d4e34415e9e5a2a6bc99cf6aad\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:8bd0ffcb6caac4a5d03346b5f7cdfaf2f6f
9f9d0a30deff8f216e6cb63b0ee75\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1282704097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:08bf2da4079dafb9d9fc0718c48ed509adab6b030e9c85e3bbd21d2702ab894e\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:cf0470f46da209c10a63329feddb7afca3d04a9084fbf1a0755a3302e5c102ca\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221753567},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017
eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"na
mes\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674
674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e9ee63a30a9b95b5801afa36e09fc583ec2cda3c5cb3c8676e478fea016abfa1\\\"],\\\"sizeBytes\\\":470680779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda\\\"],\\\"sizeBytes\\\":468263999},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\\\"],\\\"sizeBytes\\\":465086330},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1\\\"],\\\"sizeBytes\\\":463700811},{\\\"names\\\":[\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:b714a7ada1e295b599b432f32e1fd5b74c8cdbe6fe51e95306322b25cb873914\\\"],\\\"sizeBytes\\\":458126424},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5230462066ab36e3025524e948dd33fa6f51ee29a4f91fa469bfc268568b5fd9\\\"],\\\"sizeBytes\\\":456575686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89cb093f319eaa04acfe9431b8697bffbc71ab670546f7ed257daa332165c626\\\"],\\\"sizeBytes\\\":448828105},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783\\\"],\\\"sizeBytes\\\":448041621},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf9670d0f269f8d49fd9ef4981999be195f6624a4146aa93d9201eb8acc81053\\\"],\\\"sizeBytes\\\":443271011}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 20:52:21.479729 master-0 kubenswrapper[7484]: E0312 20:52:21.479520 7484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Mar 12 20:52:27.768957 master-0 kubenswrapper[7484]: E0312 20:52:27.768802 7484 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" 
pod="openshift-etcd/etcd-master-0-master-0" Mar 12 20:52:27.770082 master-0 kubenswrapper[7484]: E0312 20:52:27.769079 7484 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.018s" Mar 12 20:52:27.780967 master-0 kubenswrapper[7484]: I0312 20:52:27.780842 7484 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 12 20:52:30.438106 master-0 kubenswrapper[7484]: E0312 20:52:30.437885 7484 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{control-plane-machine-set-operator-6686554ddc-xzwfp.189c3333a96d09b5 openshift-machine-api 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-api,Name:control-plane-machine-set-operator-6686554ddc-xzwfp,UID:e03d34d0-f7c1-4dcf-8b84-89ad647cc10f,APIVersion:v1,ResourceVersion:9043,FieldPath:spec.containers{control-plane-machine-set-operator},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e9ee63a30a9b95b5801afa36e09fc583ec2cda3c5cb3c8676e478fea016abfa1\" in 3.091s (3.091s including waiting). 
Image size: 470680779 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:50:50.255976885 +0000 UTC m=+62.741245687,LastTimestamp:2026-03-12 20:50:50.255976885 +0000 UTC m=+62.741245687,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:52:30.727056 master-0 kubenswrapper[7484]: E0312 20:52:30.726785 7484 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 20:52:31.962695 master-0 kubenswrapper[7484]: E0312 20:52:31.962584 7484 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-2-master-0: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 20:52:31.963720 master-0 kubenswrapper[7484]: E0312 20:52:31.962720 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/367123ca-5a21-415c-8ac2-6d875696536b-kube-api-access podName:367123ca-5a21-415c-8ac2-6d875696536b nodeName:}" failed. No retries permitted until 2026-03-12 20:52:33.962691197 +0000 UTC m=+166.447960029 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/367123ca-5a21-415c-8ac2-6d875696536b-kube-api-access") pod "installer-2-master-0" (UID: "367123ca-5a21-415c-8ac2-6d875696536b") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 20:52:31.980059 master-0 kubenswrapper[7484]: I0312 20:52:31.979988 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-vp2hs_7623a5c6-47a9-4b75-bda8-c0a2d7c67272/openshift-controller-manager-operator/1.log" Mar 12 20:52:31.981515 master-0 kubenswrapper[7484]: I0312 20:52:31.981457 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-vp2hs_7623a5c6-47a9-4b75-bda8-c0a2d7c67272/openshift-controller-manager-operator/0.log" Mar 12 20:52:31.981656 master-0 kubenswrapper[7484]: I0312 20:52:31.981537 7484 generic.go:334] "Generic (PLEG): container finished" podID="7623a5c6-47a9-4b75-bda8-c0a2d7c67272" containerID="1726ad62deed5adf886b68145fe6223edb7fe9f83fb593561c0b8bdb5aef13cf" exitCode=255 Mar 12 20:52:32.064113 master-0 kubenswrapper[7484]: E0312 20:52:32.064016 7484 projected.go:194] Error preparing data for projected volume kube-api-access-4rthf for pod openshift-marketplace/redhat-operators-lbgrl: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 20:52:32.064391 master-0 kubenswrapper[7484]: E0312 20:52:32.064135 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-kube-api-access-4rthf podName:2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0 nodeName:}" failed. No retries permitted until 2026-03-12 20:52:34.064107351 +0000 UTC m=+166.549376183 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4rthf" (UniqueName: "kubernetes.io/projected/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-kube-api-access-4rthf") pod "redhat-operators-lbgrl" (UID: "2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 20:52:33.080828 master-0 kubenswrapper[7484]: E0312 20:52:33.080657 7484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Mar 12 20:52:33.990859 master-0 kubenswrapper[7484]: I0312 20:52:33.990681 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/367123ca-5a21-415c-8ac2-6d875696536b-kube-api-access\") pod \"installer-2-master-0\" (UID: \"367123ca-5a21-415c-8ac2-6d875696536b\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 20:52:34.091945 master-0 kubenswrapper[7484]: I0312 20:52:34.091787 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rthf\" (UniqueName: \"kubernetes.io/projected/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-kube-api-access-4rthf\") pod \"redhat-operators-lbgrl\" (UID: \"2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0\") " pod="openshift-marketplace/redhat-operators-lbgrl" Mar 12 20:52:36.015176 master-0 kubenswrapper[7484]: I0312 20:52:36.015093 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-8fk8w_d4a162d4-8086-4bcf-854d-7e6cd37fd4c7/snapshot-controller/0.log" Mar 12 20:52:36.016016 master-0 kubenswrapper[7484]: I0312 20:52:36.015188 7484 generic.go:334] "Generic (PLEG): container finished" 
podID="d4a162d4-8086-4bcf-854d-7e6cd37fd4c7" containerID="e29fe78e5f8c5908626647267abeb52f63244162e122261e67a929d3a95210d9" exitCode=1 Mar 12 20:52:37.024745 master-0 kubenswrapper[7484]: I0312 20:52:37.024666 7484 generic.go:334] "Generic (PLEG): container finished" podID="e624e623-6d59-444d-b548-165fa5fd2581" containerID="2d7932f9200cfcc46a818b87f2e758dc323d7be1734436d6a1a8927b3aea1adf" exitCode=0 Mar 12 20:52:40.727338 master-0 kubenswrapper[7484]: E0312 20:52:40.727177 7484 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 20:52:40.765678 master-0 kubenswrapper[7484]: I0312 20:52:40.765574 7484 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-hxqgw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.9:8080/healthz\": dial tcp 10.128.0.9:8080: connect: connection refused" start-of-body= Mar 12 20:52:40.765678 master-0 kubenswrapper[7484]: I0312 20:52:40.765655 7484 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" podUID="e624e623-6d59-444d-b548-165fa5fd2581" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.9:8080/healthz\": dial tcp 10.128.0.9:8080: connect: connection refused" Mar 12 20:52:40.766022 master-0 kubenswrapper[7484]: I0312 20:52:40.765667 7484 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-hxqgw container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.9:8080/healthz\": dial tcp 10.128.0.9:8080: connect: connection refused" start-of-body= Mar 12 20:52:40.766022 master-0 kubenswrapper[7484]: I0312 20:52:40.765762 7484 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" podUID="e624e623-6d59-444d-b548-165fa5fd2581" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.9:8080/healthz\": dial tcp 10.128.0.9:8080: connect: connection refused" Mar 12 20:52:46.282328 master-0 kubenswrapper[7484]: E0312 20:52:46.282212 7484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Mar 12 20:52:48.432744 master-0 kubenswrapper[7484]: I0312 20:52:48.432555 7484 status_manager.go:851] "Failed to get status for pod" podUID="367123ca-5a21-415c-8ac2-6d875696536b" pod="openshift-kube-controller-manager/installer-2-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-2-master-0)" Mar 12 20:52:50.728447 master-0 kubenswrapper[7484]: E0312 20:52:50.728331 7484 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 20:52:50.766441 master-0 kubenswrapper[7484]: I0312 20:52:50.766299 7484 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-hxqgw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.9:8080/healthz\": dial tcp 10.128.0.9:8080: connect: connection refused" start-of-body= Mar 12 20:52:50.766441 master-0 kubenswrapper[7484]: I0312 20:52:50.766385 7484 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-hxqgw container/marketplace-operator namespace/openshift-marketplace: Liveness probe 
status=failure output="Get \"http://10.128.0.9:8080/healthz\": dial tcp 10.128.0.9:8080: connect: connection refused" start-of-body= Mar 12 20:52:50.766441 master-0 kubenswrapper[7484]: I0312 20:52:50.766414 7484 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" podUID="e624e623-6d59-444d-b548-165fa5fd2581" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.9:8080/healthz\": dial tcp 10.128.0.9:8080: connect: connection refused" Mar 12 20:52:50.766938 master-0 kubenswrapper[7484]: I0312 20:52:50.766464 7484 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" podUID="e624e623-6d59-444d-b548-165fa5fd2581" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.9:8080/healthz\": dial tcp 10.128.0.9:8080: connect: connection refused" Mar 12 20:52:51.104718 master-0 kubenswrapper[7484]: E0312 20:52:51.104622 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-kube-controller-manager/installer-2-master-0" podUID="367123ca-5a21-415c-8ac2-6d875696536b" Mar 12 20:52:51.136304 master-0 kubenswrapper[7484]: I0312 20:52:51.136205 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 20:52:51.163999 master-0 kubenswrapper[7484]: E0312 20:52:51.163875 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-4rthf], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-marketplace/redhat-operators-lbgrl" podUID="2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0" Mar 12 20:52:52.141917 master-0 kubenswrapper[7484]: I0312 20:52:52.141771 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lbgrl" Mar 12 20:52:55.163606 master-0 kubenswrapper[7484]: I0312 20:52:55.163497 7484 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="1b0c3f4b3caa0d5feb808a3612fec0d5e14e38edd6b5d67620e75cb7f7990bd6" exitCode=1 Mar 12 20:52:57.178597 master-0 kubenswrapper[7484]: I0312 20:52:57.178539 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-hdd4n_8b96dd10-18a0-49f8-b488-63fc2b23da39/manager/0.log" Mar 12 20:52:57.178597 master-0 kubenswrapper[7484]: I0312 20:52:57.178584 7484 generic.go:334] "Generic (PLEG): container finished" podID="8b96dd10-18a0-49f8-b488-63fc2b23da39" containerID="60173c0f9984162f24ad65c25f3ae119353e5fb646ea28da5079828f5c237197" exitCode=1 Mar 12 20:52:57.180398 master-0 kubenswrapper[7484]: I0312 20:52:57.180366 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-zgjqw_cf33c432-db42-4c6d-8ee4-f089e5bf8203/manager/0.log" Mar 12 20:52:57.180865 master-0 kubenswrapper[7484]: I0312 20:52:57.180822 7484 generic.go:334] "Generic (PLEG): container finished" podID="cf33c432-db42-4c6d-8ee4-f089e5bf8203" containerID="5932e7f75755d53b1d311f0b9e66cf21d66d861e9615083a39ac924565528bfd" exitCode=1 Mar 12 20:53:00.728931 master-0 kubenswrapper[7484]: E0312 20:53:00.728851 7484 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 20:53:00.728931 master-0 kubenswrapper[7484]: E0312 20:53:00.728924 7484 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 20:53:00.765946 master-0 kubenswrapper[7484]: I0312 
20:53:00.765861 7484 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-hxqgw container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.9:8080/healthz\": dial tcp 10.128.0.9:8080: connect: connection refused" start-of-body= Mar 12 20:53:00.766169 master-0 kubenswrapper[7484]: I0312 20:53:00.765956 7484 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" podUID="e624e623-6d59-444d-b548-165fa5fd2581" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.9:8080/healthz\": dial tcp 10.128.0.9:8080: connect: connection refused" Mar 12 20:53:00.766169 master-0 kubenswrapper[7484]: I0312 20:53:00.766006 7484 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-hxqgw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.9:8080/healthz\": dial tcp 10.128.0.9:8080: connect: connection refused" start-of-body= Mar 12 20:53:00.766310 master-0 kubenswrapper[7484]: I0312 20:53:00.766155 7484 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" podUID="e624e623-6d59-444d-b548-165fa5fd2581" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.9:8080/healthz\": dial tcp 10.128.0.9:8080: connect: connection refused" Mar 12 20:53:01.784595 master-0 kubenswrapper[7484]: E0312 20:53:01.784472 7484 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 12 20:53:01.785652 master-0 kubenswrapper[7484]: E0312 20:53:01.784696 7484 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.016s" Mar 12 20:53:01.785652 master-0 
kubenswrapper[7484]: I0312 20:53:01.784771 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" Mar 12 20:53:01.785652 master-0 kubenswrapper[7484]: I0312 20:53:01.784851 7484 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" Mar 12 20:53:01.785652 master-0 kubenswrapper[7484]: I0312 20:53:01.785562 7484 scope.go:117] "RemoveContainer" containerID="2d7932f9200cfcc46a818b87f2e758dc323d7be1734436d6a1a8927b3aea1adf" Mar 12 20:53:01.796934 master-0 kubenswrapper[7484]: I0312 20:53:01.796875 7484 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 12 20:53:02.683685 master-0 kubenswrapper[7484]: E0312 20:53:02.683562 7484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 20:53:04.441333 master-0 kubenswrapper[7484]: E0312 20:53:04.441185 7484 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{control-plane-machine-set-operator-6686554ddc-xzwfp.189c3333b2365287 openshift-machine-api 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-api,Name:control-plane-machine-set-operator-6686554ddc-xzwfp,UID:e03d34d0-f7c1-4dcf-8b84-89ad647cc10f,APIVersion:v1,ResourceVersion:9043,FieldPath:spec.containers{control-plane-machine-set-operator},},Reason:Created,Message:Created container: control-plane-machine-set-operator,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:50:50.403385991 +0000 UTC 
m=+62.888654783,LastTimestamp:2026-03-12 20:50:50.403385991 +0000 UTC m=+62.888654783,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:53:06.580021 master-0 kubenswrapper[7484]: I0312 20:53:06.579901 7484 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-zgjqw container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused" start-of-body= Mar 12 20:53:06.581174 master-0 kubenswrapper[7484]: I0312 20:53:06.580057 7484 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" podUID="cf33c432-db42-4c6d-8ee4-f089e5bf8203" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused" Mar 12 20:53:06.643102 master-0 kubenswrapper[7484]: I0312 20:53:06.643015 7484 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-hdd4n container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.40:8081/readyz\": dial tcp 10.128.0.40:8081: connect: connection refused" start-of-body= Mar 12 20:53:06.643677 master-0 kubenswrapper[7484]: I0312 20:53:06.643622 7484 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" podUID="8b96dd10-18a0-49f8-b488-63fc2b23da39" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.40:8081/readyz\": dial tcp 10.128.0.40:8081: connect: connection refused" Mar 12 20:53:07.995351 master-0 kubenswrapper[7484]: E0312 20:53:07.995280 7484 projected.go:194] Error preparing data for projected volume kube-api-access for pod 
openshift-kube-controller-manager/installer-2-master-0: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 20:53:07.996388 master-0 kubenswrapper[7484]: E0312 20:53:07.995444 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/367123ca-5a21-415c-8ac2-6d875696536b-kube-api-access podName:367123ca-5a21-415c-8ac2-6d875696536b nodeName:}" failed. No retries permitted until 2026-03-12 20:53:11.995403334 +0000 UTC m=+204.480672176 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/367123ca-5a21-415c-8ac2-6d875696536b-kube-api-access") pod "installer-2-master-0" (UID: "367123ca-5a21-415c-8ac2-6d875696536b") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 20:53:08.096709 master-0 kubenswrapper[7484]: E0312 20:53:08.096597 7484 projected.go:194] Error preparing data for projected volume kube-api-access-4rthf for pod openshift-marketplace/redhat-operators-lbgrl: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 20:53:08.098067 master-0 kubenswrapper[7484]: E0312 20:53:08.098006 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-kube-api-access-4rthf podName:2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0 nodeName:}" failed. No retries permitted until 2026-03-12 20:53:12.097960353 +0000 UTC m=+204.583229195 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4rthf" (UniqueName: "kubernetes.io/projected/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-kube-api-access-4rthf") pod "redhat-operators-lbgrl" (UID: "2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 20:53:12.055313 master-0 kubenswrapper[7484]: I0312 20:53:12.055095 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/367123ca-5a21-415c-8ac2-6d875696536b-kube-api-access\") pod \"installer-2-master-0\" (UID: \"367123ca-5a21-415c-8ac2-6d875696536b\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 20:53:12.157471 master-0 kubenswrapper[7484]: I0312 20:53:12.157368 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rthf\" (UniqueName: \"kubernetes.io/projected/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-kube-api-access-4rthf\") pod \"redhat-operators-lbgrl\" (UID: \"2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0\") " pod="openshift-marketplace/redhat-operators-lbgrl" Mar 12 20:53:13.633418 master-0 kubenswrapper[7484]: I0312 20:53:13.633334 7484 patch_prober.go:28] interesting pod/etcd-operator-5884b9cd56-xh6r9 container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" start-of-body= Mar 12 20:53:13.634455 master-0 kubenswrapper[7484]: I0312 20:53:13.633430 7484 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" podUID="5471994f-769e-4124-b7d0-01f5358fc18f" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": dial tcp 10.128.0.16:8443: connect: connection refused" Mar 12 20:53:16.580483 master-0 kubenswrapper[7484]: I0312 
20:53:16.580356 7484 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-zgjqw container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.39:8081/healthz\": dial tcp 10.128.0.39:8081: connect: connection refused" start-of-body= Mar 12 20:53:16.580483 master-0 kubenswrapper[7484]: I0312 20:53:16.580414 7484 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-zgjqw container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused" start-of-body= Mar 12 20:53:16.581444 master-0 kubenswrapper[7484]: I0312 20:53:16.580483 7484 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" podUID="cf33c432-db42-4c6d-8ee4-f089e5bf8203" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.39:8081/healthz\": dial tcp 10.128.0.39:8081: connect: connection refused" Mar 12 20:53:16.581444 master-0 kubenswrapper[7484]: I0312 20:53:16.580531 7484 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" podUID="cf33c432-db42-4c6d-8ee4-f089e5bf8203" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused" Mar 12 20:53:16.642908 master-0 kubenswrapper[7484]: I0312 20:53:16.642684 7484 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-hdd4n container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.40:8081/healthz\": dial tcp 10.128.0.40:8081: connect: connection refused" start-of-body= Mar 12 20:53:16.643383 master-0 kubenswrapper[7484]: I0312 20:53:16.642944 7484 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" podUID="8b96dd10-18a0-49f8-b488-63fc2b23da39" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.40:8081/healthz\": dial tcp 10.128.0.40:8081: connect: connection refused" Mar 12 20:53:16.643383 master-0 kubenswrapper[7484]: I0312 20:53:16.642684 7484 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-hdd4n container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.40:8081/readyz\": dial tcp 10.128.0.40:8081: connect: connection refused" start-of-body= Mar 12 20:53:16.643383 master-0 kubenswrapper[7484]: I0312 20:53:16.643041 7484 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" podUID="8b96dd10-18a0-49f8-b488-63fc2b23da39" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.40:8081/readyz\": dial tcp 10.128.0.40:8081: connect: connection refused" Mar 12 20:53:19.684760 master-0 kubenswrapper[7484]: E0312 20:53:19.684483 7484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="7s" Mar 12 20:53:20.339646 master-0 kubenswrapper[7484]: I0312 20:53:20.339542 7484 generic.go:334] "Generic (PLEG): container finished" podID="d862a346-ec4d-46f6-a3e2-ea8759ea0111" containerID="36186e847a1c7ad015db1d456eab6f7fe52723f5ba9629a902598f1f75fcfbe7" exitCode=0 Mar 12 20:53:20.915876 master-0 kubenswrapper[7484]: E0312 20:53:20.915550 7484 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T20:53:10Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T20:53:10Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T20:53:10Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T20:53:10Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1fce8b5c6b0206ecb4ddc7de47062bed853b88d4e34415e9e5a2a6bc99cf6aad\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:8bd0ffcb6caac4a5d03346b5f7cdfaf2f6f9f9d0a30deff8f216e6cb63b0ee75\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1282704097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:08bf2da4079dafb9d9fc0718c48ed509adab6b030e9c85e3bbd21d2702ab894e\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:cf0470f46da209c10a63329feddb7afca3d04a9084fbf1a0755a3302e5c102ca\\\",\\\"registry.redhat.io/redhat/commun
ity-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221753567},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\
\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e9ee63a30a9b95b5801afa36e09fc583ec2cda3c5cb3c8676e478fea016abfa1\\\"],\\\"sizeBytes\\\":470680779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda\\\"],\\\"sizeBytes\\\":468263999},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\\\"],\\\"sizeBytes\\\":465086330},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1\\\"],\\\"sizeBytes\\\":463700811},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b714a7ada1e295b599b432f32e1fd5b74c8cdbe6fe51e95306322b25cb873914\\\"],\\\"sizeBytes\\\":458126424},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5230462066ab36e3025524e948dd33fa6f51ee29a4f91fa469bfc268568b5fd9\\\"],\\\"sizeBytes\\\":456575686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89cb093f319eaa04acfe9431b8697bffbc71ab670546f7ed257daa332165c626\\\"],\\\"sizeBytes\\\":448828105},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783\\\"],\\\"sizeBytes\\\":448041621},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf9670d0f269f8d49fd9ef4981999be195f6624a4146aa93d9201eb8acc81053\\\"],\\\"sizeBytes\\\":443271011}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Mar 12 20:53:22.354883 master-0 kubenswrapper[7484]: I0312 20:53:22.354760 7484 generic.go:334] "Generic (PLEG): container finished" podID="6d28f095-032b-47d4-b808-1502deeffee5" containerID="90f6df2cd5378a3ebab865fb719c69e38e48496ca3cd635c80da9e8ec49ce434" exitCode=0 Mar 12 20:53:23.604179 master-0 kubenswrapper[7484]: I0312 20:53:23.604105 7484 patch_prober.go:28] interesting pod/controller-manager-6dfdd9fb89-wjn86 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body= Mar 12 20:53:23.604997 master-0 kubenswrapper[7484]: I0312 20:53:23.604198 7484 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" podUID="6d28f095-032b-47d4-b808-1502deeffee5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" Mar 12 20:53:23.604997 master-0 kubenswrapper[7484]: I0312 20:53:23.604944 7484 patch_prober.go:28] interesting pod/controller-manager-6dfdd9fb89-wjn86 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body= Mar 12 20:53:23.605290 master-0 kubenswrapper[7484]: I0312 20:53:23.605038 7484 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" podUID="6d28f095-032b-47d4-b808-1502deeffee5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" Mar 12 20:53:25.377054 master-0 kubenswrapper[7484]: I0312 20:53:25.376928 7484 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-7f65c457f5-qfbrj_07542516-49c8-4e20-9b97-798fbff850a5/kube-storage-version-migrator-operator/1.log" Mar 12 20:53:25.378218 master-0 kubenswrapper[7484]: I0312 20:53:25.377683 7484 generic.go:334] "Generic (PLEG): container finished" podID="07542516-49c8-4e20-9b97-798fbff850a5" containerID="ded70f8c305f91b4cd97482dbdf153ec9254b0cfdc370f5b14f5e7f5ee654d15" exitCode=255 Mar 12 20:53:25.381169 master-0 kubenswrapper[7484]: I0312 20:53:25.381117 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-69b6fc6b88-f62j6_a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d/service-ca-operator/1.log" Mar 12 20:53:25.381851 master-0 kubenswrapper[7484]: I0312 20:53:25.381756 7484 generic.go:334] "Generic (PLEG): container finished" podID="a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d" containerID="47c0e0d21aabebc91fcbee939e9b068c6a5287ab73aa0a38e830a0c4a7aa5051" exitCode=255 Mar 12 20:53:25.384249 master-0 kubenswrapper[7484]: I0312 20:53:25.384187 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-9j7rx_a3bebf49-1d92-4353-b84c-91ed86b7bb94/authentication-operator/1.log" Mar 12 20:53:25.384734 master-0 kubenswrapper[7484]: I0312 20:53:25.384667 7484 generic.go:334] "Generic (PLEG): container finished" podID="a3bebf49-1d92-4353-b84c-91ed86b7bb94" containerID="65753e4931b3081b10e537c0401b4155fdbc512202e120631ec6b784c53ee11c" exitCode=255 Mar 12 20:53:25.386696 master-0 kubenswrapper[7484]: I0312 20:53:25.386641 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-799b6db4d7-jwthf_15ebfbd8-0782-431a-88a3-83af328498d2/openshift-apiserver-operator/1.log" Mar 12 20:53:25.387169 master-0 kubenswrapper[7484]: I0312 20:53:25.387108 7484 generic.go:334] "Generic (PLEG): container finished" 
podID="15ebfbd8-0782-431a-88a3-83af328498d2" containerID="ac220be40864e46bcbfeebc937d699a58348f8eb40ed949885e1f1fa2e71ed44" exitCode=255 Mar 12 20:53:26.580496 master-0 kubenswrapper[7484]: I0312 20:53:26.580378 7484 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-zgjqw container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused" start-of-body= Mar 12 20:53:26.580496 master-0 kubenswrapper[7484]: I0312 20:53:26.580483 7484 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" podUID="cf33c432-db42-4c6d-8ee4-f089e5bf8203" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused" Mar 12 20:53:26.643035 master-0 kubenswrapper[7484]: I0312 20:53:26.642919 7484 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-hdd4n container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.40:8081/readyz\": dial tcp 10.128.0.40:8081: connect: connection refused" start-of-body= Mar 12 20:53:26.643035 master-0 kubenswrapper[7484]: I0312 20:53:26.643025 7484 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" podUID="8b96dd10-18a0-49f8-b488-63fc2b23da39" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.40:8081/readyz\": dial tcp 10.128.0.40:8081: connect: connection refused" Mar 12 20:53:30.917965 master-0 kubenswrapper[7484]: E0312 20:53:30.917841 7484 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" Mar 12 20:53:33.604443 master-0 kubenswrapper[7484]: I0312 20:53:33.604374 7484 patch_prober.go:28] interesting pod/controller-manager-6dfdd9fb89-wjn86 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body= Mar 12 20:53:33.605375 master-0 kubenswrapper[7484]: I0312 20:53:33.604451 7484 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" podUID="6d28f095-032b-47d4-b808-1502deeffee5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" Mar 12 20:53:33.605375 master-0 kubenswrapper[7484]: I0312 20:53:33.605040 7484 patch_prober.go:28] interesting pod/controller-manager-6dfdd9fb89-wjn86 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body= Mar 12 20:53:33.605375 master-0 kubenswrapper[7484]: I0312 20:53:33.605070 7484 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" podUID="6d28f095-032b-47d4-b808-1502deeffee5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" Mar 12 20:53:35.800548 master-0 kubenswrapper[7484]: E0312 20:53:35.800423 7484 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 12 20:53:35.801555 master-0 kubenswrapper[7484]: E0312 20:53:35.800681 7484 kubelet.go:2526] "Housekeeping 
took longer than expected" err="housekeeping took too long" expected="1s" actual="34.016s"
Mar 12 20:53:35.812102 master-0 kubenswrapper[7484]: I0312 20:53:35.812005 7484 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Mar 12 20:53:36.580731 master-0 kubenswrapper[7484]: I0312 20:53:36.580640 7484 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-zgjqw container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused" start-of-body=
Mar 12 20:53:36.580731 master-0 kubenswrapper[7484]: I0312 20:53:36.580720 7484 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" podUID="cf33c432-db42-4c6d-8ee4-f089e5bf8203" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused"
Mar 12 20:53:36.581366 master-0 kubenswrapper[7484]: I0312 20:53:36.580752 7484 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-zgjqw container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.39:8081/healthz\": dial tcp 10.128.0.39:8081: connect: connection refused" start-of-body=
Mar 12 20:53:36.581366 master-0 kubenswrapper[7484]: I0312 20:53:36.580802 7484 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" podUID="cf33c432-db42-4c6d-8ee4-f089e5bf8203" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.39:8081/healthz\": dial tcp 10.128.0.39:8081: connect: connection refused"
Mar 12 20:53:36.642362 master-0 kubenswrapper[7484]: I0312 20:53:36.642263 7484 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-hdd4n container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.40:8081/healthz\": dial tcp 10.128.0.40:8081: connect: connection refused" start-of-body=
Mar 12 20:53:36.642362 master-0 kubenswrapper[7484]: I0312 20:53:36.642325 7484 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" podUID="8b96dd10-18a0-49f8-b488-63fc2b23da39" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.40:8081/healthz\": dial tcp 10.128.0.40:8081: connect: connection refused"
Mar 12 20:53:36.642718 master-0 kubenswrapper[7484]: I0312 20:53:36.642448 7484 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-hdd4n container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.40:8081/readyz\": dial tcp 10.128.0.40:8081: connect: connection refused" start-of-body=
Mar 12 20:53:36.642718 master-0 kubenswrapper[7484]: I0312 20:53:36.642529 7484 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" podUID="8b96dd10-18a0-49f8-b488-63fc2b23da39" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.40:8081/readyz\": dial tcp 10.128.0.40:8081: connect: connection refused"
Mar 12 20:53:36.686674 master-0 kubenswrapper[7484]: E0312 20:53:36.686217 7484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 12 20:53:38.445176 master-0 kubenswrapper[7484]: E0312 20:53:38.444902 7484 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{control-plane-machine-set-operator-6686554ddc-xzwfp.189c3333b389da59 openshift-machine-api 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-api,Name:control-plane-machine-set-operator-6686554ddc-xzwfp,UID:e03d34d0-f7c1-4dcf-8b84-89ad647cc10f,APIVersion:v1,ResourceVersion:9043,FieldPath:spec.containers{control-plane-machine-set-operator},},Reason:Started,Message:Started container control-plane-machine-set-operator,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:50:50.425637465 +0000 UTC m=+62.910906267,LastTimestamp:2026-03-12 20:50:50.425637465 +0000 UTC m=+62.910906267,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 20:53:40.919191 master-0 kubenswrapper[7484]: E0312 20:53:40.919115 7484 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 20:53:43.604049 master-0 kubenswrapper[7484]: I0312 20:53:43.603966 7484 patch_prober.go:28] interesting pod/controller-manager-6dfdd9fb89-wjn86 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body=
Mar 12 20:53:43.604650 master-0 kubenswrapper[7484]: I0312 20:53:43.604088 7484 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" podUID="6d28f095-032b-47d4-b808-1502deeffee5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused"
Mar 12 20:53:43.604859 master-0 kubenswrapper[7484]: I0312 20:53:43.604802 7484 patch_prober.go:28] interesting pod/controller-manager-6dfdd9fb89-wjn86 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body=
Mar 12 20:53:43.605036 master-0 kubenswrapper[7484]: I0312 20:53:43.604993 7484 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" podUID="6d28f095-032b-47d4-b808-1502deeffee5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused"
Mar 12 20:53:46.060214 master-0 kubenswrapper[7484]: E0312 20:53:46.060107 7484 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-2-master-0: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 12 20:53:46.061412 master-0 kubenswrapper[7484]: E0312 20:53:46.061099 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/367123ca-5a21-415c-8ac2-6d875696536b-kube-api-access podName:367123ca-5a21-415c-8ac2-6d875696536b nodeName:}" failed. No retries permitted until 2026-03-12 20:53:54.061039575 +0000 UTC m=+246.546308417 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/367123ca-5a21-415c-8ac2-6d875696536b-kube-api-access") pod "installer-2-master-0" (UID: "367123ca-5a21-415c-8ac2-6d875696536b") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 12 20:53:46.161490 master-0 kubenswrapper[7484]: E0312 20:53:46.161423 7484 projected.go:194] Error preparing data for projected volume kube-api-access-4rthf for pod openshift-marketplace/redhat-operators-lbgrl: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 12 20:53:46.161777 master-0 kubenswrapper[7484]: E0312 20:53:46.161531 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-kube-api-access-4rthf podName:2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0 nodeName:}" failed. No retries permitted until 2026-03-12 20:53:54.161508437 +0000 UTC m=+246.646777239 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-4rthf" (UniqueName: "kubernetes.io/projected/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-kube-api-access-4rthf") pod "redhat-operators-lbgrl" (UID: "2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 12 20:53:46.580449 master-0 kubenswrapper[7484]: I0312 20:53:46.580319 7484 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-zgjqw container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused" start-of-body=
Mar 12 20:53:46.580888 master-0 kubenswrapper[7484]: I0312 20:53:46.580471 7484 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" podUID="cf33c432-db42-4c6d-8ee4-f089e5bf8203" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused"
Mar 12 20:53:46.643029 master-0 kubenswrapper[7484]: I0312 20:53:46.642883 7484 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-hdd4n container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.40:8081/readyz\": dial tcp 10.128.0.40:8081: connect: connection refused" start-of-body=
Mar 12 20:53:46.643372 master-0 kubenswrapper[7484]: I0312 20:53:46.643026 7484 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" podUID="8b96dd10-18a0-49f8-b488-63fc2b23da39" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.40:8081/readyz\": dial tcp 10.128.0.40:8081: connect: connection refused"
Mar 12 20:53:48.434780 master-0 kubenswrapper[7484]: I0312 20:53:48.434560 7484 status_manager.go:851] "Failed to get status for pod" podUID="2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0" pod="openshift-marketplace/redhat-operators-lbgrl" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods redhat-operators-lbgrl)"
Mar 12 20:53:50.921844 master-0 kubenswrapper[7484]: E0312 20:53:50.921309 7484 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 20:53:53.604862 master-0 kubenswrapper[7484]: I0312 20:53:53.604698 7484 patch_prober.go:28] interesting pod/controller-manager-6dfdd9fb89-wjn86 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body=
Mar 12 20:53:53.606009 master-0 kubenswrapper[7484]: I0312 20:53:53.604858 7484 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" podUID="6d28f095-032b-47d4-b808-1502deeffee5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused"
Mar 12 20:53:53.688321 master-0 kubenswrapper[7484]: E0312 20:53:53.687859 7484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 12 20:53:54.118260 master-0 kubenswrapper[7484]: I0312 20:53:54.118097 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/367123ca-5a21-415c-8ac2-6d875696536b-kube-api-access\") pod \"installer-2-master-0\" (UID: \"367123ca-5a21-415c-8ac2-6d875696536b\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 12 20:53:54.219237 master-0 kubenswrapper[7484]: I0312 20:53:54.219136 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rthf\" (UniqueName: \"kubernetes.io/projected/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-kube-api-access-4rthf\") pod \"redhat-operators-lbgrl\" (UID: \"2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0\") " pod="openshift-marketplace/redhat-operators-lbgrl"
Mar 12 20:53:56.580379 master-0 kubenswrapper[7484]: I0312 20:53:56.580250 7484 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-zgjqw container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused" start-of-body=
Mar 12 20:53:56.581217 master-0 kubenswrapper[7484]: I0312 20:53:56.580256 7484 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-zgjqw container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.39:8081/healthz\": dial tcp 10.128.0.39:8081: connect: connection refused" start-of-body=
Mar 12 20:53:56.581217 master-0 kubenswrapper[7484]: I0312 20:53:56.580554 7484 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" podUID="cf33c432-db42-4c6d-8ee4-f089e5bf8203" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.39:8081/healthz\": dial tcp 10.128.0.39:8081: connect: connection refused"
Mar 12 20:53:56.581217 master-0 kubenswrapper[7484]: I0312 20:53:56.580967 7484 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" podUID="cf33c432-db42-4c6d-8ee4-f089e5bf8203" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused"
Mar 12 20:53:56.643172 master-0 kubenswrapper[7484]: I0312 20:53:56.643081 7484 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-hdd4n container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.40:8081/readyz\": dial tcp 10.128.0.40:8081: connect: connection refused" start-of-body=
Mar 12 20:53:56.643456 master-0 kubenswrapper[7484]: I0312 20:53:56.643190 7484 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" podUID="8b96dd10-18a0-49f8-b488-63fc2b23da39" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.40:8081/readyz\": dial tcp 10.128.0.40:8081: connect: connection refused"
Mar 12 20:53:56.643456 master-0 kubenswrapper[7484]: I0312 20:53:56.643329 7484 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-hdd4n container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.40:8081/healthz\": dial tcp 10.128.0.40:8081: connect: connection refused" start-of-body=
Mar 12 20:53:56.643561 master-0 kubenswrapper[7484]: I0312 20:53:56.643460 7484 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" podUID="8b96dd10-18a0-49f8-b488-63fc2b23da39" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.40:8081/healthz\": dial tcp 10.128.0.40:8081: connect: connection refused"
Mar 12 20:54:00.922322 master-0 kubenswrapper[7484]: E0312 20:54:00.922223 7484 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 20:54:00.922322 master-0 kubenswrapper[7484]: E0312 20:54:00.922291 7484 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Mar 12 20:54:03.604390 master-0 kubenswrapper[7484]: I0312 20:54:03.604309 7484 patch_prober.go:28] interesting pod/controller-manager-6dfdd9fb89-wjn86 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body=
Mar 12 20:54:03.604390 master-0 kubenswrapper[7484]: I0312 20:54:03.604382 7484 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" podUID="6d28f095-032b-47d4-b808-1502deeffee5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused"
Mar 12 20:54:06.580510 master-0 kubenswrapper[7484]: I0312 20:54:06.580419 7484 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-zgjqw container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused" start-of-body=
Mar 12 20:54:06.581063 master-0 kubenswrapper[7484]: I0312 20:54:06.580524 7484 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" podUID="cf33c432-db42-4c6d-8ee4-f089e5bf8203" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused"
Mar 12 20:54:06.642461 master-0 kubenswrapper[7484]: I0312 20:54:06.642389 7484 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-hdd4n container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.40:8081/readyz\": dial tcp 10.128.0.40:8081: connect: connection refused" start-of-body=
Mar 12 20:54:06.642544 master-0 kubenswrapper[7484]: I0312 20:54:06.642502 7484 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" podUID="8b96dd10-18a0-49f8-b488-63fc2b23da39" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.40:8081/readyz\": dial tcp 10.128.0.40:8081: connect: connection refused"
Mar 12 20:54:09.826520 master-0 kubenswrapper[7484]: E0312 20:54:09.826428 7484 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0"
Mar 12 20:54:09.827594 master-0 kubenswrapper[7484]: E0312 20:54:09.826697 7484 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.026s"
Mar 12 20:54:09.827594 master-0 kubenswrapper[7484]: I0312 20:54:09.826746 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" event={"ID":"5471994f-769e-4124-b7d0-01f5358fc18f","Type":"ContainerDied","Data":"7ca674391c532a062d85de3aad380be9933e23e79819377498f98ef87ee56f1c"}
Mar 12 20:54:09.827594 master-0 kubenswrapper[7484]: I0312 20:54:09.827352 7484 scope.go:117] "RemoveContainer" containerID="7ca674391c532a062d85de3aad380be9933e23e79819377498f98ef87ee56f1c"
Mar 12 20:54:09.847959 master-0 kubenswrapper[7484]: I0312 20:54:09.847878 7484 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Mar 12 20:54:10.689470 master-0 kubenswrapper[7484]: E0312 20:54:10.689315 7484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 12 20:54:12.448787 master-0 kubenswrapper[7484]: E0312 20:54:12.448518 7484 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{openshift-controller-manager-operator-8565d84698-vp2hs.189c3335f502426d openshift-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager-operator,Name:openshift-controller-manager-operator-8565d84698-vp2hs,UID:7623a5c6-47a9-4b75-bda8-c0a2d7c67272,APIVersion:v1,ResourceVersion:3793,FieldPath:spec.containers{openshift-controller-manager-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:51:00.113982061 +0000 UTC m=+72.599250903,LastTimestamp:2026-03-12 20:51:00.113982061 +0000 UTC m=+72.599250903,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 20:54:13.605218 master-0 kubenswrapper[7484]: I0312 20:54:13.605056 7484 patch_prober.go:28] interesting pod/controller-manager-6dfdd9fb89-wjn86 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body=
Mar 12 20:54:13.605218 master-0 kubenswrapper[7484]: I0312 20:54:13.605169 7484 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" podUID="6d28f095-032b-47d4-b808-1502deeffee5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused"
Mar 12 20:54:16.579822 master-0 kubenswrapper[7484]: I0312 20:54:16.579757 7484 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-zgjqw container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused" start-of-body=
Mar 12 20:54:16.580452 master-0 kubenswrapper[7484]: I0312 20:54:16.579869 7484 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" podUID="cf33c432-db42-4c6d-8ee4-f089e5bf8203" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused"
Mar 12 20:54:16.642049 master-0 kubenswrapper[7484]: I0312 20:54:16.641987 7484 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-hdd4n container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.40:8081/readyz\": dial tcp 10.128.0.40:8081: connect: connection refused" start-of-body=
Mar 12 20:54:16.642446 master-0 kubenswrapper[7484]: I0312 20:54:16.642405 7484 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" podUID="8b96dd10-18a0-49f8-b488-63fc2b23da39" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.40:8081/readyz\": dial tcp 10.128.0.40:8081: connect: connection refused"
Mar 12 20:54:21.259492 master-0 kubenswrapper[7484]: E0312 20:54:21.259211 7484 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T20:54:11Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T20:54:11Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T20:54:11Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T20:54:11Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1fce8b5c6b0206ecb4ddc7de47062bed853b88d4e34415e9e5a2a6bc99cf6aad\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:8bd0ffcb6caac4a5d03346b5f7cdfaf2f6f9f9d0a30deff8f216e6cb63b0ee75\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1282704097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:08bf2da4079dafb9d9fc0718c48ed509adab6b030e9c85e3bbd21d2702ab894e\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:cf0470f46da209c10a63329feddb7afca3d04a9084fbf1a0755a3302e5c102ca\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221753567},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e9ee63a30a9b95b5801afa36e09fc583ec2cda3c5cb3c8676e478fea016abfa1\\\"],\\\"sizeBytes\\\":470680779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda\\\"],\\\"sizeBytes\\\":468263999},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\\\"],\\\"sizeBytes\\\":465086330},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1\\\"],\\\"sizeBytes\\\":463700811},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b714a7ada1e295b599b432f32e1fd5b74c8cdbe6fe51e95306322b25cb873914\\\"],\\\"sizeBytes\\\":458126424},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5230462066ab36e3025524e948dd33fa6f51ee29a4f91fa469bfc268568b5fd9\\\"],\\\"sizeBytes\\\":456575686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89cb093f319eaa04acfe9431b8697bffbc71ab670546f7ed257daa332165c626\\\"],\\\"sizeBytes\\\":448828105},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783\\\"],\\\"sizeBytes\\\":448041621},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf9670d0f269f8d49fd9ef4981999be195f6624a4146aa93d9201eb8acc81053\\\"],\\\"sizeBytes\\\":443271011}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 20:54:23.604343 master-0 kubenswrapper[7484]: I0312 20:54:23.604235 7484 patch_prober.go:28] interesting pod/controller-manager-6dfdd9fb89-wjn86 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body=
Mar 12 20:54:23.605437 master-0 kubenswrapper[7484]: I0312 20:54:23.604349 7484 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" podUID="6d28f095-032b-47d4-b808-1502deeffee5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused"
Mar 12 20:54:26.580528 master-0 kubenswrapper[7484]: I0312 20:54:26.580425 7484 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-zgjqw container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused" start-of-body=
Mar 12 20:54:26.580528 master-0 kubenswrapper[7484]: I0312 20:54:26.580510 7484 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" podUID="cf33c432-db42-4c6d-8ee4-f089e5bf8203" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused"
Mar 12 20:54:26.643214 master-0 kubenswrapper[7484]: I0312 20:54:26.643110 7484 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-hdd4n container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.40:8081/readyz\": dial tcp 10.128.0.40:8081: connect: connection refused" start-of-body=
Mar 12 20:54:26.643510 master-0 kubenswrapper[7484]: I0312 20:54:26.643232 7484 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" podUID="8b96dd10-18a0-49f8-b488-63fc2b23da39" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.40:8081/readyz\": dial tcp 10.128.0.40:8081: connect: connection refused"
Mar 12 20:54:27.691179 master-0 kubenswrapper[7484]: E0312 20:54:27.691033 7484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 12 20:54:28.121539 master-0 kubenswrapper[7484]: E0312 20:54:28.121480 7484 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-2-master-0: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 12 20:54:28.121539 master-0 kubenswrapper[7484]: E0312 20:54:28.121566 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/367123ca-5a21-415c-8ac2-6d875696536b-kube-api-access podName:367123ca-5a21-415c-8ac2-6d875696536b nodeName:}" failed. No retries permitted until 2026-03-12 20:54:44.121547458 +0000 UTC m=+296.606816260 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/367123ca-5a21-415c-8ac2-6d875696536b-kube-api-access") pod "installer-2-master-0" (UID: "367123ca-5a21-415c-8ac2-6d875696536b") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 12 20:54:28.223224 master-0 kubenswrapper[7484]: E0312 20:54:28.223138 7484 projected.go:194] Error preparing data for projected volume kube-api-access-4rthf for pod openshift-marketplace/redhat-operators-lbgrl: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 12 20:54:28.223683 master-0 kubenswrapper[7484]: E0312 20:54:28.223287 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-kube-api-access-4rthf podName:2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0 nodeName:}" failed. No retries permitted until 2026-03-12 20:54:44.223255429 +0000 UTC m=+296.708524271 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "kube-api-access-4rthf" (UniqueName: "kubernetes.io/projected/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-kube-api-access-4rthf") pod "redhat-operators-lbgrl" (UID: "2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 12 20:54:31.260873 master-0 kubenswrapper[7484]: E0312 20:54:31.260749 7484 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 20:54:33.604684 master-0 kubenswrapper[7484]: I0312 20:54:33.604578 7484 patch_prober.go:28] interesting pod/controller-manager-6dfdd9fb89-wjn86 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body= Mar 12 20:54:33.604684 master-0 kubenswrapper[7484]: I0312 20:54:33.604641 7484 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" podUID="6d28f095-032b-47d4-b808-1502deeffee5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" Mar 12 20:54:36.580089 master-0 kubenswrapper[7484]: I0312 20:54:36.579942 7484 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-zgjqw container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused" start-of-body= Mar 12 20:54:36.580089 master-0 kubenswrapper[7484]: I0312 20:54:36.580053 7484 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" podUID="cf33c432-db42-4c6d-8ee4-f089e5bf8203" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused" Mar 12 20:54:36.642899 master-0 kubenswrapper[7484]: I0312 20:54:36.642761 7484 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-hdd4n container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.40:8081/readyz\": dial tcp 10.128.0.40:8081: connect: connection refused" start-of-body= Mar 12 20:54:36.643044 master-0 kubenswrapper[7484]: I0312 20:54:36.642889 7484 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" podUID="8b96dd10-18a0-49f8-b488-63fc2b23da39" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.40:8081/readyz\": dial tcp 10.128.0.40:8081: connect: connection refused" Mar 12 20:54:41.261452 master-0 kubenswrapper[7484]: E0312 20:54:41.261145 7484 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 20:54:43.609463 master-0 kubenswrapper[7484]: I0312 20:54:43.609337 7484 patch_prober.go:28] interesting pod/controller-manager-6dfdd9fb89-wjn86 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body= Mar 12 20:54:43.609463 master-0 kubenswrapper[7484]: I0312 20:54:43.609408 7484 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" 
podUID="6d28f095-032b-47d4-b808-1502deeffee5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" Mar 12 20:54:43.851853 master-0 kubenswrapper[7484]: E0312 20:54:43.851742 7484 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 12 20:54:43.852224 master-0 kubenswrapper[7484]: E0312 20:54:43.852109 7484 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.025s" Mar 12 20:54:43.854042 master-0 kubenswrapper[7484]: I0312 20:54:43.853989 7484 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" Mar 12 20:54:43.854042 master-0 kubenswrapper[7484]: I0312 20:54:43.854029 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-269gt" event={"ID":"4a67ecf3-823d-4948-a5cb-8bd1eb9f259c","Type":"ContainerDied","Data":"e0a2c06e46bef70f1a83d73f16311ff0724aeeddd6bc3dab0e6a4952ddc0acb3"} Mar 12 20:54:43.854790 master-0 kubenswrapper[7484]: I0312 20:54:43.854716 7484 scope.go:117] "RemoveContainer" containerID="90f6df2cd5378a3ebab865fb719c69e38e48496ca3cd635c80da9e8ec49ce434" Mar 12 20:54:43.855401 master-0 kubenswrapper[7484]: I0312 20:54:43.855334 7484 scope.go:117] "RemoveContainer" containerID="1b0c3f4b3caa0d5feb808a3612fec0d5e14e38edd6b5d67620e75cb7f7990bd6" Mar 12 20:54:43.857021 master-0 kubenswrapper[7484]: I0312 20:54:43.856616 7484 scope.go:117] "RemoveContainer" containerID="47c0e0d21aabebc91fcbee939e9b068c6a5287ab73aa0a38e830a0c4a7aa5051" Mar 12 20:54:43.857021 master-0 kubenswrapper[7484]: I0312 20:54:43.856791 7484 scope.go:117] "RemoveContainer" 
containerID="28c691afcb8a45cb348e1216142781244b93a45eaf7cbab2716a18bf342b0dc8" Mar 12 20:54:43.857021 master-0 kubenswrapper[7484]: I0312 20:54:43.856989 7484 scope.go:117] "RemoveContainer" containerID="65753e4931b3081b10e537c0401b4155fdbc512202e120631ec6b784c53ee11c" Mar 12 20:54:43.857796 master-0 kubenswrapper[7484]: I0312 20:54:43.857732 7484 scope.go:117] "RemoveContainer" containerID="e0a2c06e46bef70f1a83d73f16311ff0724aeeddd6bc3dab0e6a4952ddc0acb3" Mar 12 20:54:43.859874 master-0 kubenswrapper[7484]: I0312 20:54:43.859555 7484 scope.go:117] "RemoveContainer" containerID="5932e7f75755d53b1d311f0b9e66cf21d66d861e9615083a39ac924565528bfd" Mar 12 20:54:43.859874 master-0 kubenswrapper[7484]: I0312 20:54:43.859835 7484 scope.go:117] "RemoveContainer" containerID="1726ad62deed5adf886b68145fe6223edb7fe9f83fb593561c0b8bdb5aef13cf" Mar 12 20:54:43.860573 master-0 kubenswrapper[7484]: I0312 20:54:43.860150 7484 scope.go:117] "RemoveContainer" containerID="e29fe78e5f8c5908626647267abeb52f63244162e122261e67a929d3a95210d9" Mar 12 20:54:43.861016 master-0 kubenswrapper[7484]: I0312 20:54:43.860594 7484 scope.go:117] "RemoveContainer" containerID="ab706de1955bf19700e84d8f799385030b60c4a92c4860f12c06db2b3816fd99" Mar 12 20:54:43.861245 master-0 kubenswrapper[7484]: I0312 20:54:43.861047 7484 scope.go:117] "RemoveContainer" containerID="36186e847a1c7ad015db1d456eab6f7fe52723f5ba9629a902598f1f75fcfbe7" Mar 12 20:54:43.861245 master-0 kubenswrapper[7484]: I0312 20:54:43.861156 7484 scope.go:117] "RemoveContainer" containerID="ded70f8c305f91b4cd97482dbdf153ec9254b0cfdc370f5b14f5e7f5ee654d15" Mar 12 20:54:43.861720 master-0 kubenswrapper[7484]: I0312 20:54:43.861644 7484 scope.go:117] "RemoveContainer" containerID="ae373579849ec0d4a33d66c2a3f6f43fccdff39968b29197dcdc4792d7cd63f3" Mar 12 20:54:43.862023 master-0 kubenswrapper[7484]: I0312 20:54:43.861971 7484 scope.go:117] "RemoveContainer" containerID="60173c0f9984162f24ad65c25f3ae119353e5fb646ea28da5079828f5c237197" Mar 12 
20:54:43.862269 master-0 kubenswrapper[7484]: I0312 20:54:43.862199 7484 scope.go:117] "RemoveContainer" containerID="d9fa8a123cfb8c14404c75a08b2365da17bc3d4b0cf2e193ac612689b8a4fc37" Mar 12 20:54:43.863466 master-0 kubenswrapper[7484]: I0312 20:54:43.863279 7484 scope.go:117] "RemoveContainer" containerID="e6ccd74a2af6fdce722a0e3dca22b3f124868515fcf641e0b36f66e322f8d4c3" Mar 12 20:54:43.864152 master-0 kubenswrapper[7484]: I0312 20:54:43.864103 7484 scope.go:117] "RemoveContainer" containerID="ac220be40864e46bcbfeebc937d699a58348f8eb40ed949885e1f1fa2e71ed44" Mar 12 20:54:43.874181 master-0 kubenswrapper[7484]: I0312 20:54:43.867764 7484 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 12 20:54:44.142699 master-0 kubenswrapper[7484]: I0312 20:54:44.142632 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/367123ca-5a21-415c-8ac2-6d875696536b-kube-api-access\") pod \"installer-2-master-0\" (UID: \"367123ca-5a21-415c-8ac2-6d875696536b\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 20:54:44.243586 master-0 kubenswrapper[7484]: I0312 20:54:44.243549 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rthf\" (UniqueName: \"kubernetes.io/projected/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-kube-api-access-4rthf\") pod \"redhat-operators-lbgrl\" (UID: \"2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0\") " pod="openshift-marketplace/redhat-operators-lbgrl" Mar 12 20:54:44.477399 master-0 kubenswrapper[7484]: I0312 20:54:44.477282 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_954fe7f9-e138-49ab-ab8e-504b75914100/installer/0.log" Mar 12 20:54:44.477399 master-0 kubenswrapper[7484]: I0312 20:54:44.477351 7484 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 12 20:54:44.546984 master-0 kubenswrapper[7484]: I0312 20:54:44.545826 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/954fe7f9-e138-49ab-ab8e-504b75914100-var-lock\") pod \"954fe7f9-e138-49ab-ab8e-504b75914100\" (UID: \"954fe7f9-e138-49ab-ab8e-504b75914100\") " Mar 12 20:54:44.546984 master-0 kubenswrapper[7484]: I0312 20:54:44.545918 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/954fe7f9-e138-49ab-ab8e-504b75914100-kubelet-dir\") pod \"954fe7f9-e138-49ab-ab8e-504b75914100\" (UID: \"954fe7f9-e138-49ab-ab8e-504b75914100\") " Mar 12 20:54:44.546984 master-0 kubenswrapper[7484]: I0312 20:54:44.545979 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/954fe7f9-e138-49ab-ab8e-504b75914100-kube-api-access\") pod \"954fe7f9-e138-49ab-ab8e-504b75914100\" (UID: \"954fe7f9-e138-49ab-ab8e-504b75914100\") " Mar 12 20:54:44.546984 master-0 kubenswrapper[7484]: I0312 20:54:44.546902 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/954fe7f9-e138-49ab-ab8e-504b75914100-var-lock" (OuterVolumeSpecName: "var-lock") pod "954fe7f9-e138-49ab-ab8e-504b75914100" (UID: "954fe7f9-e138-49ab-ab8e-504b75914100"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 20:54:44.546984 master-0 kubenswrapper[7484]: I0312 20:54:44.546934 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/954fe7f9-e138-49ab-ab8e-504b75914100-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "954fe7f9-e138-49ab-ab8e-504b75914100" (UID: "954fe7f9-e138-49ab-ab8e-504b75914100"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 20:54:44.555901 master-0 kubenswrapper[7484]: I0312 20:54:44.555797 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/954fe7f9-e138-49ab-ab8e-504b75914100-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "954fe7f9-e138-49ab-ab8e-504b75914100" (UID: "954fe7f9-e138-49ab-ab8e-504b75914100"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 20:54:44.647501 master-0 kubenswrapper[7484]: I0312 20:54:44.647438 7484 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/954fe7f9-e138-49ab-ab8e-504b75914100-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 20:54:44.647501 master-0 kubenswrapper[7484]: I0312 20:54:44.647492 7484 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/954fe7f9-e138-49ab-ab8e-504b75914100-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 20:54:44.647501 master-0 kubenswrapper[7484]: I0312 20:54:44.647503 7484 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/954fe7f9-e138-49ab-ab8e-504b75914100-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 20:54:44.692456 master-0 kubenswrapper[7484]: E0312 20:54:44.692376 7484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 20:54:45.065802 master-0 kubenswrapper[7484]: I0312 20:54:45.065711 7484 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-7f65c457f5-qfbrj_07542516-49c8-4e20-9b97-798fbff850a5/kube-storage-version-migrator-operator/1.log" Mar 12 20:54:45.072684 master-0 kubenswrapper[7484]: I0312 20:54:45.072629 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_954fe7f9-e138-49ab-ab8e-504b75914100/installer/0.log" Mar 12 20:54:45.072914 master-0 kubenswrapper[7484]: I0312 20:54:45.072849 7484 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 12 20:54:45.076580 master-0 kubenswrapper[7484]: I0312 20:54:45.076516 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-799b6db4d7-jwthf_15ebfbd8-0782-431a-88a3-83af328498d2/openshift-apiserver-operator/1.log" Mar 12 20:54:45.086102 master-0 kubenswrapper[7484]: I0312 20:54:45.086034 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-vp2hs_7623a5c6-47a9-4b75-bda8-c0a2d7c67272/openshift-controller-manager-operator/1.log" Mar 12 20:54:45.087868 master-0 kubenswrapper[7484]: I0312 20:54:45.087758 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-vp2hs_7623a5c6-47a9-4b75-bda8-c0a2d7c67272/openshift-controller-manager-operator/0.log" Mar 12 20:54:45.100055 master-0 kubenswrapper[7484]: I0312 20:54:45.099989 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-9j7rx_a3bebf49-1d92-4353-b84c-91ed86b7bb94/authentication-operator/1.log" Mar 12 20:54:45.103903 master-0 kubenswrapper[7484]: I0312 20:54:45.103852 7484 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-8fk8w_d4a162d4-8086-4bcf-854d-7e6cd37fd4c7/snapshot-controller/0.log" Mar 12 20:54:45.110377 master-0 kubenswrapper[7484]: I0312 20:54:45.110330 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-69b6fc6b88-f62j6_a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d/service-ca-operator/1.log" Mar 12 20:54:45.114799 master-0 kubenswrapper[7484]: I0312 20:54:45.114761 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-qpf68_2b71f537-1cc2-4645-8e50-23941635457c/ingress-operator/0.log" Mar 12 20:54:45.118141 master-0 kubenswrapper[7484]: I0312 20:54:45.118102 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-zgjqw_cf33c432-db42-4c6d-8ee4-f089e5bf8203/manager/0.log" Mar 12 20:54:45.125240 master-0 kubenswrapper[7484]: I0312 20:54:45.125201 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-62t2f_fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6/network-operator/0.log" Mar 12 20:54:45.132654 master-0 kubenswrapper[7484]: I0312 20:54:45.132596 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-48hk7_426efd5c-69e1-43e5-835a-6e1c4ef85720/approver/0.log" Mar 12 20:54:45.143144 master-0 kubenswrapper[7484]: I0312 20:54:45.143085 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-hdd4n_8b96dd10-18a0-49f8-b488-63fc2b23da39/manager/0.log" Mar 12 20:54:46.453266 master-0 kubenswrapper[7484]: E0312 20:54:46.453031 7484 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" 
event="&Event{ObjectMeta:{community-operators-jblsg.189c333607ce17bd openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-jblsg,UID:567a9a33-1a82-4c48-b541-7e0eaae11f57,APIVersion:v1,ResourceVersion:8920,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/community-operator-index:v4.18\" in 14.413s (14.413s including waiting). Image size: 1221753567 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:51:00.429330365 +0000 UTC m=+72.914599197,LastTimestamp:2026-03-12 20:51:00.429330365 +0000 UTC m=+72.914599197,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 20:54:47.807205 master-0 kubenswrapper[7484]: I0312 20:54:47.807128 7484 scope.go:117] "RemoveContainer" containerID="53c0edcd8673398e4384f928bbaa2737b8e228fa73c0aad115798fc1550e14b6" Mar 12 20:54:48.437326 master-0 kubenswrapper[7484]: I0312 20:54:48.437122 7484 status_manager.go:851] "Failed to get status for pod" podUID="a35e2486-4d5e-43e5-89c0-c562002717bb" pod="openshift-kube-scheduler/installer-1-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-1-master-0)" Mar 12 20:54:51.262500 master-0 kubenswrapper[7484]: E0312 20:54:51.262426 7484 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 20:54:54.138475 master-0 kubenswrapper[7484]: E0312 20:54:54.138320 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access], unattached 
volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-kube-controller-manager/installer-2-master-0" podUID="367123ca-5a21-415c-8ac2-6d875696536b" Mar 12 20:54:54.223592 master-0 kubenswrapper[7484]: I0312 20:54:54.223498 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 20:54:55.143592 master-0 kubenswrapper[7484]: E0312 20:54:55.143448 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-4rthf], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-marketplace/redhat-operators-lbgrl" podUID="2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0" Mar 12 20:54:55.229269 master-0 kubenswrapper[7484]: I0312 20:54:55.229202 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lbgrl" Mar 12 20:54:56.867358 master-0 kubenswrapper[7484]: E0312 20:54:56.867270 7484 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 12 20:55:01.264638 master-0 kubenswrapper[7484]: E0312 20:55:01.264330 7484 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 20:55:01.264638 master-0 kubenswrapper[7484]: E0312 20:55:01.264615 7484 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 20:55:01.693622 master-0 kubenswrapper[7484]: E0312 20:55:01.693504 7484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 20:55:14.396147 master-0 kubenswrapper[7484]: I0312 20:55:14.396086 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-8fk8w_d4a162d4-8086-4bcf-854d-7e6cd37fd4c7/snapshot-controller/1.log" Mar 12 20:55:14.396899 master-0 kubenswrapper[7484]: I0312 20:55:14.396855 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-8fk8w_d4a162d4-8086-4bcf-854d-7e6cd37fd4c7/snapshot-controller/0.log" Mar 12 20:55:14.396964 master-0 kubenswrapper[7484]: I0312 20:55:14.396938 7484 generic.go:334] "Generic (PLEG): container finished" podID="d4a162d4-8086-4bcf-854d-7e6cd37fd4c7" containerID="0bd6a0b7ed84e5c57f80585b12035a2addd846361d63e97d5c4b6e34bb41dd20" exitCode=1 Mar 12 20:55:15.459674 master-0 kubenswrapper[7484]: E0312 20:55:15.459599 7484 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="31.605s" Mar 12 20:55:15.460837 master-0 kubenswrapper[7484]: I0312 20:55:15.460771 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 20:55:15.461069 master-0 kubenswrapper[7484]: I0312 20:55:15.461039 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:55:15.461259 master-0 kubenswrapper[7484]: I0312 20:55:15.461230 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 12 20:55:15.461438 master-0 kubenswrapper[7484]: I0312 20:55:15.461397 7484 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" 
mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="68051e57-967f-4f4a-8a22-e87d07cbc7ba" Mar 12 20:55:15.474754 master-0 kubenswrapper[7484]: I0312 20:55:15.474675 7484 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 12 20:55:15.480360 master-0 kubenswrapper[7484]: I0312 20:55:15.480323 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" Mar 12 20:55:15.480604 master-0 kubenswrapper[7484]: I0312 20:55:15.480580 7484 status_manager.go:379] "Container startup changed for unknown container" pod="kube-system/bootstrap-kube-controller-manager-master-0" containerID="cri-o://1b0c3f4b3caa0d5feb808a3612fec0d5e14e38edd6b5d67620e75cb7f7990bd6" Mar 12 20:55:15.480719 master-0 kubenswrapper[7484]: I0312 20:55:15.480703 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:55:15.480868 master-0 kubenswrapper[7484]: I0312 20:55:15.480835 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-f2kg4" event={"ID":"96bd86df-2101-47f5-844b-1332261c66f1","Type":"ContainerDied","Data":"e6ccd74a2af6fdce722a0e3dca22b3f124868515fcf641e0b36f66e322f8d4c3"} Mar 12 20:55:15.481030 master-0 kubenswrapper[7484]: I0312 20:55:15.480977 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-48hk7" event={"ID":"426efd5c-69e1-43e5-835a-6e1c4ef85720","Type":"ContainerDied","Data":"28c691afcb8a45cb348e1216142781244b93a45eaf7cbab2716a18bf342b0dc8"} Mar 12 20:55:15.481167 master-0 kubenswrapper[7484]: I0312 20:55:15.481146 7484 status_manager.go:317] "Container readiness changed for unknown container" pod="kube-system/bootstrap-kube-controller-manager-master-0" 
containerID="cri-o://1b0c3f4b3caa0d5feb808a3612fec0d5e14e38edd6b5d67620e75cb7f7990bd6"
Mar 12 20:55:15.481280 master-0 kubenswrapper[7484]: I0312 20:55:15.481263 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 20:55:15.481380 master-0 kubenswrapper[7484]: I0312 20:55:15.481364 7484 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86"
Mar 12 20:55:15.481499 master-0 kubenswrapper[7484]: I0312 20:55:15.481480 7484 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"]
Mar 12 20:55:15.481608 master-0 kubenswrapper[7484]: I0312 20:55:15.481586 7484 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="68051e57-967f-4f4a-8a22-e87d07cbc7ba"
Mar 12 20:55:15.482018 master-0 kubenswrapper[7484]: I0312 20:55:15.481997 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw"
Mar 12 20:55:15.482154 master-0 kubenswrapper[7484]: I0312 20:55:15.482129 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-56nzk" event={"ID":"784599a3-a2ac-46ac-a4b7-9439704646cc","Type":"ContainerDied","Data":"ab706de1955bf19700e84d8f799385030b60c4a92c4860f12c06db2b3816fd99"}
Mar 12 20:55:15.482534 master-0 kubenswrapper[7484]: I0312 20:55:15.482477 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw"
Mar 12 20:55:15.482614 master-0 kubenswrapper[7484]: I0312 20:55:15.482549 7484 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw"
Mar 12 20:55:15.483086 master-0 kubenswrapper[7484]: I0312 20:55:15.483059 7484 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" containerID="cri-o://90f6df2cd5378a3ebab865fb719c69e38e48496ca3cd635c80da9e8ec49ce434"
Mar 12 20:55:15.483204 master-0 kubenswrapper[7484]: I0312 20:55:15.483187 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86"
Mar 12 20:55:15.483293 master-0 kubenswrapper[7484]: I0312 20:55:15.483279 7484 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n"
Mar 12 20:55:15.483395 master-0 kubenswrapper[7484]: I0312 20:55:15.483377 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw"
Mar 12 20:55:15.483550 master-0 kubenswrapper[7484]: I0312 20:55:15.483531 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw"
Mar 12 20:55:15.483659 master-0 kubenswrapper[7484]: I0312 20:55:15.483636 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" event={"ID":"a3bebf49-1d92-4353-b84c-91ed86b7bb94","Type":"ContainerDied","Data":"4f12cf8d8d8d0087f11b9de5f5568886404da4081c2e2727f07a95ca8191d1c6"}
Mar 12 20:55:15.483774 master-0 kubenswrapper[7484]: I0312 20:55:15.483758 7484 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx"
Mar 12 20:55:15.483916 master-0 kubenswrapper[7484]: I0312 20:55:15.483876 7484 scope.go:117] "RemoveContainer" containerID="4f12cf8d8d8d0087f11b9de5f5568886404da4081c2e2727f07a95ca8191d1c6"
Mar 12 20:55:15.484026 master-0 kubenswrapper[7484]: I0312 20:55:15.483885 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-62t2f" event={"ID":"fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6","Type":"ContainerDied","Data":"d9fa8a123cfb8c14404c75a08b2365da17bc3d4b0cf2e193ac612689b8a4fc37"}
Mar 12 20:55:15.484081 master-0 kubenswrapper[7484]: I0312 20:55:15.484032 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-f62j6" event={"ID":"a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d","Type":"ContainerStarted","Data":"47c0e0d21aabebc91fcbee939e9b068c6a5287ab73aa0a38e830a0c4a7aa5051"}
Mar 12 20:55:15.484081 master-0 kubenswrapper[7484]: I0312 20:55:15.484056 7484 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 20:55:15.484081 master-0 kubenswrapper[7484]: I0312 20:55:15.484072 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" event={"ID":"a3bebf49-1d92-4353-b84c-91ed86b7bb94","Type":"ContainerStarted","Data":"65753e4931b3081b10e537c0401b4155fdbc512202e120631ec6b784c53ee11c"}
Mar 12 20:55:15.484233 master-0 kubenswrapper[7484]: I0312 20:55:15.484086 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-jwthf" event={"ID":"15ebfbd8-0782-431a-88a3-83af328498d2","Type":"ContainerStarted","Data":"ac220be40864e46bcbfeebc937d699a58348f8eb40ed949885e1f1fa2e71ed44"}
Mar 12 20:55:15.484233 master-0 kubenswrapper[7484]: I0312 20:55:15.484102 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"02ac5a41fa86c8da3e61fb0bb8e9e0588bab913b801b90fc424e7e3abaf59e02"}
Mar 12 20:55:15.484233 master-0 kubenswrapper[7484]: I0312 20:55:15.484120 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"1b0c3f4b3caa0d5feb808a3612fec0d5e14e38edd6b5d67620e75cb7f7990bd6"}
Mar 12 20:55:15.484233 master-0 kubenswrapper[7484]: I0312 20:55:15.484133 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-qfbrj" event={"ID":"07542516-49c8-4e20-9b97-798fbff850a5","Type":"ContainerStarted","Data":"ded70f8c305f91b4cd97482dbdf153ec9254b0cfdc370f5b14f5e7f5ee654d15"}
Mar 12 20:55:15.484233 master-0 kubenswrapper[7484]: I0312 20:55:15.484149 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"954fe7f9-e138-49ab-ab8e-504b75914100","Type":"ContainerDied","Data":"41e5296df7c3d4b1110f31058e02c84e5cd9852b203025b79d16be32d4b3de88"}
Mar 12 20:55:15.484233 master-0 kubenswrapper[7484]: I0312 20:55:15.484164 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerDied","Data":"d87061e77c3511fa3d10d439abd7fc19b87e09c759be9ed2d0d6d0851d1c2c5d"}
Mar 12 20:55:15.484233 master-0 kubenswrapper[7484]: I0312 20:55:15.484180 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" event={"ID":"2b71f537-1cc2-4645-8e50-23941635457c","Type":"ContainerDied","Data":"ae373579849ec0d4a33d66c2a3f6f43fccdff39968b29197dcdc4792d7cd63f3"}
Mar 12 20:55:15.484233 master-0 kubenswrapper[7484]: I0312 20:55:15.484200 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-vp2hs" event={"ID":"7623a5c6-47a9-4b75-bda8-c0a2d7c67272","Type":"ContainerDied","Data":"1726ad62deed5adf886b68145fe6223edb7fe9f83fb593561c0b8bdb5aef13cf"}
Mar 12 20:55:15.484233 master-0 kubenswrapper[7484]: I0312 20:55:15.484222 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w" event={"ID":"d4a162d4-8086-4bcf-854d-7e6cd37fd4c7","Type":"ContainerDied","Data":"e29fe78e5f8c5908626647267abeb52f63244162e122261e67a929d3a95210d9"}
Mar 12 20:55:15.484706 master-0 kubenswrapper[7484]: I0312 20:55:15.484242 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" event={"ID":"e624e623-6d59-444d-b548-165fa5fd2581","Type":"ContainerDied","Data":"2d7932f9200cfcc46a818b87f2e758dc323d7be1734436d6a1a8927b3aea1adf"}
Mar 12 20:55:15.484706 master-0 kubenswrapper[7484]: I0312 20:55:15.484485 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"1b0c3f4b3caa0d5feb808a3612fec0d5e14e38edd6b5d67620e75cb7f7990bd6"}
Mar 12 20:55:15.484706 master-0 kubenswrapper[7484]: I0312 20:55:15.484574 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" event={"ID":"8b96dd10-18a0-49f8-b488-63fc2b23da39","Type":"ContainerDied","Data":"60173c0f9984162f24ad65c25f3ae119353e5fb646ea28da5079828f5c237197"}
Mar 12 20:55:15.484706 master-0 kubenswrapper[7484]: I0312 20:55:15.484592 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" event={"ID":"cf33c432-db42-4c6d-8ee4-f089e5bf8203","Type":"ContainerDied","Data":"5932e7f75755d53b1d311f0b9e66cf21d66d861e9615083a39ac924565528bfd"}
Mar 12 20:55:15.484706 master-0 kubenswrapper[7484]: I0312 20:55:15.484609 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" event={"ID":"e624e623-6d59-444d-b548-165fa5fd2581","Type":"ContainerStarted","Data":"39d3c428744e31947d0aba2cc71c1c8335e2ced3049d8e6b24468cee1c398ffb"}
Mar 12 20:55:15.484706 master-0 kubenswrapper[7484]: I0312 20:55:15.484623 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t" event={"ID":"d862a346-ec4d-46f6-a3e2-ea8759ea0111","Type":"ContainerDied","Data":"36186e847a1c7ad015db1d456eab6f7fe52723f5ba9629a902598f1f75fcfbe7"}
Mar 12 20:55:15.484706 master-0 kubenswrapper[7484]: I0312 20:55:15.484639 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" event={"ID":"6d28f095-032b-47d4-b808-1502deeffee5","Type":"ContainerDied","Data":"90f6df2cd5378a3ebab865fb719c69e38e48496ca3cd635c80da9e8ec49ce434"}
Mar 12 20:55:15.484706 master-0 kubenswrapper[7484]: I0312 20:55:15.484653 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-qfbrj" event={"ID":"07542516-49c8-4e20-9b97-798fbff850a5","Type":"ContainerDied","Data":"ded70f8c305f91b4cd97482dbdf153ec9254b0cfdc370f5b14f5e7f5ee654d15"}
Mar 12 20:55:15.484706 master-0 kubenswrapper[7484]: I0312 20:55:15.484669 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-f62j6" event={"ID":"a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d","Type":"ContainerDied","Data":"47c0e0d21aabebc91fcbee939e9b068c6a5287ab73aa0a38e830a0c4a7aa5051"}
Mar 12 20:55:15.484706 master-0 kubenswrapper[7484]: I0312 20:55:15.484682 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" event={"ID":"a3bebf49-1d92-4353-b84c-91ed86b7bb94","Type":"ContainerDied","Data":"65753e4931b3081b10e537c0401b4155fdbc512202e120631ec6b784c53ee11c"}
Mar 12 20:55:15.484706 master-0 kubenswrapper[7484]: I0312 20:55:15.484698 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-jwthf" event={"ID":"15ebfbd8-0782-431a-88a3-83af328498d2","Type":"ContainerDied","Data":"ac220be40864e46bcbfeebc937d699a58348f8eb40ed949885e1f1fa2e71ed44"}
Mar 12 20:55:15.484706 master-0 kubenswrapper[7484]: I0312 20:55:15.484715 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" event={"ID":"5471994f-769e-4124-b7d0-01f5358fc18f","Type":"ContainerStarted","Data":"a84299e61aaa1595e3e07b0769d34f43309447a83e058608971fd9878868932d"}
Mar 12 20:55:15.485296 master-0 kubenswrapper[7484]: I0312 20:55:15.484734 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-qfbrj" event={"ID":"07542516-49c8-4e20-9b97-798fbff850a5","Type":"ContainerStarted","Data":"dd504a614de5e550f9072528b6c01840da9215811b43491a201cce7cb8c925b2"}
Mar 12 20:55:15.485296 master-0 kubenswrapper[7484]: I0312 20:55:15.484748 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-f2kg4" event={"ID":"96bd86df-2101-47f5-844b-1332261c66f1","Type":"ContainerStarted","Data":"249a7dffa361592f6c3fc3dfb8d871762e2347411c14fdf281e698f89aa84b04"}
Mar 12 20:55:15.485296 master-0 kubenswrapper[7484]: I0312 20:55:15.484761 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"954fe7f9-e138-49ab-ab8e-504b75914100","Type":"ContainerDied","Data":"53ca9cb8afb78daa40b60fb8598538d996992c55bbb55bf6668f862728b14188"}
Mar 12 20:55:15.485296 master-0 kubenswrapper[7484]: I0312 20:55:15.484774 7484 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53ca9cb8afb78daa40b60fb8598538d996992c55bbb55bf6668f862728b14188"
Mar 12 20:55:15.485296 master-0 kubenswrapper[7484]: I0312 20:55:15.484787 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-jwthf" event={"ID":"15ebfbd8-0782-431a-88a3-83af328498d2","Type":"ContainerStarted","Data":"e30269190a498d005bfdfd571d5482a0e8b4091c328fc5801393ccfec9968c4e"}
Mar 12 20:55:15.485296 master-0 kubenswrapper[7484]: I0312 20:55:15.484799 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"c6140b342e454560e27bc37359b130097e81f913d9eb4fdb50381c726897af14"}
Mar 12 20:55:15.485296 master-0 kubenswrapper[7484]: I0312 20:55:15.484834 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-vp2hs" event={"ID":"7623a5c6-47a9-4b75-bda8-c0a2d7c67272","Type":"ContainerStarted","Data":"d768bc84b40192023bb465579879b2b58033844ecac405b3a22bcb789eb76d17"}
Mar 12 20:55:15.485296 master-0 kubenswrapper[7484]: I0312 20:55:15.484847 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-56nzk" event={"ID":"784599a3-a2ac-46ac-a4b7-9439704646cc","Type":"ContainerStarted","Data":"a4633cfb7d2ad7f15514161df19eedc1d6845ebaf43de93b15155efa464819c1"}
Mar 12 20:55:15.485296 master-0 kubenswrapper[7484]: I0312 20:55:15.484862 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" event={"ID":"6d28f095-032b-47d4-b808-1502deeffee5","Type":"ContainerStarted","Data":"59b4ecaa3eedf20f90ff4f437a227a7eff0e617269f5faf6807fb533207b0134"}
Mar 12 20:55:15.485296 master-0 kubenswrapper[7484]: I0312 20:55:15.484875 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" event={"ID":"a3bebf49-1d92-4353-b84c-91ed86b7bb94","Type":"ContainerStarted","Data":"756d13f35765c4b9f3b369f1d336b59d4b4e9cf7121b9f568dcb3f14475a2f8f"}
Mar 12 20:55:15.485296 master-0 kubenswrapper[7484]: I0312 20:55:15.484887 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w" event={"ID":"d4a162d4-8086-4bcf-854d-7e6cd37fd4c7","Type":"ContainerStarted","Data":"0bd6a0b7ed84e5c57f80585b12035a2addd846361d63e97d5c4b6e34bb41dd20"}
Mar 12 20:55:15.485296 master-0 kubenswrapper[7484]: I0312 20:55:15.484899 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t" event={"ID":"d862a346-ec4d-46f6-a3e2-ea8759ea0111","Type":"ContainerStarted","Data":"29605d6c0d6bf29478ff9cad55789098714848ec2911515b3a1ba1a6b740cc37"}
Mar 12 20:55:15.485296 master-0 kubenswrapper[7484]: I0312 20:55:15.484911 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-f62j6" event={"ID":"a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d","Type":"ContainerStarted","Data":"083e8e2171f84572bdd5f30426ffba317f16817f3ae58d7c00019c197700b69d"}
Mar 12 20:55:15.485296 master-0 kubenswrapper[7484]: I0312 20:55:15.484923 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" event={"ID":"2b71f537-1cc2-4645-8e50-23941635457c","Type":"ContainerStarted","Data":"72247b0dd06b6af33787ec8f35afadef48c9b0d4221e98fe5435e01a0186d2bf"}
Mar 12 20:55:15.485296 master-0 kubenswrapper[7484]: I0312 20:55:15.484936 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" event={"ID":"cf33c432-db42-4c6d-8ee4-f089e5bf8203","Type":"ContainerStarted","Data":"56254e13e7b801a5fa972ca401568f81e069fab8d80a9daa794e70d67c31681f"}
Mar 12 20:55:15.485296 master-0 kubenswrapper[7484]: I0312 20:55:15.484951 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-269gt" event={"ID":"4a67ecf3-823d-4948-a5cb-8bd1eb9f259c","Type":"ContainerStarted","Data":"1d13c664a16a834bb594ce779624d3af44ce1b13763cae9c9fac074c11de4252"}
Mar 12 20:55:15.485296 master-0 kubenswrapper[7484]: I0312 20:55:15.484965 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-62t2f" event={"ID":"fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6","Type":"ContainerStarted","Data":"72fca1fe5edaa514a27832ab602fe41af2b798cb5366c953a186e585a0605c57"}
Mar 12 20:55:15.485296 master-0 kubenswrapper[7484]: I0312 20:55:15.484980 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-48hk7" event={"ID":"426efd5c-69e1-43e5-835a-6e1c4ef85720","Type":"ContainerStarted","Data":"26bae4b1151179f8943350ed41cce4211f30fc7d0bc576d35eb657f821dc0907"}
Mar 12 20:55:15.485296 master-0 kubenswrapper[7484]: I0312 20:55:15.485018 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" event={"ID":"8b96dd10-18a0-49f8-b488-63fc2b23da39","Type":"ContainerStarted","Data":"41630d24dfd109bc636aa9398130da834c84ba29e895cfce030b4e66d9af23d1"}
Mar 12 20:55:15.485296 master-0 kubenswrapper[7484]: I0312 20:55:15.485033 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"908a8cc2f3bc351202dab9b410d70888335d0f357ad01e6cdd7f4cdf90adf703"}
Mar 12 20:55:15.485296 master-0 kubenswrapper[7484]: I0312 20:55:15.485047 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"7fd269d6a8eb44e1a4790cb72966b4a0534f7af1aa471591ccb71a946b3ca40d"}
Mar 12 20:55:15.485296 master-0 kubenswrapper[7484]: I0312 20:55:15.485063 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"f73db7800402cb358e0d79e90095c60120f55db64b8d66594c7d386be4916a3c"}
Mar 12 20:55:15.485296 master-0 kubenswrapper[7484]: I0312 20:55:15.485079 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"0b1ad30ea0b6c41c6f1eb7bd3de3eda3e9f404e7c25c08138d7b4b1893fec5eb"}
Mar 12 20:55:15.485296 master-0 kubenswrapper[7484]: I0312 20:55:15.485096 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"d69ef5a9682c286db49162800e6bbc8a372fbb8bc9c781af56f0f61a5109903e"}
Mar 12 20:55:15.485296 master-0 kubenswrapper[7484]: I0312 20:55:15.485112 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w" event={"ID":"d4a162d4-8086-4bcf-854d-7e6cd37fd4c7","Type":"ContainerDied","Data":"0bd6a0b7ed84e5c57f80585b12035a2addd846361d63e97d5c4b6e34bb41dd20"}
Mar 12 20:55:15.486490 master-0 kubenswrapper[7484]: I0312 20:55:15.486077 7484 scope.go:117] "RemoveContainer" containerID="0bd6a0b7ed84e5c57f80585b12035a2addd846361d63e97d5c4b6e34bb41dd20"
Mar 12 20:55:15.486490 master-0 kubenswrapper[7484]: E0312 20:55:15.486321 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-8fk8w_openshift-cluster-storage-operator(d4a162d4-8086-4bcf-854d-7e6cd37fd4c7)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w" podUID="d4a162d4-8086-4bcf-854d-7e6cd37fd4c7"
Mar 12 20:55:15.516269 master-0 kubenswrapper[7484]: I0312 20:55:15.516198 7484 scope.go:117] "RemoveContainer" containerID="02ac5a41fa86c8da3e61fb0bb8e9e0588bab913b801b90fc424e7e3abaf59e02"
Mar 12 20:55:15.548601 master-0 kubenswrapper[7484]: I0312 20:55:15.548533 7484 scope.go:117] "RemoveContainer" containerID="803468c92847be9ff6518c968fe3dd17c7c93344e029130c1bfa6744bc5862bf"
Mar 12 20:55:15.573002 master-0 kubenswrapper[7484]: I0312 20:55:15.572927 7484 scope.go:117] "RemoveContainer" containerID="0baf639c5d46bafa134b35ec6bda1e04194915bf6f2fc74defffc294b859ab5d"
Mar 12 20:55:15.602914 master-0 kubenswrapper[7484]: I0312 20:55:15.602734 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-94rll" podStartSLOduration=247.904104986 podStartE2EDuration="4m30.602702557s" podCreationTimestamp="2026-03-12 20:50:45 +0000 UTC" firstStartedPulling="2026-03-12 20:50:47.030614136 +0000 UTC m=+59.515882938" lastFinishedPulling="2026-03-12 20:51:09.729211697 +0000 UTC m=+82.214480509" observedRunningTime="2026-03-12 20:55:15.600162315 +0000 UTC m=+328.085431157" watchObservedRunningTime="2026-03-12 20:55:15.602702557 +0000 UTC m=+328.087971369"
Mar 12 20:55:15.603568 master-0 kubenswrapper[7484]: I0312 20:55:15.603507 7484 scope.go:117] "RemoveContainer" containerID="e29fe78e5f8c5908626647267abeb52f63244162e122261e67a929d3a95210d9"
Mar 12 20:55:15.631710 master-0 kubenswrapper[7484]: I0312 20:55:15.631618 7484 scope.go:117] "RemoveContainer" containerID="02ac5a41fa86c8da3e61fb0bb8e9e0588bab913b801b90fc424e7e3abaf59e02"
Mar 12 20:55:15.632396 master-0 kubenswrapper[7484]: E0312 20:55:15.632354 7484 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02ac5a41fa86c8da3e61fb0bb8e9e0588bab913b801b90fc424e7e3abaf59e02\": container with ID starting with 02ac5a41fa86c8da3e61fb0bb8e9e0588bab913b801b90fc424e7e3abaf59e02 not found: ID does not exist" containerID="02ac5a41fa86c8da3e61fb0bb8e9e0588bab913b801b90fc424e7e3abaf59e02"
Mar 12 20:55:15.632580 master-0 kubenswrapper[7484]: I0312 20:55:15.632540 7484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02ac5a41fa86c8da3e61fb0bb8e9e0588bab913b801b90fc424e7e3abaf59e02"} err="failed to get container status \"02ac5a41fa86c8da3e61fb0bb8e9e0588bab913b801b90fc424e7e3abaf59e02\": rpc error: code = NotFound desc = could not find container \"02ac5a41fa86c8da3e61fb0bb8e9e0588bab913b801b90fc424e7e3abaf59e02\": container with ID starting with 02ac5a41fa86c8da3e61fb0bb8e9e0588bab913b801b90fc424e7e3abaf59e02 not found: ID does not exist"
Mar 12 20:55:15.632728 master-0 kubenswrapper[7484]: I0312 20:55:15.632708 7484 scope.go:117] "RemoveContainer" containerID="803468c92847be9ff6518c968fe3dd17c7c93344e029130c1bfa6744bc5862bf"
Mar 12 20:55:15.637947 master-0 kubenswrapper[7484]: E0312 20:55:15.637347 7484 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"803468c92847be9ff6518c968fe3dd17c7c93344e029130c1bfa6744bc5862bf\": container with ID starting with 803468c92847be9ff6518c968fe3dd17c7c93344e029130c1bfa6744bc5862bf not found: ID does not exist" containerID="803468c92847be9ff6518c968fe3dd17c7c93344e029130c1bfa6744bc5862bf"
Mar 12 20:55:15.637947 master-0 kubenswrapper[7484]: I0312 20:55:15.637411 7484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"803468c92847be9ff6518c968fe3dd17c7c93344e029130c1bfa6744bc5862bf"} err="failed to get container status \"803468c92847be9ff6518c968fe3dd17c7c93344e029130c1bfa6744bc5862bf\": rpc error: code = NotFound desc = could not find container \"803468c92847be9ff6518c968fe3dd17c7c93344e029130c1bfa6744bc5862bf\": container with ID starting with 803468c92847be9ff6518c968fe3dd17c7c93344e029130c1bfa6744bc5862bf not found: ID does not exist"
Mar 12 20:55:15.637947 master-0 kubenswrapper[7484]: I0312 20:55:15.637445 7484 scope.go:117] "RemoveContainer" containerID="31932c207919d9fa7ba649bcc3b67b43788d2b23969a14459b9233c510ac6567"
Mar 12 20:55:15.664767 master-0 kubenswrapper[7484]: I0312 20:55:15.664223 7484 scope.go:117] "RemoveContainer" containerID="a33a2903577092cf3a1f9c908ef309b6542edd2a9918f17c9c5bfb3802991a1e"
Mar 12 20:55:15.693647 master-0 kubenswrapper[7484]: I0312 20:55:15.693592 7484 scope.go:117] "RemoveContainer" containerID="4f12cf8d8d8d0087f11b9de5f5568886404da4081c2e2727f07a95ca8191d1c6"
Mar 12 20:55:15.694158 master-0 kubenswrapper[7484]: E0312 20:55:15.694109 7484 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f12cf8d8d8d0087f11b9de5f5568886404da4081c2e2727f07a95ca8191d1c6\": container with ID starting with 4f12cf8d8d8d0087f11b9de5f5568886404da4081c2e2727f07a95ca8191d1c6 not found: ID does not exist" containerID="4f12cf8d8d8d0087f11b9de5f5568886404da4081c2e2727f07a95ca8191d1c6"
Mar 12 20:55:15.694279 master-0 kubenswrapper[7484]: I0312 20:55:15.694148 7484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f12cf8d8d8d0087f11b9de5f5568886404da4081c2e2727f07a95ca8191d1c6"} err="failed to get container status \"4f12cf8d8d8d0087f11b9de5f5568886404da4081c2e2727f07a95ca8191d1c6\": rpc error: code = NotFound desc = could not find container \"4f12cf8d8d8d0087f11b9de5f5568886404da4081c2e2727f07a95ca8191d1c6\": container with ID starting with 4f12cf8d8d8d0087f11b9de5f5568886404da4081c2e2727f07a95ca8191d1c6 not found: ID does not exist"
Mar 12 20:55:15.694279 master-0 kubenswrapper[7484]: I0312 20:55:15.694173 7484 scope.go:117] "RemoveContainer" containerID="2e532f48874103782c7daee8f162358860ddd2173af37648f345faae82db17a2"
Mar 12 20:55:15.725325 master-0 kubenswrapper[7484]: I0312 20:55:15.725295 7484 scope.go:117] "RemoveContainer" containerID="e29fe78e5f8c5908626647267abeb52f63244162e122261e67a929d3a95210d9"
Mar 12 20:55:15.725977 master-0 kubenswrapper[7484]: E0312 20:55:15.725952 7484 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e29fe78e5f8c5908626647267abeb52f63244162e122261e67a929d3a95210d9\": container with ID starting with e29fe78e5f8c5908626647267abeb52f63244162e122261e67a929d3a95210d9 not found: ID does not exist" containerID="e29fe78e5f8c5908626647267abeb52f63244162e122261e67a929d3a95210d9"
Mar 12 20:55:15.726114 master-0 kubenswrapper[7484]: I0312 20:55:15.726084 7484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e29fe78e5f8c5908626647267abeb52f63244162e122261e67a929d3a95210d9"} err="failed to get container status \"e29fe78e5f8c5908626647267abeb52f63244162e122261e67a929d3a95210d9\": rpc error: code = NotFound desc = could not find container \"e29fe78e5f8c5908626647267abeb52f63244162e122261e67a929d3a95210d9\": container with ID starting with e29fe78e5f8c5908626647267abeb52f63244162e122261e67a929d3a95210d9 not found: ID does not exist"
Mar 12 20:55:15.921134 master-0 kubenswrapper[7484]: I0312 20:55:15.920997 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jblsg" podStartSLOduration=249.473203986 podStartE2EDuration="4m31.920969446s" podCreationTimestamp="2026-03-12 20:50:44 +0000 UTC" firstStartedPulling="2026-03-12 20:50:46.015651137 +0000 UTC m=+58.500919939" lastFinishedPulling="2026-03-12 20:51:08.463416587 +0000 UTC m=+80.948685399" observedRunningTime="2026-03-12 20:55:15.91709287 +0000 UTC m=+328.402361762" watchObservedRunningTime="2026-03-12 20:55:15.920969446 +0000 UTC m=+328.406238288"
Mar 12 20:55:16.005517 master-0 kubenswrapper[7484]: I0312 20:55:16.005115 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-xzwfp" podStartSLOduration=266.913416401 podStartE2EDuration="4m30.005082495s" podCreationTimestamp="2026-03-12 20:50:46 +0000 UTC" firstStartedPulling="2026-03-12 20:50:47.164289641 +0000 UTC m=+59.649558443" lastFinishedPulling="2026-03-12 20:50:50.255955735 +0000 UTC m=+62.741224537" observedRunningTime="2026-03-12 20:55:16.002574883 +0000 UTC m=+328.487843755" watchObservedRunningTime="2026-03-12 20:55:16.005082495 +0000 UTC m=+328.490351337"
Mar 12 20:55:16.113371 master-0 kubenswrapper[7484]: I0312 20:55:16.113293 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 12 20:55:16.117858 master-0 kubenswrapper[7484]: I0312 20:55:16.117745 7484 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 12 20:55:16.154142 master-0 kubenswrapper[7484]: I0312 20:55:16.154063 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Mar 12 20:55:16.157858 master-0 kubenswrapper[7484]: I0312 20:55:16.157743 7484 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Mar 12 20:55:16.170866 master-0 kubenswrapper[7484]: I0312 20:55:16.169152 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Mar 12 20:55:16.251377 master-0 kubenswrapper[7484]: I0312 20:55:16.251046 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-66qvj" podStartSLOduration=247.462035212 podStartE2EDuration="4m30.251015844s" podCreationTimestamp="2026-03-12 20:50:46 +0000 UTC" firstStartedPulling="2026-03-12 20:50:48.079299783 +0000 UTC m=+60.564568595" lastFinishedPulling="2026-03-12 20:51:10.868280385 +0000 UTC m=+83.353549227" observedRunningTime="2026-03-12 20:55:16.248791489 +0000 UTC m=+328.734060331" watchObservedRunningTime="2026-03-12 20:55:16.251015844 +0000 UTC m=+328.736284656"
Mar 12 20:55:16.414907 master-0 kubenswrapper[7484]: I0312 20:55:16.414852 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 20:55:16.415992 master-0 kubenswrapper[7484]: I0312 20:55:16.415938 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-vp2hs_7623a5c6-47a9-4b75-bda8-c0a2d7c67272/openshift-controller-manager-operator/1.log"
Mar 12 20:55:16.424228 master-0 kubenswrapper[7484]: I0312 20:55:16.423343 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-7f65c457f5-qfbrj_07542516-49c8-4e20-9b97-798fbff850a5/kube-storage-version-migrator-operator/1.log"
Mar 12 20:55:16.427620 master-0 kubenswrapper[7484]: I0312 20:55:16.427572 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-69b6fc6b88-f62j6_a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d/service-ca-operator/1.log"
Mar 12 20:55:16.431390 master-0 kubenswrapper[7484]: I0312 20:55:16.431336 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-9j7rx_a3bebf49-1d92-4353-b84c-91ed86b7bb94/authentication-operator/1.log"
Mar 12 20:55:16.434838 master-0 kubenswrapper[7484]: I0312 20:55:16.434756 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-799b6db4d7-jwthf_15ebfbd8-0782-431a-88a3-83af328498d2/openshift-apiserver-operator/1.log"
Mar 12 20:55:16.441255 master-0 kubenswrapper[7484]: I0312 20:55:16.441206 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-8fk8w_d4a162d4-8086-4bcf-854d-7e6cd37fd4c7/snapshot-controller/1.log"
Mar 12 20:55:16.441997 master-0 kubenswrapper[7484]: I0312 20:55:16.441899 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n"
Mar 12 20:55:16.442210 master-0 kubenswrapper[7484]: I0312 20:55:16.442179 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86"
Mar 12 20:55:16.444568 master-0 kubenswrapper[7484]: I0312 20:55:16.444398 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n"
Mar 12 20:55:16.451340 master-0 kubenswrapper[7484]: I0312 20:55:16.451213 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86"
Mar 12 20:55:17.742918 master-0 kubenswrapper[7484]: I0312 20:55:17.742874 7484 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bec49ae-0c52-451f-8d8d-6e822cd335cc" path="/var/lib/kubelet/pods/5bec49ae-0c52-451f-8d8d-6e822cd335cc/volumes"
Mar 12 20:55:17.744039 master-0 kubenswrapper[7484]: I0312 20:55:17.744021 7484 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a35e2486-4d5e-43e5-89c0-c562002717bb" path="/var/lib/kubelet/pods/a35e2486-4d5e-43e5-89c0-c562002717bb/volumes"
Mar 12 20:55:18.146145 master-0 kubenswrapper[7484]: E0312 20:55:18.146013 7484 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-2-master-0: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 12 20:55:18.146145 master-0 kubenswrapper[7484]: E0312 20:55:18.146134 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/367123ca-5a21-415c-8ac2-6d875696536b-kube-api-access podName:367123ca-5a21-415c-8ac2-6d875696536b nodeName:}" failed. No retries permitted until 2026-03-12 20:55:50.146112409 +0000 UTC m=+362.631381231 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/367123ca-5a21-415c-8ac2-6d875696536b-kube-api-access") pod "installer-2-master-0" (UID: "367123ca-5a21-415c-8ac2-6d875696536b") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 12 20:55:18.246635 master-0 kubenswrapper[7484]: E0312 20:55:18.246558 7484 projected.go:194] Error preparing data for projected volume kube-api-access-4rthf for pod openshift-marketplace/redhat-operators-lbgrl: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 12 20:55:18.246971 master-0 kubenswrapper[7484]: E0312 20:55:18.246675 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-kube-api-access-4rthf podName:2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0 nodeName:}" failed. No retries permitted until 2026-03-12 20:55:50.246649242 +0000 UTC m=+362.731918054 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-4rthf" (UniqueName: "kubernetes.io/projected/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-kube-api-access-4rthf") pod "redhat-operators-lbgrl" (UID: "2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 12 20:55:18.696094 master-0 kubenswrapper[7484]: E0312 20:55:18.695988 7484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 12 20:55:18.701478 master-0 kubenswrapper[7484]: I0312 20:55:18.701438 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 12 20:55:19.415963 master-0 kubenswrapper[7484]: I0312 20:55:19.415850 7484 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 12 20:55:20.458152 master-0 kubenswrapper[7484]: E0312 20:55:20.456951 7484 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{redhat-marketplace-66qvj.189c33360d1c0398 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-66qvj,UID:d6eace9f-a52d-4570-a932-959538e1f2bc,APIVersion:v1,ResourceVersion:9033,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\" in 12.438s (12.438s including waiting). Image size: 1231028434 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:51:00.518323096 +0000 UTC m=+73.003591938,LastTimestamp:2026-03-12 20:51:00.518323096 +0000 UTC m=+73.003591938,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 12 20:55:21.120283 master-0 kubenswrapper[7484]: I0312 20:55:21.120194 7484 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-9j7rx container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" start-of-body=
Mar 12 20:55:21.120283 master-0 kubenswrapper[7484]: I0312 20:55:21.120285 7484 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" podUID="a3bebf49-1d92-4353-b84c-91ed86b7bb94" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused"
Mar 12 20:55:21.169233 master-0 kubenswrapper[7484]: I0312 20:55:21.169107 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Mar 12 20:55:21.201503 master-0 kubenswrapper[7484]: I0312 20:55:21.201391 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Mar 12 20:55:21.354519 master-0 kubenswrapper[7484]: E0312
20:55:21.354189 7484 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T20:55:11Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T20:55:11Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T20:55:11Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T20:55:11Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1fce8b5c6b0206ecb4ddc7de47062bed853b88d4e34415e9e5a2a6bc99cf6aad\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:8bd0ffcb6caac4a5d03346b5f7cdfaf2f6f9f9d0a30deff8f216e6cb63b0ee75\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1282704097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:08bf2da4079dafb9d9fc0718c48ed509adab6b030e9c85e3bbd21d2702ab894e\\\",\\\"registry.redhat.io/redhat/community-operator-
index@sha256:cf0470f46da209c10a63329feddb7afca3d04a9084fbf1a0755a3302e5c102ca\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221753567},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c
0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4a
c108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e9ee63a30a9b95b5801afa36e09fc583ec2cda3c5cb3c8676e478fea016abfa1\\\"],\\\"sizeBytes\\\":470680779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda\\\"],\\\"sizeBytes\\\":468263999},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\\\"],\\\"sizeBytes\\\":465086330},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1\\\"],\\\"sizeBytes\\\":463700811},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b714a7ada1e295b599b432f32e1fd5b74c8cdbe6fe51e95306322b25cb873914\\\"],\\\"sizeBytes\\\":458126424},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5230462066ab36e3025524e948dd33fa6f51ee29a4f91fa469bfc268568b5fd9\\\"],\\\"sizeBytes\\\":456575686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89cb093f319eaa04acfe9431b8697bffbc71ab670546f7ed257daa332165c626\\\"],\\\"sizeBytes\\\":448828105},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783\\\"],\\\"sizeBytes\\\":448041621},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf9670d0f269f8d49fd9ef4981999be195f6624a4146aa93d9201eb8acc81053\\\"],\\\"sizeBytes\\\":443271011}]}}\" for node \"master-0\": Patch 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 20:55:23.674209 master-0 kubenswrapper[7484]: I0312 20:55:23.674154 7484 patch_prober.go:28] interesting pod/etcd-operator-5884b9cd56-xh6r9 container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.16:8443/healthz\": net/http: TLS handshake timeout" start-of-body= Mar 12 20:55:23.674745 master-0 kubenswrapper[7484]: I0312 20:55:23.674242 7484 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" podUID="5471994f-769e-4124-b7d0-01f5358fc18f" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.16:8443/healthz\": net/http: TLS handshake timeout" Mar 12 20:55:25.493187 master-0 kubenswrapper[7484]: I0312 20:55:25.492968 7484 generic.go:334] "Generic (PLEG): container finished" podID="2604b035-853c-42b7-a562-07d46178868a" containerID="6afc544c34ddbc5e6039dbdbeff607333e002100669f75e0bf5ff219b035f729" exitCode=0 Mar 12 20:55:25.493187 master-0 kubenswrapper[7484]: I0312 20:55:25.493062 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-kf949" event={"ID":"2604b035-853c-42b7-a562-07d46178868a","Type":"ContainerDied","Data":"6afc544c34ddbc5e6039dbdbeff607333e002100669f75e0bf5ff219b035f729"} Mar 12 20:55:25.493955 master-0 kubenswrapper[7484]: I0312 20:55:25.493882 7484 scope.go:117] "RemoveContainer" containerID="6afc544c34ddbc5e6039dbdbeff607333e002100669f75e0bf5ff219b035f729" Mar 12 20:55:26.190961 master-0 kubenswrapper[7484]: I0312 20:55:26.190887 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 12 20:55:26.504116 master-0 kubenswrapper[7484]: I0312 20:55:26.503875 7484 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-kf949" event={"ID":"2604b035-853c-42b7-a562-07d46178868a","Type":"ContainerStarted","Data":"4c1c1c1b8851a87caaa47906af218c648432043d5537dde4d7c6aa9df599a39a"} Mar 12 20:55:27.737036 master-0 kubenswrapper[7484]: I0312 20:55:27.736972 7484 scope.go:117] "RemoveContainer" containerID="0bd6a0b7ed84e5c57f80585b12035a2addd846361d63e97d5c4b6e34bb41dd20" Mar 12 20:55:28.496023 master-0 kubenswrapper[7484]: E0312 20:55:28.495798 7484 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 12 20:55:28.520048 master-0 kubenswrapper[7484]: I0312 20:55:28.520008 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-8fk8w_d4a162d4-8086-4bcf-854d-7e6cd37fd4c7/snapshot-controller/1.log" Mar 12 20:55:28.520403 master-0 kubenswrapper[7484]: I0312 20:55:28.520327 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w" event={"ID":"d4a162d4-8086-4bcf-854d-7e6cd37fd4c7","Type":"ContainerStarted","Data":"a61af5ddc801fc82532787a8099d3f864174adef92d53c028151cb9ec9d021a1"} Mar 12 20:55:29.155823 master-0 kubenswrapper[7484]: I0312 20:55:29.155667 7484 prober.go:107] "Probe failed" probeType="Readiness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused" Mar 12 20:55:29.323717 master-0 kubenswrapper[7484]: I0312 20:55:29.323588 7484 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-zsd76 container/openshift-config-operator namespace/openshift-config-operator: Liveness 
probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Mar 12 20:55:29.324111 master-0 kubenswrapper[7484]: I0312 20:55:29.323715 7484 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" podUID="980191fe-c62c-4b9e-879c-38fa8ce0a58b" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Mar 12 20:55:29.415928 master-0 kubenswrapper[7484]: I0312 20:55:29.415549 7484 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 20:55:29.530675 master-0 kubenswrapper[7484]: I0312 20:55:29.530573 7484 generic.go:334] "Generic (PLEG): container finished" podID="135ec6f3-fbc0-4840-a4b1-c1124c705161" containerID="15d0d26804c9c80b6799cf88166882aaa90b3995069ea002665cca02980190e3" exitCode=0 Mar 12 20:55:29.530675 master-0 kubenswrapper[7484]: I0312 20:55:29.530648 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-4zjqp" event={"ID":"135ec6f3-fbc0-4840-a4b1-c1124c705161","Type":"ContainerDied","Data":"15d0d26804c9c80b6799cf88166882aaa90b3995069ea002665cca02980190e3"} Mar 12 20:55:29.531339 master-0 kubenswrapper[7484]: I0312 20:55:29.531301 7484 scope.go:117] "RemoveContainer" containerID="15d0d26804c9c80b6799cf88166882aaa90b3995069ea002665cca02980190e3" Mar 12 20:55:29.534058 master-0 kubenswrapper[7484]: I0312 20:55:29.534006 7484 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-69rp9_981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9/cluster-node-tuning-operator/0.log" Mar 12 20:55:29.534180 master-0 kubenswrapper[7484]: I0312 20:55:29.534109 7484 generic.go:334] "Generic (PLEG): container finished" podID="981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9" containerID="ab35500d408324bc8f259a25814698a0950deafc4c75bcf972576200d718f280" exitCode=1 Mar 12 20:55:29.534359 master-0 kubenswrapper[7484]: I0312 20:55:29.534216 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" event={"ID":"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9","Type":"ContainerDied","Data":"ab35500d408324bc8f259a25814698a0950deafc4c75bcf972576200d718f280"} Mar 12 20:55:29.535238 master-0 kubenswrapper[7484]: I0312 20:55:29.534727 7484 scope.go:117] "RemoveContainer" containerID="ab35500d408324bc8f259a25814698a0950deafc4c75bcf972576200d718f280" Mar 12 20:55:29.537066 master-0 kubenswrapper[7484]: I0312 20:55:29.536984 7484 generic.go:334] "Generic (PLEG): container finished" podID="900228dd-2d21-4759-87da-b027b0134ad8" containerID="86833dd41b14e8094351920793b00866703e058d522b46fbdbf250fbcc14c834" exitCode=0 Mar 12 20:55:29.537169 master-0 kubenswrapper[7484]: I0312 20:55:29.537055 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5" event={"ID":"900228dd-2d21-4759-87da-b027b0134ad8","Type":"ContainerDied","Data":"86833dd41b14e8094351920793b00866703e058d522b46fbdbf250fbcc14c834"} Mar 12 20:55:29.537737 master-0 kubenswrapper[7484]: I0312 20:55:29.537687 7484 scope.go:117] "RemoveContainer" containerID="86833dd41b14e8094351920793b00866703e058d522b46fbdbf250fbcc14c834" Mar 12 20:55:29.542565 master-0 kubenswrapper[7484]: I0312 20:55:29.541593 7484 generic.go:334] "Generic (PLEG): container finished" 
podID="980191fe-c62c-4b9e-879c-38fa8ce0a58b" containerID="9fe9854a1e57408e0f50e0954b9dd49841bab1b9d1e76d61252c031948eff8b1" exitCode=0 Mar 12 20:55:29.542565 master-0 kubenswrapper[7484]: I0312 20:55:29.541649 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" event={"ID":"980191fe-c62c-4b9e-879c-38fa8ce0a58b","Type":"ContainerDied","Data":"9fe9854a1e57408e0f50e0954b9dd49841bab1b9d1e76d61252c031948eff8b1"} Mar 12 20:55:29.542565 master-0 kubenswrapper[7484]: I0312 20:55:29.541715 7484 scope.go:117] "RemoveContainer" containerID="304a25d963544d2c18d9e9c47ad4423b6984ff4ce290c819f6e1953a03bd9e6b" Mar 12 20:55:29.542565 master-0 kubenswrapper[7484]: I0312 20:55:29.542211 7484 scope.go:117] "RemoveContainer" containerID="9fe9854a1e57408e0f50e0954b9dd49841bab1b9d1e76d61252c031948eff8b1" Mar 12 20:55:29.542565 master-0 kubenswrapper[7484]: E0312 20:55:29.542510 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-config-operator pod=openshift-config-operator-64488f9d78-zsd76_openshift-config-operator(980191fe-c62c-4b9e-879c-38fa8ce0a58b)\"" pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" podUID="980191fe-c62c-4b9e-879c-38fa8ce0a58b" Mar 12 20:55:29.560367 master-0 kubenswrapper[7484]: I0312 20:55:29.560301 7484 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="84660712d67f9b33ee49163616de93d3b9e986937307a8e3781dd5d5f489844c" exitCode=0 Mar 12 20:55:29.560554 master-0 kubenswrapper[7484]: I0312 20:55:29.560502 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"84660712d67f9b33ee49163616de93d3b9e986937307a8e3781dd5d5f489844c"} 
Mar 12 20:55:29.561631 master-0 kubenswrapper[7484]: I0312 20:55:29.561581 7484 scope.go:117] "RemoveContainer" containerID="84660712d67f9b33ee49163616de93d3b9e986937307a8e3781dd5d5f489844c" Mar 12 20:55:29.577250 master-0 kubenswrapper[7484]: I0312 20:55:29.577191 7484 generic.go:334] "Generic (PLEG): container finished" podID="226cb3a1-984f-4410-96e6-c007131dc074" containerID="bd647ed768dc3b1c577a2e60500ea1b4e6063ec0776cd15c9345ee26565e55c6" exitCode=0 Mar 12 20:55:29.577363 master-0 kubenswrapper[7484]: I0312 20:55:29.577246 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh" event={"ID":"226cb3a1-984f-4410-96e6-c007131dc074","Type":"ContainerDied","Data":"bd647ed768dc3b1c577a2e60500ea1b4e6063ec0776cd15c9345ee26565e55c6"} Mar 12 20:55:29.578031 master-0 kubenswrapper[7484]: I0312 20:55:29.577985 7484 scope.go:117] "RemoveContainer" containerID="bd647ed768dc3b1c577a2e60500ea1b4e6063ec0776cd15c9345ee26565e55c6" Mar 12 20:55:29.578410 master-0 kubenswrapper[7484]: E0312 20:55:29.578358 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-olm-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-olm-operator pod=cluster-olm-operator-77899cf6d-kbwlh_openshift-cluster-olm-operator(226cb3a1-984f-4410-96e6-c007131dc074)\"" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh" podUID="226cb3a1-984f-4410-96e6-c007131dc074" Mar 12 20:55:29.591772 master-0 kubenswrapper[7484]: I0312 20:55:29.591736 7484 scope.go:117] "RemoveContainer" containerID="01e107c0f774c1f8391b548269ef79446449d21fef49690cb86fce489a21f185" Mar 12 20:55:29.967503 master-0 kubenswrapper[7484]: I0312 20:55:29.967425 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" Mar 12 20:55:30.597003 master-0 kubenswrapper[7484]: I0312 
20:55:30.596929 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-4zjqp" event={"ID":"135ec6f3-fbc0-4840-a4b1-c1124c705161","Type":"ContainerStarted","Data":"46ded837719c01c62e0a027c72064dacb46bd2417ff8fe1a0f12a339ce0c296a"} Mar 12 20:55:30.601553 master-0 kubenswrapper[7484]: I0312 20:55:30.601497 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-69rp9_981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9/cluster-node-tuning-operator/0.log" Mar 12 20:55:30.601744 master-0 kubenswrapper[7484]: I0312 20:55:30.601689 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" event={"ID":"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9","Type":"ContainerStarted","Data":"1152dcaad32a43ba9e378941f51d853a2e7fc508d86ad05335f3c348f68fdd30"} Mar 12 20:55:30.611713 master-0 kubenswrapper[7484]: I0312 20:55:30.611661 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5" event={"ID":"900228dd-2d21-4759-87da-b027b0134ad8","Type":"ContainerStarted","Data":"1746524fbf252ae2860d518e4df6a02c7aaf28a067d9493a2d0daedd8741f97f"} Mar 12 20:55:30.617281 master-0 kubenswrapper[7484]: I0312 20:55:30.617248 7484 scope.go:117] "RemoveContainer" containerID="9fe9854a1e57408e0f50e0954b9dd49841bab1b9d1e76d61252c031948eff8b1" Mar 12 20:55:30.617553 master-0 kubenswrapper[7484]: E0312 20:55:30.617519 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-config-operator pod=openshift-config-operator-64488f9d78-zsd76_openshift-config-operator(980191fe-c62c-4b9e-879c-38fa8ce0a58b)\"" pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" 
podUID="980191fe-c62c-4b9e-879c-38fa8ce0a58b" Mar 12 20:55:30.623701 master-0 kubenswrapper[7484]: I0312 20:55:30.623642 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"473010500c0fd5755ad97dc462629b8580e55a87fa11be411bf25911be941443"} Mar 12 20:55:31.355482 master-0 kubenswrapper[7484]: E0312 20:55:31.355345 7484 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 20:55:32.118789 master-0 kubenswrapper[7484]: I0312 20:55:32.118607 7484 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-9j7rx container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 20:55:32.119875 master-0 kubenswrapper[7484]: I0312 20:55:32.118770 7484 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" podUID="a3bebf49-1d92-4353-b84c-91ed86b7bb94" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 20:55:32.322799 master-0 kubenswrapper[7484]: I0312 20:55:32.322643 7484 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" Mar 12 20:55:32.323696 master-0 kubenswrapper[7484]: I0312 20:55:32.323629 7484 scope.go:117] "RemoveContainer" 
containerID="9fe9854a1e57408e0f50e0954b9dd49841bab1b9d1e76d61252c031948eff8b1" Mar 12 20:55:32.324135 master-0 kubenswrapper[7484]: E0312 20:55:32.324055 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-config-operator pod=openshift-config-operator-64488f9d78-zsd76_openshift-config-operator(980191fe-c62c-4b9e-879c-38fa8ce0a58b)\"" pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" podUID="980191fe-c62c-4b9e-879c-38fa8ce0a58b" Mar 12 20:55:32.857203 master-0 kubenswrapper[7484]: I0312 20:55:32.857073 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:55:33.579722 master-0 kubenswrapper[7484]: I0312 20:55:33.579658 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:55:33.643107 master-0 kubenswrapper[7484]: I0312 20:55:33.643028 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:55:36.421989 master-0 kubenswrapper[7484]: I0312 20:55:36.421874 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:55:36.428293 master-0 kubenswrapper[7484]: I0312 20:55:36.428229 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:55:39.163043 master-0 kubenswrapper[7484]: I0312 20:55:39.162881 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:55:40.734033 master-0 kubenswrapper[7484]: I0312 20:55:40.733946 7484 scope.go:117] "RemoveContainer" 
containerID="bd647ed768dc3b1c577a2e60500ea1b4e6063ec0776cd15c9345ee26565e55c6" Mar 12 20:55:41.356242 master-0 kubenswrapper[7484]: E0312 20:55:41.356101 7484 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 20:55:41.530514 master-0 kubenswrapper[7484]: E0312 20:55:41.530428 7484 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 12 20:55:41.700690 master-0 kubenswrapper[7484]: I0312 20:55:41.700471 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh" event={"ID":"226cb3a1-984f-4410-96e6-c007131dc074","Type":"ContainerStarted","Data":"eb233dad973c14b986649aa9671fed2fa87adb0d7e06e94ac63133ff5838cbbe"} Mar 12 20:55:45.733238 master-0 kubenswrapper[7484]: I0312 20:55:45.733159 7484 scope.go:117] "RemoveContainer" containerID="9fe9854a1e57408e0f50e0954b9dd49841bab1b9d1e76d61252c031948eff8b1" Mar 12 20:55:46.743743 master-0 kubenswrapper[7484]: I0312 20:55:46.743625 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" event={"ID":"980191fe-c62c-4b9e-879c-38fa8ce0a58b","Type":"ContainerStarted","Data":"812a4d4164b66d6dc3ca8693d14eb3fcdb3c84deb2faed8cede318f4eacda9e5"} Mar 12 20:55:46.744999 master-0 kubenswrapper[7484]: I0312 20:55:46.744155 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" Mar 12 20:55:50.198948 master-0 kubenswrapper[7484]: I0312 20:55:50.198795 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/367123ca-5a21-415c-8ac2-6d875696536b-kube-api-access\") pod \"installer-2-master-0\" (UID: \"367123ca-5a21-415c-8ac2-6d875696536b\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 20:55:50.300144 master-0 kubenswrapper[7484]: I0312 20:55:50.300004 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rthf\" (UniqueName: \"kubernetes.io/projected/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-kube-api-access-4rthf\") pod \"redhat-operators-lbgrl\" (UID: \"2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0\") " pod="openshift-marketplace/redhat-operators-lbgrl" Mar 12 20:55:50.973488 master-0 kubenswrapper[7484]: I0312 20:55:50.973376 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" Mar 12 20:55:51.357563 master-0 kubenswrapper[7484]: E0312 20:55:51.357438 7484 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 20:55:56.318940 master-0 kubenswrapper[7484]: I0312 20:55:56.318603 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rthf\" (UniqueName: \"kubernetes.io/projected/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-kube-api-access-4rthf\") pod \"redhat-operators-lbgrl\" (UID: \"2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0\") " pod="openshift-marketplace/redhat-operators-lbgrl" Mar 12 20:55:56.325292 master-0 kubenswrapper[7484]: I0312 20:55:56.325218 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/367123ca-5a21-415c-8ac2-6d875696536b-kube-api-access\") pod \"installer-2-master-0\" (UID: \"367123ca-5a21-415c-8ac2-6d875696536b\") " 
pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 20:55:56.432603 master-0 kubenswrapper[7484]: I0312 20:55:56.432542 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-v7qw9" Mar 12 20:55:56.442148 master-0 kubenswrapper[7484]: I0312 20:55:56.442049 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lbgrl" Mar 12 20:55:56.628127 master-0 kubenswrapper[7484]: I0312 20:55:56.626106 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-xq8cf" Mar 12 20:55:56.651479 master-0 kubenswrapper[7484]: I0312 20:55:56.651424 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 20:55:56.717412 master-0 kubenswrapper[7484]: I0312 20:55:56.706624 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lbgrl"] Mar 12 20:55:56.819451 master-0 kubenswrapper[7484]: I0312 20:55:56.817357 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbgrl" event={"ID":"2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0","Type":"ContainerStarted","Data":"5019ac6965eb599a3383c019f4aab04c33c0dd81cfbbc7c6cfee64daee23a77c"} Mar 12 20:55:57.154298 master-0 kubenswrapper[7484]: I0312 20:55:57.154237 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 12 20:55:57.160465 master-0 kubenswrapper[7484]: W0312 20:55:57.160407 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod367123ca_5a21_415c_8ac2_6d875696536b.slice/crio-37fc84c4a8eee335ea22dc095e587b155c6991b713fe7ec213d1940d68351e07 WatchSource:0}: Error finding container 37fc84c4a8eee335ea22dc095e587b155c6991b713fe7ec213d1940d68351e07: Status 404 
returned error can't find the container with id 37fc84c4a8eee335ea22dc095e587b155c6991b713fe7ec213d1940d68351e07 Mar 12 20:55:57.827959 master-0 kubenswrapper[7484]: I0312 20:55:57.827766 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"367123ca-5a21-415c-8ac2-6d875696536b","Type":"ContainerStarted","Data":"73ffa716ed0ceb1f05c1ae94138aa9510898a766a0ea47f5fb2644e437ab8da6"} Mar 12 20:55:57.827959 master-0 kubenswrapper[7484]: I0312 20:55:57.827887 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"367123ca-5a21-415c-8ac2-6d875696536b","Type":"ContainerStarted","Data":"37fc84c4a8eee335ea22dc095e587b155c6991b713fe7ec213d1940d68351e07"} Mar 12 20:55:57.830701 master-0 kubenswrapper[7484]: I0312 20:55:57.830627 7484 generic.go:334] "Generic (PLEG): container finished" podID="2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0" containerID="eaf7aa8e44258f4f840558778c91b488162a247bf91a5de122468ab4c31709a1" exitCode=0 Mar 12 20:55:57.830701 master-0 kubenswrapper[7484]: I0312 20:55:57.830692 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbgrl" event={"ID":"2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0","Type":"ContainerDied","Data":"eaf7aa8e44258f4f840558778c91b488162a247bf91a5de122468ab4c31709a1"} Mar 12 20:55:57.833353 master-0 kubenswrapper[7484]: I0312 20:55:57.833297 7484 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 20:55:57.852180 master-0 kubenswrapper[7484]: I0312 20:55:57.852062 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-master-0" podStartSLOduration=309.852031176 podStartE2EDuration="5m9.852031176s" podCreationTimestamp="2026-03-12 20:50:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-03-12 20:55:57.847174185 +0000 UTC m=+370.332443067" watchObservedRunningTime="2026-03-12 20:55:57.852031176 +0000 UTC m=+370.337300008" Mar 12 20:55:58.517757 master-0 kubenswrapper[7484]: I0312 20:55:58.517575 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lbgrl"] Mar 12 20:56:07.894686 master-0 kubenswrapper[7484]: I0312 20:56:07.894627 7484 generic.go:334] "Generic (PLEG): container finished" podID="2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0" containerID="155a245c97e20cfbc205c0af51f466919919cf6426af23979fd21e15a0548eb4" exitCode=0 Mar 12 20:56:07.894686 master-0 kubenswrapper[7484]: I0312 20:56:07.894677 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbgrl" event={"ID":"2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0","Type":"ContainerDied","Data":"155a245c97e20cfbc205c0af51f466919919cf6426af23979fd21e15a0548eb4"} Mar 12 20:56:08.222332 master-0 kubenswrapper[7484]: I0312 20:56:08.222306 7484 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lbgrl" Mar 12 20:56:08.362931 master-0 kubenswrapper[7484]: I0312 20:56:08.362847 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-catalog-content\") pod \"2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0\" (UID: \"2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0\") " Mar 12 20:56:08.362931 master-0 kubenswrapper[7484]: I0312 20:56:08.362927 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rthf\" (UniqueName: \"kubernetes.io/projected/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-kube-api-access-4rthf\") pod \"2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0\" (UID: \"2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0\") " Mar 12 20:56:08.363217 master-0 kubenswrapper[7484]: I0312 20:56:08.363064 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-utilities\") pod \"2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0\" (UID: \"2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0\") " Mar 12 20:56:08.364970 master-0 kubenswrapper[7484]: I0312 20:56:08.364926 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-utilities" (OuterVolumeSpecName: "utilities") pod "2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0" (UID: "2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 20:56:08.368614 master-0 kubenswrapper[7484]: I0312 20:56:08.368475 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-kube-api-access-4rthf" (OuterVolumeSpecName: "kube-api-access-4rthf") pod "2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0" (UID: "2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0"). 
InnerVolumeSpecName "kube-api-access-4rthf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 20:56:08.465649 master-0 kubenswrapper[7484]: I0312 20:56:08.465456 7484 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rthf\" (UniqueName: \"kubernetes.io/projected/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-kube-api-access-4rthf\") on node \"master-0\" DevicePath \"\"" Mar 12 20:56:08.465649 master-0 kubenswrapper[7484]: I0312 20:56:08.465495 7484 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-utilities\") on node \"master-0\" DevicePath \"\"" Mar 12 20:56:08.523665 master-0 kubenswrapper[7484]: I0312 20:56:08.523562 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0" (UID: "2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 20:56:08.566185 master-0 kubenswrapper[7484]: I0312 20:56:08.566130 7484 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0-catalog-content\") on node \"master-0\" DevicePath \"\"" Mar 12 20:56:08.902568 master-0 kubenswrapper[7484]: I0312 20:56:08.902503 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbgrl" event={"ID":"2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0","Type":"ContainerDied","Data":"5019ac6965eb599a3383c019f4aab04c33c0dd81cfbbc7c6cfee64daee23a77c"} Mar 12 20:56:08.903140 master-0 kubenswrapper[7484]: I0312 20:56:08.902583 7484 scope.go:117] "RemoveContainer" containerID="155a245c97e20cfbc205c0af51f466919919cf6426af23979fd21e15a0548eb4" Mar 12 20:56:08.903140 master-0 kubenswrapper[7484]: I0312 20:56:08.902631 7484 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lbgrl" Mar 12 20:56:08.921869 master-0 kubenswrapper[7484]: I0312 20:56:08.921822 7484 scope.go:117] "RemoveContainer" containerID="eaf7aa8e44258f4f840558778c91b488162a247bf91a5de122468ab4c31709a1" Mar 12 20:56:08.963229 master-0 kubenswrapper[7484]: I0312 20:56:08.963165 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lbgrl"] Mar 12 20:56:08.968926 master-0 kubenswrapper[7484]: I0312 20:56:08.968877 7484 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lbgrl"] Mar 12 20:56:09.743851 master-0 kubenswrapper[7484]: I0312 20:56:09.743740 7484 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0" path="/var/lib/kubelet/pods/2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0/volumes" Mar 12 20:56:15.990637 master-0 kubenswrapper[7484]: I0312 20:56:15.990437 7484 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/redhat-operators-bq6pw"] Mar 12 20:56:15.991784 master-0 kubenswrapper[7484]: E0312 20:56:15.990797 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0" containerName="extract-utilities" Mar 12 20:56:15.991784 master-0 kubenswrapper[7484]: I0312 20:56:15.990868 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0" containerName="extract-utilities" Mar 12 20:56:15.991784 master-0 kubenswrapper[7484]: E0312 20:56:15.990890 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0" containerName="extract-content" Mar 12 20:56:15.991784 master-0 kubenswrapper[7484]: I0312 20:56:15.990907 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0" containerName="extract-content" Mar 12 20:56:15.991784 master-0 kubenswrapper[7484]: E0312 20:56:15.990936 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="869e3d2a-1b5c-426f-945a-ddd44a9a5033" containerName="installer" Mar 12 20:56:15.991784 master-0 kubenswrapper[7484]: I0312 20:56:15.990951 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="869e3d2a-1b5c-426f-945a-ddd44a9a5033" containerName="installer" Mar 12 20:56:15.991784 master-0 kubenswrapper[7484]: E0312 20:56:15.990977 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bec49ae-0c52-451f-8d8d-6e822cd335cc" containerName="installer" Mar 12 20:56:15.991784 master-0 kubenswrapper[7484]: I0312 20:56:15.990993 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bec49ae-0c52-451f-8d8d-6e822cd335cc" containerName="installer" Mar 12 20:56:15.991784 master-0 kubenswrapper[7484]: E0312 20:56:15.991016 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d69687f-b8a5-4643-8268-ce30df5db3bc" containerName="installer" Mar 12 20:56:15.991784 master-0 kubenswrapper[7484]: 
I0312 20:56:15.991032 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d69687f-b8a5-4643-8268-ce30df5db3bc" containerName="installer" Mar 12 20:56:15.991784 master-0 kubenswrapper[7484]: E0312 20:56:15.991059 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="954fe7f9-e138-49ab-ab8e-504b75914100" containerName="installer" Mar 12 20:56:15.991784 master-0 kubenswrapper[7484]: I0312 20:56:15.991074 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="954fe7f9-e138-49ab-ab8e-504b75914100" containerName="installer" Mar 12 20:56:15.991784 master-0 kubenswrapper[7484]: I0312 20:56:15.991263 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bec49ae-0c52-451f-8d8d-6e822cd335cc" containerName="installer" Mar 12 20:56:15.991784 master-0 kubenswrapper[7484]: I0312 20:56:15.991289 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d69687f-b8a5-4643-8268-ce30df5db3bc" containerName="installer" Mar 12 20:56:15.991784 master-0 kubenswrapper[7484]: I0312 20:56:15.991313 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d2b27ae-7c79-4e3a-beef-1c6d8b62b1c0" containerName="extract-content" Mar 12 20:56:15.991784 master-0 kubenswrapper[7484]: I0312 20:56:15.991329 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="954fe7f9-e138-49ab-ab8e-504b75914100" containerName="installer" Mar 12 20:56:15.991784 master-0 kubenswrapper[7484]: I0312 20:56:15.991359 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="869e3d2a-1b5c-426f-945a-ddd44a9a5033" containerName="installer" Mar 12 20:56:15.993002 master-0 kubenswrapper[7484]: I0312 20:56:15.992690 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bq6pw" Mar 12 20:56:15.996193 master-0 kubenswrapper[7484]: I0312 20:56:15.996134 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-v7qw9" Mar 12 20:56:16.019829 master-0 kubenswrapper[7484]: I0312 20:56:16.016771 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bq6pw"] Mar 12 20:56:16.070205 master-0 kubenswrapper[7484]: I0312 20:56:16.070131 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76e719af-a855-4c28-8aa7-61fcf0b2c0ee-catalog-content\") pod \"redhat-operators-bq6pw\" (UID: \"76e719af-a855-4c28-8aa7-61fcf0b2c0ee\") " pod="openshift-marketplace/redhat-operators-bq6pw" Mar 12 20:56:16.070467 master-0 kubenswrapper[7484]: I0312 20:56:16.070222 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76e719af-a855-4c28-8aa7-61fcf0b2c0ee-utilities\") pod \"redhat-operators-bq6pw\" (UID: \"76e719af-a855-4c28-8aa7-61fcf0b2c0ee\") " pod="openshift-marketplace/redhat-operators-bq6pw" Mar 12 20:56:16.070467 master-0 kubenswrapper[7484]: I0312 20:56:16.070391 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqp9c\" (UniqueName: \"kubernetes.io/projected/76e719af-a855-4c28-8aa7-61fcf0b2c0ee-kube-api-access-rqp9c\") pod \"redhat-operators-bq6pw\" (UID: \"76e719af-a855-4c28-8aa7-61fcf0b2c0ee\") " pod="openshift-marketplace/redhat-operators-bq6pw" Mar 12 20:56:16.171748 master-0 kubenswrapper[7484]: I0312 20:56:16.171670 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76e719af-a855-4c28-8aa7-61fcf0b2c0ee-catalog-content\") pod 
\"redhat-operators-bq6pw\" (UID: \"76e719af-a855-4c28-8aa7-61fcf0b2c0ee\") " pod="openshift-marketplace/redhat-operators-bq6pw" Mar 12 20:56:16.172180 master-0 kubenswrapper[7484]: I0312 20:56:16.171918 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76e719af-a855-4c28-8aa7-61fcf0b2c0ee-utilities\") pod \"redhat-operators-bq6pw\" (UID: \"76e719af-a855-4c28-8aa7-61fcf0b2c0ee\") " pod="openshift-marketplace/redhat-operators-bq6pw" Mar 12 20:56:16.172180 master-0 kubenswrapper[7484]: I0312 20:56:16.171959 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqp9c\" (UniqueName: \"kubernetes.io/projected/76e719af-a855-4c28-8aa7-61fcf0b2c0ee-kube-api-access-rqp9c\") pod \"redhat-operators-bq6pw\" (UID: \"76e719af-a855-4c28-8aa7-61fcf0b2c0ee\") " pod="openshift-marketplace/redhat-operators-bq6pw" Mar 12 20:56:16.172614 master-0 kubenswrapper[7484]: I0312 20:56:16.172544 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76e719af-a855-4c28-8aa7-61fcf0b2c0ee-catalog-content\") pod \"redhat-operators-bq6pw\" (UID: \"76e719af-a855-4c28-8aa7-61fcf0b2c0ee\") " pod="openshift-marketplace/redhat-operators-bq6pw" Mar 12 20:56:16.172800 master-0 kubenswrapper[7484]: I0312 20:56:16.172748 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76e719af-a855-4c28-8aa7-61fcf0b2c0ee-utilities\") pod \"redhat-operators-bq6pw\" (UID: \"76e719af-a855-4c28-8aa7-61fcf0b2c0ee\") " pod="openshift-marketplace/redhat-operators-bq6pw" Mar 12 20:56:16.194768 master-0 kubenswrapper[7484]: I0312 20:56:16.194689 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqp9c\" (UniqueName: \"kubernetes.io/projected/76e719af-a855-4c28-8aa7-61fcf0b2c0ee-kube-api-access-rqp9c\") pod 
\"redhat-operators-bq6pw\" (UID: \"76e719af-a855-4c28-8aa7-61fcf0b2c0ee\") " pod="openshift-marketplace/redhat-operators-bq6pw" Mar 12 20:56:16.320369 master-0 kubenswrapper[7484]: I0312 20:56:16.320298 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bq6pw"] Mar 12 20:56:16.320720 master-0 kubenswrapper[7484]: I0312 20:56:16.320694 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bq6pw" Mar 12 20:56:16.341828 master-0 kubenswrapper[7484]: I0312 20:56:16.341767 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2mrdc"] Mar 12 20:56:16.342778 master-0 kubenswrapper[7484]: I0312 20:56:16.342747 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2mrdc" Mar 12 20:56:16.373299 master-0 kubenswrapper[7484]: I0312 20:56:16.373213 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2mrdc"] Mar 12 20:56:16.479782 master-0 kubenswrapper[7484]: I0312 20:56:16.479700 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/514012c6-628d-4cbf-8a60-be70e3913366-utilities\") pod \"redhat-operators-2mrdc\" (UID: \"514012c6-628d-4cbf-8a60-be70e3913366\") " pod="openshift-marketplace/redhat-operators-2mrdc" Mar 12 20:56:16.479782 master-0 kubenswrapper[7484]: I0312 20:56:16.479787 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/514012c6-628d-4cbf-8a60-be70e3913366-catalog-content\") pod \"redhat-operators-2mrdc\" (UID: \"514012c6-628d-4cbf-8a60-be70e3913366\") " pod="openshift-marketplace/redhat-operators-2mrdc" Mar 12 20:56:16.480091 master-0 kubenswrapper[7484]: I0312 20:56:16.479933 7484 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swcsg\" (UniqueName: \"kubernetes.io/projected/514012c6-628d-4cbf-8a60-be70e3913366-kube-api-access-swcsg\") pod \"redhat-operators-2mrdc\" (UID: \"514012c6-628d-4cbf-8a60-be70e3913366\") " pod="openshift-marketplace/redhat-operators-2mrdc" Mar 12 20:56:16.580904 master-0 kubenswrapper[7484]: I0312 20:56:16.580703 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/514012c6-628d-4cbf-8a60-be70e3913366-catalog-content\") pod \"redhat-operators-2mrdc\" (UID: \"514012c6-628d-4cbf-8a60-be70e3913366\") " pod="openshift-marketplace/redhat-operators-2mrdc" Mar 12 20:56:16.580904 master-0 kubenswrapper[7484]: I0312 20:56:16.580796 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swcsg\" (UniqueName: \"kubernetes.io/projected/514012c6-628d-4cbf-8a60-be70e3913366-kube-api-access-swcsg\") pod \"redhat-operators-2mrdc\" (UID: \"514012c6-628d-4cbf-8a60-be70e3913366\") " pod="openshift-marketplace/redhat-operators-2mrdc" Mar 12 20:56:16.581226 master-0 kubenswrapper[7484]: I0312 20:56:16.581018 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/514012c6-628d-4cbf-8a60-be70e3913366-utilities\") pod \"redhat-operators-2mrdc\" (UID: \"514012c6-628d-4cbf-8a60-be70e3913366\") " pod="openshift-marketplace/redhat-operators-2mrdc" Mar 12 20:56:16.581498 master-0 kubenswrapper[7484]: I0312 20:56:16.581448 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/514012c6-628d-4cbf-8a60-be70e3913366-catalog-content\") pod \"redhat-operators-2mrdc\" (UID: \"514012c6-628d-4cbf-8a60-be70e3913366\") " pod="openshift-marketplace/redhat-operators-2mrdc" Mar 12 20:56:16.583530 master-0 
kubenswrapper[7484]: I0312 20:56:16.581515 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/514012c6-628d-4cbf-8a60-be70e3913366-utilities\") pod \"redhat-operators-2mrdc\" (UID: \"514012c6-628d-4cbf-8a60-be70e3913366\") " pod="openshift-marketplace/redhat-operators-2mrdc" Mar 12 20:56:16.600157 master-0 kubenswrapper[7484]: I0312 20:56:16.600063 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swcsg\" (UniqueName: \"kubernetes.io/projected/514012c6-628d-4cbf-8a60-be70e3913366-kube-api-access-swcsg\") pod \"redhat-operators-2mrdc\" (UID: \"514012c6-628d-4cbf-8a60-be70e3913366\") " pod="openshift-marketplace/redhat-operators-2mrdc" Mar 12 20:56:16.687666 master-0 kubenswrapper[7484]: I0312 20:56:16.687580 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2mrdc" Mar 12 20:56:16.734270 master-0 kubenswrapper[7484]: I0312 20:56:16.734187 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bq6pw"] Mar 12 20:56:16.978647 master-0 kubenswrapper[7484]: I0312 20:56:16.978567 7484 generic.go:334] "Generic (PLEG): container finished" podID="76e719af-a855-4c28-8aa7-61fcf0b2c0ee" containerID="1bb91ff9e07082f2a78acef94c3562c734ea7f52de84e87a486fe539f6964025" exitCode=0 Mar 12 20:56:16.978935 master-0 kubenswrapper[7484]: I0312 20:56:16.978638 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bq6pw" event={"ID":"76e719af-a855-4c28-8aa7-61fcf0b2c0ee","Type":"ContainerDied","Data":"1bb91ff9e07082f2a78acef94c3562c734ea7f52de84e87a486fe539f6964025"} Mar 12 20:56:16.978935 master-0 kubenswrapper[7484]: I0312 20:56:16.978712 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bq6pw" 
event={"ID":"76e719af-a855-4c28-8aa7-61fcf0b2c0ee","Type":"ContainerStarted","Data":"6f61ed9bded05531ae60f26eb9127e45c37195b3be203a7729b8756181ecbb77"} Mar 12 20:56:17.163844 master-0 kubenswrapper[7484]: I0312 20:56:17.163758 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2mrdc"] Mar 12 20:56:17.196408 master-0 kubenswrapper[7484]: W0312 20:56:17.196344 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod514012c6_628d_4cbf_8a60_be70e3913366.slice/crio-1cb65875f6868843a1a59be5b72298ba4abd3ccf4b821d698ea6d628ff3d77b0 WatchSource:0}: Error finding container 1cb65875f6868843a1a59be5b72298ba4abd3ccf4b821d698ea6d628ff3d77b0: Status 404 returned error can't find the container with id 1cb65875f6868843a1a59be5b72298ba4abd3ccf4b821d698ea6d628ff3d77b0 Mar 12 20:56:17.358936 master-0 kubenswrapper[7484]: I0312 20:56:17.358873 7484 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bq6pw" Mar 12 20:56:17.512273 master-0 kubenswrapper[7484]: I0312 20:56:17.512194 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2mrdc"] Mar 12 20:56:17.526384 master-0 kubenswrapper[7484]: I0312 20:56:17.526319 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76e719af-a855-4c28-8aa7-61fcf0b2c0ee-utilities\") pod \"76e719af-a855-4c28-8aa7-61fcf0b2c0ee\" (UID: \"76e719af-a855-4c28-8aa7-61fcf0b2c0ee\") " Mar 12 20:56:17.526384 master-0 kubenswrapper[7484]: I0312 20:56:17.526384 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqp9c\" (UniqueName: \"kubernetes.io/projected/76e719af-a855-4c28-8aa7-61fcf0b2c0ee-kube-api-access-rqp9c\") pod \"76e719af-a855-4c28-8aa7-61fcf0b2c0ee\" (UID: \"76e719af-a855-4c28-8aa7-61fcf0b2c0ee\") " Mar 12 20:56:17.526677 master-0 kubenswrapper[7484]: I0312 20:56:17.526424 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76e719af-a855-4c28-8aa7-61fcf0b2c0ee-catalog-content\") pod \"76e719af-a855-4c28-8aa7-61fcf0b2c0ee\" (UID: \"76e719af-a855-4c28-8aa7-61fcf0b2c0ee\") " Mar 12 20:56:17.527144 master-0 kubenswrapper[7484]: I0312 20:56:17.527095 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76e719af-a855-4c28-8aa7-61fcf0b2c0ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76e719af-a855-4c28-8aa7-61fcf0b2c0ee" (UID: "76e719af-a855-4c28-8aa7-61fcf0b2c0ee"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 20:56:17.528125 master-0 kubenswrapper[7484]: I0312 20:56:17.528080 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76e719af-a855-4c28-8aa7-61fcf0b2c0ee-utilities" (OuterVolumeSpecName: "utilities") pod "76e719af-a855-4c28-8aa7-61fcf0b2c0ee" (UID: "76e719af-a855-4c28-8aa7-61fcf0b2c0ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 20:56:17.532047 master-0 kubenswrapper[7484]: I0312 20:56:17.531978 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76e719af-a855-4c28-8aa7-61fcf0b2c0ee-kube-api-access-rqp9c" (OuterVolumeSpecName: "kube-api-access-rqp9c") pod "76e719af-a855-4c28-8aa7-61fcf0b2c0ee" (UID: "76e719af-a855-4c28-8aa7-61fcf0b2c0ee"). InnerVolumeSpecName "kube-api-access-rqp9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 20:56:17.628326 master-0 kubenswrapper[7484]: I0312 20:56:17.628261 7484 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76e719af-a855-4c28-8aa7-61fcf0b2c0ee-catalog-content\") on node \"master-0\" DevicePath \"\"" Mar 12 20:56:17.628326 master-0 kubenswrapper[7484]: I0312 20:56:17.628305 7484 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76e719af-a855-4c28-8aa7-61fcf0b2c0ee-utilities\") on node \"master-0\" DevicePath \"\"" Mar 12 20:56:17.628326 master-0 kubenswrapper[7484]: I0312 20:56:17.628315 7484 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqp9c\" (UniqueName: \"kubernetes.io/projected/76e719af-a855-4c28-8aa7-61fcf0b2c0ee-kube-api-access-rqp9c\") on node \"master-0\" DevicePath \"\"" Mar 12 20:56:17.718241 master-0 kubenswrapper[7484]: I0312 20:56:17.718074 7484 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-gxjmz"] Mar 12 20:56:17.718467 master-0 kubenswrapper[7484]: E0312 20:56:17.718383 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76e719af-a855-4c28-8aa7-61fcf0b2c0ee" containerName="extract-utilities" Mar 12 20:56:17.718467 master-0 kubenswrapper[7484]: I0312 20:56:17.718405 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="76e719af-a855-4c28-8aa7-61fcf0b2c0ee" containerName="extract-utilities" Mar 12 20:56:17.718582 master-0 kubenswrapper[7484]: I0312 20:56:17.718555 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="76e719af-a855-4c28-8aa7-61fcf0b2c0ee" containerName="extract-utilities" Mar 12 20:56:17.719745 master-0 kubenswrapper[7484]: I0312 20:56:17.719697 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gxjmz" Mar 12 20:56:17.745629 master-0 kubenswrapper[7484]: I0312 20:56:17.745571 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gxjmz"] Mar 12 20:56:17.830762 master-0 kubenswrapper[7484]: I0312 20:56:17.830692 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7229c42-b6bc-4ea9-946c-71a4117f53e9-catalog-content\") pod \"redhat-operators-gxjmz\" (UID: \"b7229c42-b6bc-4ea9-946c-71a4117f53e9\") " pod="openshift-marketplace/redhat-operators-gxjmz" Mar 12 20:56:17.830984 master-0 kubenswrapper[7484]: I0312 20:56:17.830792 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7229c42-b6bc-4ea9-946c-71a4117f53e9-utilities\") pod \"redhat-operators-gxjmz\" (UID: \"b7229c42-b6bc-4ea9-946c-71a4117f53e9\") " pod="openshift-marketplace/redhat-operators-gxjmz" Mar 12 20:56:17.830984 master-0 kubenswrapper[7484]: I0312 20:56:17.830899 7484 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx5m2\" (UniqueName: \"kubernetes.io/projected/b7229c42-b6bc-4ea9-946c-71a4117f53e9-kube-api-access-xx5m2\") pod \"redhat-operators-gxjmz\" (UID: \"b7229c42-b6bc-4ea9-946c-71a4117f53e9\") " pod="openshift-marketplace/redhat-operators-gxjmz" Mar 12 20:56:17.931847 master-0 kubenswrapper[7484]: I0312 20:56:17.931727 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx5m2\" (UniqueName: \"kubernetes.io/projected/b7229c42-b6bc-4ea9-946c-71a4117f53e9-kube-api-access-xx5m2\") pod \"redhat-operators-gxjmz\" (UID: \"b7229c42-b6bc-4ea9-946c-71a4117f53e9\") " pod="openshift-marketplace/redhat-operators-gxjmz" Mar 12 20:56:17.932102 master-0 kubenswrapper[7484]: I0312 20:56:17.931881 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7229c42-b6bc-4ea9-946c-71a4117f53e9-catalog-content\") pod \"redhat-operators-gxjmz\" (UID: \"b7229c42-b6bc-4ea9-946c-71a4117f53e9\") " pod="openshift-marketplace/redhat-operators-gxjmz" Mar 12 20:56:17.932102 master-0 kubenswrapper[7484]: I0312 20:56:17.931946 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7229c42-b6bc-4ea9-946c-71a4117f53e9-utilities\") pod \"redhat-operators-gxjmz\" (UID: \"b7229c42-b6bc-4ea9-946c-71a4117f53e9\") " pod="openshift-marketplace/redhat-operators-gxjmz" Mar 12 20:56:17.933534 master-0 kubenswrapper[7484]: I0312 20:56:17.933464 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7229c42-b6bc-4ea9-946c-71a4117f53e9-catalog-content\") pod \"redhat-operators-gxjmz\" (UID: \"b7229c42-b6bc-4ea9-946c-71a4117f53e9\") " pod="openshift-marketplace/redhat-operators-gxjmz" Mar 12 20:56:17.933660 master-0 
kubenswrapper[7484]: I0312 20:56:17.933547 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7229c42-b6bc-4ea9-946c-71a4117f53e9-utilities\") pod \"redhat-operators-gxjmz\" (UID: \"b7229c42-b6bc-4ea9-946c-71a4117f53e9\") " pod="openshift-marketplace/redhat-operators-gxjmz" Mar 12 20:56:17.946794 master-0 kubenswrapper[7484]: I0312 20:56:17.946727 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx5m2\" (UniqueName: \"kubernetes.io/projected/b7229c42-b6bc-4ea9-946c-71a4117f53e9-kube-api-access-xx5m2\") pod \"redhat-operators-gxjmz\" (UID: \"b7229c42-b6bc-4ea9-946c-71a4117f53e9\") " pod="openshift-marketplace/redhat-operators-gxjmz" Mar 12 20:56:17.988703 master-0 kubenswrapper[7484]: I0312 20:56:17.988554 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bq6pw" event={"ID":"76e719af-a855-4c28-8aa7-61fcf0b2c0ee","Type":"ContainerDied","Data":"6f61ed9bded05531ae60f26eb9127e45c37195b3be203a7729b8756181ecbb77"} Mar 12 20:56:17.988703 master-0 kubenswrapper[7484]: I0312 20:56:17.988576 7484 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bq6pw" Mar 12 20:56:17.988987 master-0 kubenswrapper[7484]: I0312 20:56:17.988617 7484 scope.go:117] "RemoveContainer" containerID="1bb91ff9e07082f2a78acef94c3562c734ea7f52de84e87a486fe539f6964025" Mar 12 20:56:17.990844 master-0 kubenswrapper[7484]: I0312 20:56:17.990755 7484 generic.go:334] "Generic (PLEG): container finished" podID="514012c6-628d-4cbf-8a60-be70e3913366" containerID="b88692df70bc85f2674c6c709068ab0171854be2286e27e8e26d3c354e15b8f2" exitCode=0 Mar 12 20:56:17.990939 master-0 kubenswrapper[7484]: I0312 20:56:17.990895 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2mrdc" event={"ID":"514012c6-628d-4cbf-8a60-be70e3913366","Type":"ContainerDied","Data":"b88692df70bc85f2674c6c709068ab0171854be2286e27e8e26d3c354e15b8f2"} Mar 12 20:56:17.991068 master-0 kubenswrapper[7484]: I0312 20:56:17.991010 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2mrdc" event={"ID":"514012c6-628d-4cbf-8a60-be70e3913366","Type":"ContainerStarted","Data":"1cb65875f6868843a1a59be5b72298ba4abd3ccf4b821d698ea6d628ff3d77b0"} Mar 12 20:56:18.051405 master-0 kubenswrapper[7484]: I0312 20:56:18.049319 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bq6pw"] Mar 12 20:56:18.054758 master-0 kubenswrapper[7484]: I0312 20:56:18.054702 7484 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bq6pw"] Mar 12 20:56:18.066116 master-0 kubenswrapper[7484]: I0312 20:56:18.066074 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gxjmz" Mar 12 20:56:18.514195 master-0 kubenswrapper[7484]: I0312 20:56:18.514122 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gxjmz"] Mar 12 20:56:18.522990 master-0 kubenswrapper[7484]: W0312 20:56:18.521502 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7229c42_b6bc_4ea9_946c_71a4117f53e9.slice/crio-17a28fbbb10b9b7c1461bf619827eeb217a3aec9b00b20b1cfd3fdd960efb363 WatchSource:0}: Error finding container 17a28fbbb10b9b7c1461bf619827eeb217a3aec9b00b20b1cfd3fdd960efb363: Status 404 returned error can't find the container with id 17a28fbbb10b9b7c1461bf619827eeb217a3aec9b00b20b1cfd3fdd960efb363 Mar 12 20:56:19.004989 master-0 kubenswrapper[7484]: I0312 20:56:19.004635 7484 generic.go:334] "Generic (PLEG): container finished" podID="b7229c42-b6bc-4ea9-946c-71a4117f53e9" containerID="ebc67e3afd812abeee907445ae9b930d7259656ae3cc6339095705aac5cecd88" exitCode=0 Mar 12 20:56:19.004989 master-0 kubenswrapper[7484]: I0312 20:56:19.004718 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gxjmz" event={"ID":"b7229c42-b6bc-4ea9-946c-71a4117f53e9","Type":"ContainerDied","Data":"ebc67e3afd812abeee907445ae9b930d7259656ae3cc6339095705aac5cecd88"} Mar 12 20:56:19.004989 master-0 kubenswrapper[7484]: I0312 20:56:19.004766 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gxjmz" event={"ID":"b7229c42-b6bc-4ea9-946c-71a4117f53e9","Type":"ContainerStarted","Data":"17a28fbbb10b9b7c1461bf619827eeb217a3aec9b00b20b1cfd3fdd960efb363"} Mar 12 20:56:19.744471 master-0 kubenswrapper[7484]: I0312 20:56:19.744386 7484 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76e719af-a855-4c28-8aa7-61fcf0b2c0ee" 
path="/var/lib/kubelet/pods/76e719af-a855-4c28-8aa7-61fcf0b2c0ee/volumes" Mar 12 20:56:20.011728 master-0 kubenswrapper[7484]: I0312 20:56:20.011600 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gxjmz" event={"ID":"b7229c42-b6bc-4ea9-946c-71a4117f53e9","Type":"ContainerStarted","Data":"a9372e5a66ee073d516aa24c5b57ac0c91b01b45a59c442400035352b3c5eae6"} Mar 12 20:56:20.013535 master-0 kubenswrapper[7484]: I0312 20:56:20.013486 7484 generic.go:334] "Generic (PLEG): container finished" podID="514012c6-628d-4cbf-8a60-be70e3913366" containerID="feaa0983d7a63f15ce9796e78e9f3aaf62d900bf156de26f71674172cf8eb930" exitCode=0 Mar 12 20:56:20.013648 master-0 kubenswrapper[7484]: I0312 20:56:20.013536 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2mrdc" event={"ID":"514012c6-628d-4cbf-8a60-be70e3913366","Type":"ContainerDied","Data":"feaa0983d7a63f15ce9796e78e9f3aaf62d900bf156de26f71674172cf8eb930"} Mar 12 20:56:20.340612 master-0 kubenswrapper[7484]: I0312 20:56:20.340548 7484 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2mrdc" Mar 12 20:56:20.462829 master-0 kubenswrapper[7484]: I0312 20:56:20.462710 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/514012c6-628d-4cbf-8a60-be70e3913366-catalog-content\") pod \"514012c6-628d-4cbf-8a60-be70e3913366\" (UID: \"514012c6-628d-4cbf-8a60-be70e3913366\") " Mar 12 20:56:20.463101 master-0 kubenswrapper[7484]: I0312 20:56:20.462841 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/514012c6-628d-4cbf-8a60-be70e3913366-utilities\") pod \"514012c6-628d-4cbf-8a60-be70e3913366\" (UID: \"514012c6-628d-4cbf-8a60-be70e3913366\") " Mar 12 20:56:20.463101 master-0 kubenswrapper[7484]: I0312 20:56:20.462949 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swcsg\" (UniqueName: \"kubernetes.io/projected/514012c6-628d-4cbf-8a60-be70e3913366-kube-api-access-swcsg\") pod \"514012c6-628d-4cbf-8a60-be70e3913366\" (UID: \"514012c6-628d-4cbf-8a60-be70e3913366\") " Mar 12 20:56:20.465055 master-0 kubenswrapper[7484]: I0312 20:56:20.465004 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/514012c6-628d-4cbf-8a60-be70e3913366-utilities" (OuterVolumeSpecName: "utilities") pod "514012c6-628d-4cbf-8a60-be70e3913366" (UID: "514012c6-628d-4cbf-8a60-be70e3913366"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 20:56:20.477312 master-0 kubenswrapper[7484]: I0312 20:56:20.477229 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/514012c6-628d-4cbf-8a60-be70e3913366-kube-api-access-swcsg" (OuterVolumeSpecName: "kube-api-access-swcsg") pod "514012c6-628d-4cbf-8a60-be70e3913366" (UID: "514012c6-628d-4cbf-8a60-be70e3913366"). 
InnerVolumeSpecName "kube-api-access-swcsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 20:56:20.565043 master-0 kubenswrapper[7484]: I0312 20:56:20.564983 7484 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/514012c6-628d-4cbf-8a60-be70e3913366-utilities\") on node \"master-0\" DevicePath \"\"" Mar 12 20:56:20.565043 master-0 kubenswrapper[7484]: I0312 20:56:20.565027 7484 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swcsg\" (UniqueName: \"kubernetes.io/projected/514012c6-628d-4cbf-8a60-be70e3913366-kube-api-access-swcsg\") on node \"master-0\" DevicePath \"\"" Mar 12 20:56:20.625401 master-0 kubenswrapper[7484]: I0312 20:56:20.625319 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/514012c6-628d-4cbf-8a60-be70e3913366-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "514012c6-628d-4cbf-8a60-be70e3913366" (UID: "514012c6-628d-4cbf-8a60-be70e3913366"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 20:56:20.666861 master-0 kubenswrapper[7484]: I0312 20:56:20.666769 7484 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/514012c6-628d-4cbf-8a60-be70e3913366-catalog-content\") on node \"master-0\" DevicePath \"\"" Mar 12 20:56:21.024991 master-0 kubenswrapper[7484]: I0312 20:56:21.024761 7484 generic.go:334] "Generic (PLEG): container finished" podID="b7229c42-b6bc-4ea9-946c-71a4117f53e9" containerID="a9372e5a66ee073d516aa24c5b57ac0c91b01b45a59c442400035352b3c5eae6" exitCode=0 Mar 12 20:56:21.024991 master-0 kubenswrapper[7484]: I0312 20:56:21.024901 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gxjmz" event={"ID":"b7229c42-b6bc-4ea9-946c-71a4117f53e9","Type":"ContainerDied","Data":"a9372e5a66ee073d516aa24c5b57ac0c91b01b45a59c442400035352b3c5eae6"} Mar 12 20:56:21.029803 master-0 kubenswrapper[7484]: I0312 20:56:21.029722 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2mrdc" event={"ID":"514012c6-628d-4cbf-8a60-be70e3913366","Type":"ContainerDied","Data":"1cb65875f6868843a1a59be5b72298ba4abd3ccf4b821d698ea6d628ff3d77b0"} Mar 12 20:56:21.029803 master-0 kubenswrapper[7484]: I0312 20:56:21.029790 7484 scope.go:117] "RemoveContainer" containerID="feaa0983d7a63f15ce9796e78e9f3aaf62d900bf156de26f71674172cf8eb930" Mar 12 20:56:21.030094 master-0 kubenswrapper[7484]: I0312 20:56:21.030021 7484 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2mrdc" Mar 12 20:56:21.054843 master-0 kubenswrapper[7484]: I0312 20:56:21.054749 7484 scope.go:117] "RemoveContainer" containerID="b88692df70bc85f2674c6c709068ab0171854be2286e27e8e26d3c354e15b8f2" Mar 12 20:56:21.138332 master-0 kubenswrapper[7484]: I0312 20:56:21.138182 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2mrdc"] Mar 12 20:56:21.145450 master-0 kubenswrapper[7484]: I0312 20:56:21.145357 7484 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2mrdc"] Mar 12 20:56:21.746437 master-0 kubenswrapper[7484]: I0312 20:56:21.746257 7484 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="514012c6-628d-4cbf-8a60-be70e3913366" path="/var/lib/kubelet/pods/514012c6-628d-4cbf-8a60-be70e3913366/volumes" Mar 12 20:56:22.039190 master-0 kubenswrapper[7484]: I0312 20:56:22.039004 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gxjmz" event={"ID":"b7229c42-b6bc-4ea9-946c-71a4117f53e9","Type":"ContainerStarted","Data":"b858e3b6572257efd16d8e5845665000c9738c044c03dd685ac783560c1ba16f"} Mar 12 20:56:22.065707 master-0 kubenswrapper[7484]: I0312 20:56:22.065495 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gxjmz" podStartSLOduration=2.63566362 podStartE2EDuration="5.065418207s" podCreationTimestamp="2026-03-12 20:56:17 +0000 UTC" firstStartedPulling="2026-03-12 20:56:19.020274002 +0000 UTC m=+391.505542844" lastFinishedPulling="2026-03-12 20:56:21.450028549 +0000 UTC m=+393.935297431" observedRunningTime="2026-03-12 20:56:22.061595202 +0000 UTC m=+394.546864004" watchObservedRunningTime="2026-03-12 20:56:22.065418207 +0000 UTC m=+394.550687049" Mar 12 20:56:28.067238 master-0 kubenswrapper[7484]: I0312 20:56:28.067159 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-operators-gxjmz" Mar 12 20:56:28.067238 master-0 kubenswrapper[7484]: I0312 20:56:28.067227 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gxjmz" Mar 12 20:56:28.949092 master-0 kubenswrapper[7484]: I0312 20:56:28.948998 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-57dhl"] Mar 12 20:56:28.949433 master-0 kubenswrapper[7484]: E0312 20:56:28.949349 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="514012c6-628d-4cbf-8a60-be70e3913366" containerName="extract-utilities" Mar 12 20:56:28.949433 master-0 kubenswrapper[7484]: I0312 20:56:28.949375 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="514012c6-628d-4cbf-8a60-be70e3913366" containerName="extract-utilities" Mar 12 20:56:28.949433 master-0 kubenswrapper[7484]: E0312 20:56:28.949417 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="514012c6-628d-4cbf-8a60-be70e3913366" containerName="extract-content" Mar 12 20:56:28.949433 master-0 kubenswrapper[7484]: I0312 20:56:28.949430 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="514012c6-628d-4cbf-8a60-be70e3913366" containerName="extract-content" Mar 12 20:56:28.949594 master-0 kubenswrapper[7484]: I0312 20:56:28.949568 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="514012c6-628d-4cbf-8a60-be70e3913366" containerName="extract-content" Mar 12 20:56:28.950517 master-0 kubenswrapper[7484]: I0312 20:56:28.950492 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-57dhl" Mar 12 20:56:28.957079 master-0 kubenswrapper[7484]: I0312 20:56:28.957026 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 12 20:56:28.957366 master-0 kubenswrapper[7484]: I0312 20:56:28.957340 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 12 20:56:28.957541 master-0 kubenswrapper[7484]: I0312 20:56:28.957514 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 12 20:56:28.957722 master-0 kubenswrapper[7484]: I0312 20:56:28.957698 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-5j2qf" Mar 12 20:56:28.958426 master-0 kubenswrapper[7484]: I0312 20:56:28.958392 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 12 20:56:28.964625 master-0 kubenswrapper[7484]: I0312 20:56:28.964569 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 12 20:56:29.095520 master-0 kubenswrapper[7484]: I0312 20:56:29.095429 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ce9bbb5-37b5-4b43-aeb6-904bd0d86500-config\") pod \"machine-approver-955fcfb87-57dhl\" (UID: \"2ce9bbb5-37b5-4b43-aeb6-904bd0d86500\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-57dhl" Mar 12 20:56:29.095520 master-0 kubenswrapper[7484]: I0312 20:56:29.095521 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/2ce9bbb5-37b5-4b43-aeb6-904bd0d86500-auth-proxy-config\") pod \"machine-approver-955fcfb87-57dhl\" (UID: \"2ce9bbb5-37b5-4b43-aeb6-904bd0d86500\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-57dhl" Mar 12 20:56:29.096385 master-0 kubenswrapper[7484]: I0312 20:56:29.095636 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2ce9bbb5-37b5-4b43-aeb6-904bd0d86500-machine-approver-tls\") pod \"machine-approver-955fcfb87-57dhl\" (UID: \"2ce9bbb5-37b5-4b43-aeb6-904bd0d86500\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-57dhl" Mar 12 20:56:29.096385 master-0 kubenswrapper[7484]: I0312 20:56:29.095751 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rldvq\" (UniqueName: \"kubernetes.io/projected/2ce9bbb5-37b5-4b43-aeb6-904bd0d86500-kube-api-access-rldvq\") pod \"machine-approver-955fcfb87-57dhl\" (UID: \"2ce9bbb5-37b5-4b43-aeb6-904bd0d86500\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-57dhl" Mar 12 20:56:29.129282 master-0 kubenswrapper[7484]: I0312 20:56:29.129198 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gxjmz" podUID="b7229c42-b6bc-4ea9-946c-71a4117f53e9" containerName="registry-server" probeResult="failure" output=< Mar 12 20:56:29.129282 master-0 kubenswrapper[7484]: timeout: failed to connect service ":50051" within 1s Mar 12 20:56:29.129282 master-0 kubenswrapper[7484]: > Mar 12 20:56:29.196677 master-0 kubenswrapper[7484]: I0312 20:56:29.196575 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ce9bbb5-37b5-4b43-aeb6-904bd0d86500-config\") pod \"machine-approver-955fcfb87-57dhl\" (UID: \"2ce9bbb5-37b5-4b43-aeb6-904bd0d86500\") " 
pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-57dhl" Mar 12 20:56:29.196677 master-0 kubenswrapper[7484]: I0312 20:56:29.196665 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2ce9bbb5-37b5-4b43-aeb6-904bd0d86500-auth-proxy-config\") pod \"machine-approver-955fcfb87-57dhl\" (UID: \"2ce9bbb5-37b5-4b43-aeb6-904bd0d86500\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-57dhl" Mar 12 20:56:29.196677 master-0 kubenswrapper[7484]: I0312 20:56:29.196690 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2ce9bbb5-37b5-4b43-aeb6-904bd0d86500-machine-approver-tls\") pod \"machine-approver-955fcfb87-57dhl\" (UID: \"2ce9bbb5-37b5-4b43-aeb6-904bd0d86500\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-57dhl" Mar 12 20:56:29.197126 master-0 kubenswrapper[7484]: I0312 20:56:29.196986 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rldvq\" (UniqueName: \"kubernetes.io/projected/2ce9bbb5-37b5-4b43-aeb6-904bd0d86500-kube-api-access-rldvq\") pod \"machine-approver-955fcfb87-57dhl\" (UID: \"2ce9bbb5-37b5-4b43-aeb6-904bd0d86500\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-57dhl" Mar 12 20:56:29.197861 master-0 kubenswrapper[7484]: I0312 20:56:29.197767 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ce9bbb5-37b5-4b43-aeb6-904bd0d86500-config\") pod \"machine-approver-955fcfb87-57dhl\" (UID: \"2ce9bbb5-37b5-4b43-aeb6-904bd0d86500\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-57dhl" Mar 12 20:56:29.198432 master-0 kubenswrapper[7484]: I0312 20:56:29.198395 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/2ce9bbb5-37b5-4b43-aeb6-904bd0d86500-auth-proxy-config\") pod \"machine-approver-955fcfb87-57dhl\" (UID: \"2ce9bbb5-37b5-4b43-aeb6-904bd0d86500\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-57dhl" Mar 12 20:56:29.202493 master-0 kubenswrapper[7484]: I0312 20:56:29.202419 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2ce9bbb5-37b5-4b43-aeb6-904bd0d86500-machine-approver-tls\") pod \"machine-approver-955fcfb87-57dhl\" (UID: \"2ce9bbb5-37b5-4b43-aeb6-904bd0d86500\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-57dhl" Mar 12 20:56:29.219918 master-0 kubenswrapper[7484]: I0312 20:56:29.219869 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rldvq\" (UniqueName: \"kubernetes.io/projected/2ce9bbb5-37b5-4b43-aeb6-904bd0d86500-kube-api-access-rldvq\") pod \"machine-approver-955fcfb87-57dhl\" (UID: \"2ce9bbb5-37b5-4b43-aeb6-904bd0d86500\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-57dhl" Mar 12 20:56:29.270431 master-0 kubenswrapper[7484]: I0312 20:56:29.270377 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-57dhl" Mar 12 20:56:30.102648 master-0 kubenswrapper[7484]: I0312 20:56:30.102583 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-57dhl" event={"ID":"2ce9bbb5-37b5-4b43-aeb6-904bd0d86500","Type":"ContainerStarted","Data":"10c5515cb5ef67581a8254d958271ba2cb6a67cbb6ee1c3b3b7f00d6f3e32b8f"} Mar 12 20:56:30.102648 master-0 kubenswrapper[7484]: I0312 20:56:30.102639 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-57dhl" event={"ID":"2ce9bbb5-37b5-4b43-aeb6-904bd0d86500","Type":"ContainerStarted","Data":"c5cc276a7bfe32028ff8bc4b02aec1db55a15e86468a746b888701a3caedbd11"} Mar 12 20:56:30.840659 master-0 kubenswrapper[7484]: I0312 20:56:30.840200 7484 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 12 20:56:30.840659 master-0 kubenswrapper[7484]: I0312 20:56:30.840550 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" containerID="cri-o://c6140b342e454560e27bc37359b130097e81f913d9eb4fdb50381c726897af14" gracePeriod=30 Mar 12 20:56:30.841086 master-0 kubenswrapper[7484]: I0312 20:56:30.840788 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" containerID="cri-o://473010500c0fd5755ad97dc462629b8580e55a87fa11be411bf25911be941443" gracePeriod=30 Mar 12 20:56:30.843685 master-0 kubenswrapper[7484]: I0312 20:56:30.842495 7484 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] 
Mar 12 20:56:30.843685 master-0 kubenswrapper[7484]: E0312 20:56:30.842883 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 12 20:56:30.843685 master-0 kubenswrapper[7484]: I0312 20:56:30.842917 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 12 20:56:30.843685 master-0 kubenswrapper[7484]: E0312 20:56:30.842940 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 12 20:56:30.843685 master-0 kubenswrapper[7484]: I0312 20:56:30.842959 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 12 20:56:30.843685 master-0 kubenswrapper[7484]: E0312 20:56:30.842978 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 12 20:56:30.843685 master-0 kubenswrapper[7484]: I0312 20:56:30.842997 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 12 20:56:30.843685 master-0 kubenswrapper[7484]: E0312 20:56:30.843019 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 12 20:56:30.843685 master-0 kubenswrapper[7484]: I0312 20:56:30.843037 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 12 20:56:30.843685 master-0 kubenswrapper[7484]: E0312 20:56:30.843066 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 12 20:56:30.843685 master-0 kubenswrapper[7484]: 
I0312 20:56:30.843084 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 12 20:56:30.843685 master-0 kubenswrapper[7484]: I0312 20:56:30.843323 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 12 20:56:30.843685 master-0 kubenswrapper[7484]: I0312 20:56:30.843347 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 12 20:56:30.843685 master-0 kubenswrapper[7484]: I0312 20:56:30.843366 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 12 20:56:30.843685 master-0 kubenswrapper[7484]: I0312 20:56:30.843408 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 12 20:56:30.843685 master-0 kubenswrapper[7484]: I0312 20:56:30.843427 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 12 20:56:30.843685 master-0 kubenswrapper[7484]: E0312 20:56:30.843621 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 12 20:56:30.843685 master-0 kubenswrapper[7484]: I0312 20:56:30.843644 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 12 20:56:30.845095 master-0 kubenswrapper[7484]: I0312 20:56:30.843908 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 12 20:56:30.845095 master-0 kubenswrapper[7484]: I0312 20:56:30.843934 7484 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 12 20:56:30.845095 master-0 kubenswrapper[7484]: E0312 20:56:30.844134 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 12 20:56:30.845095 master-0 kubenswrapper[7484]: I0312 20:56:30.844412 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 12 20:56:30.846053 master-0 kubenswrapper[7484]: I0312 20:56:30.845766 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 20:56:30.934309 master-0 kubenswrapper[7484]: I0312 20:56:30.934226 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7d54a9c5cfaefbffe1b215272d01bc0c-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"7d54a9c5cfaefbffe1b215272d01bc0c\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 20:56:30.934584 master-0 kubenswrapper[7484]: I0312 20:56:30.934386 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7d54a9c5cfaefbffe1b215272d01bc0c-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"7d54a9c5cfaefbffe1b215272d01bc0c\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 20:56:31.035947 master-0 kubenswrapper[7484]: I0312 20:56:31.035877 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7d54a9c5cfaefbffe1b215272d01bc0c-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: 
\"7d54a9c5cfaefbffe1b215272d01bc0c\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 20:56:31.036162 master-0 kubenswrapper[7484]: I0312 20:56:31.035999 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7d54a9c5cfaefbffe1b215272d01bc0c-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"7d54a9c5cfaefbffe1b215272d01bc0c\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 20:56:31.036162 master-0 kubenswrapper[7484]: I0312 20:56:31.036039 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7d54a9c5cfaefbffe1b215272d01bc0c-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"7d54a9c5cfaefbffe1b215272d01bc0c\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 20:56:31.036292 master-0 kubenswrapper[7484]: I0312 20:56:31.036207 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7d54a9c5cfaefbffe1b215272d01bc0c-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"7d54a9c5cfaefbffe1b215272d01bc0c\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 20:56:31.115736 master-0 kubenswrapper[7484]: I0312 20:56:31.115563 7484 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="c6140b342e454560e27bc37359b130097e81f913d9eb4fdb50381c726897af14" exitCode=0 Mar 12 20:56:31.115736 master-0 kubenswrapper[7484]: I0312 20:56:31.115690 7484 scope.go:117] "RemoveContainer" containerID="1b0c3f4b3caa0d5feb808a3612fec0d5e14e38edd6b5d67620e75cb7f7990bd6" Mar 12 20:56:31.355559 master-0 kubenswrapper[7484]: I0312 20:56:31.355496 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 20:56:31.363365 master-0 kubenswrapper[7484]: I0312 20:56:31.363289 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 12 20:56:31.927056 master-0 kubenswrapper[7484]: W0312 20:56:31.926933 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d54a9c5cfaefbffe1b215272d01bc0c.slice/crio-f365a407143b07d7ab3bf3145491c06b19450d422583608ac9a40200009f40fa WatchSource:0}: Error finding container f365a407143b07d7ab3bf3145491c06b19450d422583608ac9a40200009f40fa: Status 404 returned error can't find the container with id f365a407143b07d7ab3bf3145491c06b19450d422583608ac9a40200009f40fa Mar 12 20:56:31.998732 master-0 kubenswrapper[7484]: I0312 20:56:31.998663 7484 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:56:32.124908 master-0 kubenswrapper[7484]: I0312 20:56:32.124794 7484 generic.go:334] "Generic (PLEG): container finished" podID="367123ca-5a21-415c-8ac2-6d875696536b" containerID="73ffa716ed0ceb1f05c1ae94138aa9510898a766a0ea47f5fb2644e437ab8da6" exitCode=0 Mar 12 20:56:32.124908 master-0 kubenswrapper[7484]: I0312 20:56:32.124856 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"367123ca-5a21-415c-8ac2-6d875696536b","Type":"ContainerDied","Data":"73ffa716ed0ceb1f05c1ae94138aa9510898a766a0ea47f5fb2644e437ab8da6"} Mar 12 20:56:32.127117 master-0 kubenswrapper[7484]: I0312 20:56:32.126270 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"7d54a9c5cfaefbffe1b215272d01bc0c","Type":"ContainerStarted","Data":"f365a407143b07d7ab3bf3145491c06b19450d422583608ac9a40200009f40fa"} Mar 12 20:56:32.129975 master-0 kubenswrapper[7484]: I0312 20:56:32.129579 7484 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="473010500c0fd5755ad97dc462629b8580e55a87fa11be411bf25911be941443" exitCode=0 Mar 12 20:56:32.129975 master-0 kubenswrapper[7484]: I0312 20:56:32.129647 7484 scope.go:117] "RemoveContainer" containerID="473010500c0fd5755ad97dc462629b8580e55a87fa11be411bf25911be941443" Mar 12 20:56:32.129975 master-0 kubenswrapper[7484]: I0312 20:56:32.129766 7484 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 12 20:56:32.156206 master-0 kubenswrapper[7484]: I0312 20:56:32.155702 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") " Mar 12 20:56:32.156206 master-0 kubenswrapper[7484]: I0312 20:56:32.155753 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") " Mar 12 20:56:32.156206 master-0 kubenswrapper[7484]: I0312 20:56:32.155789 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") " Mar 12 20:56:32.156206 master-0 kubenswrapper[7484]: I0312 20:56:32.155843 7484 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") " Mar 12 20:56:32.156206 master-0 kubenswrapper[7484]: I0312 20:56:32.155910 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") " Mar 12 20:56:32.156206 master-0 kubenswrapper[7484]: I0312 20:56:32.155925 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs" (OuterVolumeSpecName: "logs") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 20:56:32.156206 master-0 kubenswrapper[7484]: I0312 20:56:32.155972 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 20:56:32.156206 master-0 kubenswrapper[7484]: I0312 20:56:32.155994 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "etc-kubernetes-cloud". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 20:56:32.156206 master-0 kubenswrapper[7484]: I0312 20:56:32.156013 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config" (OuterVolumeSpecName: "config") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 20:56:32.156206 master-0 kubenswrapper[7484]: I0312 20:56:32.156118 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets" (OuterVolumeSpecName: "secrets") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 20:56:32.156206 master-0 kubenswrapper[7484]: I0312 20:56:32.156142 7484 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") on node \"master-0\" DevicePath \"\"" Mar 12 20:56:32.156206 master-0 kubenswrapper[7484]: I0312 20:56:32.156158 7484 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") on node \"master-0\" DevicePath \"\"" Mar 12 20:56:32.156206 master-0 kubenswrapper[7484]: I0312 20:56:32.156167 7484 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\"" Mar 12 20:56:32.156206 master-0 kubenswrapper[7484]: I0312 20:56:32.156179 7484 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") on node \"master-0\" DevicePath \"\"" Mar 
12 20:56:32.157604 master-0 kubenswrapper[7484]: I0312 20:56:32.157569 7484 scope.go:117] "RemoveContainer" containerID="c6140b342e454560e27bc37359b130097e81f913d9eb4fdb50381c726897af14" Mar 12 20:56:32.175126 master-0 kubenswrapper[7484]: I0312 20:56:32.174996 7484 scope.go:117] "RemoveContainer" containerID="84660712d67f9b33ee49163616de93d3b9e986937307a8e3781dd5d5f489844c" Mar 12 20:56:32.190151 master-0 kubenswrapper[7484]: I0312 20:56:32.190104 7484 scope.go:117] "RemoveContainer" containerID="473010500c0fd5755ad97dc462629b8580e55a87fa11be411bf25911be941443" Mar 12 20:56:32.197951 master-0 kubenswrapper[7484]: E0312 20:56:32.190862 7484 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"473010500c0fd5755ad97dc462629b8580e55a87fa11be411bf25911be941443\": container with ID starting with 473010500c0fd5755ad97dc462629b8580e55a87fa11be411bf25911be941443 not found: ID does not exist" containerID="473010500c0fd5755ad97dc462629b8580e55a87fa11be411bf25911be941443" Mar 12 20:56:32.197951 master-0 kubenswrapper[7484]: I0312 20:56:32.190906 7484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"473010500c0fd5755ad97dc462629b8580e55a87fa11be411bf25911be941443"} err="failed to get container status \"473010500c0fd5755ad97dc462629b8580e55a87fa11be411bf25911be941443\": rpc error: code = NotFound desc = could not find container \"473010500c0fd5755ad97dc462629b8580e55a87fa11be411bf25911be941443\": container with ID starting with 473010500c0fd5755ad97dc462629b8580e55a87fa11be411bf25911be941443 not found: ID does not exist" Mar 12 20:56:32.197951 master-0 kubenswrapper[7484]: I0312 20:56:32.190939 7484 scope.go:117] "RemoveContainer" containerID="c6140b342e454560e27bc37359b130097e81f913d9eb4fdb50381c726897af14" Mar 12 20:56:32.197951 master-0 kubenswrapper[7484]: E0312 20:56:32.195764 7484 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"c6140b342e454560e27bc37359b130097e81f913d9eb4fdb50381c726897af14\": container with ID starting with c6140b342e454560e27bc37359b130097e81f913d9eb4fdb50381c726897af14 not found: ID does not exist" containerID="c6140b342e454560e27bc37359b130097e81f913d9eb4fdb50381c726897af14" Mar 12 20:56:32.197951 master-0 kubenswrapper[7484]: I0312 20:56:32.195822 7484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6140b342e454560e27bc37359b130097e81f913d9eb4fdb50381c726897af14"} err="failed to get container status \"c6140b342e454560e27bc37359b130097e81f913d9eb4fdb50381c726897af14\": rpc error: code = NotFound desc = could not find container \"c6140b342e454560e27bc37359b130097e81f913d9eb4fdb50381c726897af14\": container with ID starting with c6140b342e454560e27bc37359b130097e81f913d9eb4fdb50381c726897af14 not found: ID does not exist" Mar 12 20:56:32.197951 master-0 kubenswrapper[7484]: I0312 20:56:32.195854 7484 scope.go:117] "RemoveContainer" containerID="84660712d67f9b33ee49163616de93d3b9e986937307a8e3781dd5d5f489844c" Mar 12 20:56:32.197951 master-0 kubenswrapper[7484]: E0312 20:56:32.196234 7484 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84660712d67f9b33ee49163616de93d3b9e986937307a8e3781dd5d5f489844c\": container with ID starting with 84660712d67f9b33ee49163616de93d3b9e986937307a8e3781dd5d5f489844c not found: ID does not exist" containerID="84660712d67f9b33ee49163616de93d3b9e986937307a8e3781dd5d5f489844c" Mar 12 20:56:32.197951 master-0 kubenswrapper[7484]: I0312 20:56:32.196259 7484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84660712d67f9b33ee49163616de93d3b9e986937307a8e3781dd5d5f489844c"} err="failed to get container status \"84660712d67f9b33ee49163616de93d3b9e986937307a8e3781dd5d5f489844c\": rpc error: code = NotFound desc = could not find container 
\"84660712d67f9b33ee49163616de93d3b9e986937307a8e3781dd5d5f489844c\": container with ID starting with 84660712d67f9b33ee49163616de93d3b9e986937307a8e3781dd5d5f489844c not found: ID does not exist" Mar 12 20:56:32.257623 master-0 kubenswrapper[7484]: I0312 20:56:32.257522 7484 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") on node \"master-0\" DevicePath \"\"" Mar 12 20:56:33.142549 master-0 kubenswrapper[7484]: I0312 20:56:33.142486 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7d54a9c5cfaefbffe1b215272d01bc0c","Type":"ContainerStarted","Data":"3903035b9e73b841d666d6fc139bd62b961c60d2e83441c115f7bd868868c079"} Mar 12 20:56:33.143302 master-0 kubenswrapper[7484]: I0312 20:56:33.143258 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7d54a9c5cfaefbffe1b215272d01bc0c","Type":"ContainerStarted","Data":"7f2dec97dd1ce529f99f40df66e2e92b6d6da2e679bbce21a7eba2d896a0203a"} Mar 12 20:56:33.143425 master-0 kubenswrapper[7484]: I0312 20:56:33.143408 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7d54a9c5cfaefbffe1b215272d01bc0c","Type":"ContainerStarted","Data":"41b66431878d44ab858bd298f2664ca1044c24d2683709493ac4eda068452880"} Mar 12 20:56:33.143548 master-0 kubenswrapper[7484]: I0312 20:56:33.143532 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7d54a9c5cfaefbffe1b215272d01bc0c","Type":"ContainerStarted","Data":"4f6de2cd5a1fff08ef55af61c8bc016882b96a14bcce20fcbe68fbc0199f304d"} Mar 12 20:56:33.144830 master-0 kubenswrapper[7484]: I0312 20:56:33.144733 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-57dhl" event={"ID":"2ce9bbb5-37b5-4b43-aeb6-904bd0d86500","Type":"ContainerStarted","Data":"cf5a4e3fdfaf098fde310a2e55edff9907f8a106e9bfb0ed3d90b986edaaa500"} Mar 12 20:56:33.465252 master-0 kubenswrapper[7484]: I0312 20:56:33.465211 7484 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 20:56:33.474383 master-0 kubenswrapper[7484]: I0312 20:56:33.474365 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/367123ca-5a21-415c-8ac2-6d875696536b-kubelet-dir\") pod \"367123ca-5a21-415c-8ac2-6d875696536b\" (UID: \"367123ca-5a21-415c-8ac2-6d875696536b\") " Mar 12 20:56:33.474500 master-0 kubenswrapper[7484]: I0312 20:56:33.474487 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/367123ca-5a21-415c-8ac2-6d875696536b-kube-api-access\") pod \"367123ca-5a21-415c-8ac2-6d875696536b\" (UID: \"367123ca-5a21-415c-8ac2-6d875696536b\") " Mar 12 20:56:33.474578 master-0 kubenswrapper[7484]: I0312 20:56:33.474567 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/367123ca-5a21-415c-8ac2-6d875696536b-var-lock\") pod \"367123ca-5a21-415c-8ac2-6d875696536b\" (UID: \"367123ca-5a21-415c-8ac2-6d875696536b\") " Mar 12 20:56:33.474802 master-0 kubenswrapper[7484]: I0312 20:56:33.474788 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/367123ca-5a21-415c-8ac2-6d875696536b-var-lock" (OuterVolumeSpecName: "var-lock") pod "367123ca-5a21-415c-8ac2-6d875696536b" (UID: "367123ca-5a21-415c-8ac2-6d875696536b"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 20:56:33.474894 master-0 kubenswrapper[7484]: I0312 20:56:33.474881 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/367123ca-5a21-415c-8ac2-6d875696536b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "367123ca-5a21-415c-8ac2-6d875696536b" (UID: "367123ca-5a21-415c-8ac2-6d875696536b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 20:56:33.478192 master-0 kubenswrapper[7484]: I0312 20:56:33.478120 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/367123ca-5a21-415c-8ac2-6d875696536b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "367123ca-5a21-415c-8ac2-6d875696536b" (UID: "367123ca-5a21-415c-8ac2-6d875696536b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 20:56:33.576053 master-0 kubenswrapper[7484]: I0312 20:56:33.575954 7484 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/367123ca-5a21-415c-8ac2-6d875696536b-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 20:56:33.576053 master-0 kubenswrapper[7484]: I0312 20:56:33.576024 7484 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/367123ca-5a21-415c-8ac2-6d875696536b-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 20:56:33.576053 master-0 kubenswrapper[7484]: I0312 20:56:33.576041 7484 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/367123ca-5a21-415c-8ac2-6d875696536b-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 20:56:33.670582 master-0 kubenswrapper[7484]: I0312 20:56:33.670299 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-57dhl" podStartSLOduration=3.289617172 podStartE2EDuration="5.670276269s" podCreationTimestamp="2026-03-12 20:56:28 +0000 UTC" firstStartedPulling="2026-03-12 20:56:29.569362682 +0000 UTC m=+402.054631484" lastFinishedPulling="2026-03-12 20:56:31.950021779 +0000 UTC m=+404.435290581" observedRunningTime="2026-03-12 20:56:33.667649734 +0000 UTC m=+406.152918576" watchObservedRunningTime="2026-03-12 20:56:33.670276269 +0000 UTC m=+406.155545101" Mar 12 20:56:33.745756 master-0 kubenswrapper[7484]: I0312 20:56:33.745666 7484 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f78c05e1499b533b83f091333d61f045" path="/var/lib/kubelet/pods/f78c05e1499b533b83f091333d61f045/volumes" Mar 12 20:56:33.746299 master-0 kubenswrapper[7484]: I0312 20:56:33.746273 7484 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="" Mar 12 20:56:34.165388 master-0 kubenswrapper[7484]: I0312 20:56:34.165329 7484 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 12 20:56:34.798194 master-0 kubenswrapper[7484]: E0312 20:56:34.798062 7484 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.065s" Mar 12 20:56:34.798194 master-0 kubenswrapper[7484]: I0312 20:56:34.798173 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"367123ca-5a21-415c-8ac2-6d875696536b","Type":"ContainerDied","Data":"37fc84c4a8eee335ea22dc095e587b155c6991b713fe7ec213d1940d68351e07"} Mar 12 20:56:34.798630 master-0 kubenswrapper[7484]: I0312 20:56:34.798259 7484 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37fc84c4a8eee335ea22dc095e587b155c6991b713fe7ec213d1940d68351e07" Mar 12 20:56:34.798630 master-0 kubenswrapper[7484]: I0312 20:56:34.798355 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 12 20:56:34.798630 master-0 kubenswrapper[7484]: I0312 20:56:34.798380 7484 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="7bd386d1-6e87-42c3-8451-c624c55e3e2a" Mar 12 20:56:34.808397 master-0 kubenswrapper[7484]: I0312 20:56:34.808328 7484 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 12 20:56:34.808725 master-0 kubenswrapper[7484]: I0312 20:56:34.808686 7484 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="7bd386d1-6e87-42c3-8451-c624c55e3e2a" Mar 12 20:56:34.852237 master-0 kubenswrapper[7484]: I0312 20:56:34.852122 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
podStartSLOduration=3.852088822 podStartE2EDuration="3.852088822s" podCreationTimestamp="2026-03-12 20:56:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 20:56:34.844653939 +0000 UTC m=+407.329922821" watchObservedRunningTime="2026-03-12 20:56:34.852088822 +0000 UTC m=+407.337357664" Mar 12 20:56:38.120725 master-0 kubenswrapper[7484]: I0312 20:56:38.120626 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gxjmz" Mar 12 20:56:38.166046 master-0 kubenswrapper[7484]: I0312 20:56:38.165972 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gxjmz" Mar 12 20:56:41.356467 master-0 kubenswrapper[7484]: I0312 20:56:41.356329 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 20:56:41.357425 master-0 kubenswrapper[7484]: I0312 20:56:41.357137 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 20:56:41.357425 master-0 kubenswrapper[7484]: I0312 20:56:41.357221 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 20:56:41.357425 master-0 kubenswrapper[7484]: I0312 20:56:41.357377 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 20:56:41.363694 master-0 kubenswrapper[7484]: I0312 20:56:41.363636 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 20:56:41.365508 master-0 kubenswrapper[7484]: I0312 20:56:41.365443 7484 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 20:56:42.235367 master-0 kubenswrapper[7484]: I0312 20:56:42.235268 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 20:56:42.235703 master-0 kubenswrapper[7484]: I0312 20:56:42.235407 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 20:56:53.279441 master-0 kubenswrapper[7484]: I0312 20:56:53.279374 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wjpf9"] Mar 12 20:56:53.280287 master-0 kubenswrapper[7484]: E0312 20:56:53.279614 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="367123ca-5a21-415c-8ac2-6d875696536b" containerName="installer" Mar 12 20:56:53.280287 master-0 kubenswrapper[7484]: I0312 20:56:53.279625 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="367123ca-5a21-415c-8ac2-6d875696536b" containerName="installer" Mar 12 20:56:53.280287 master-0 kubenswrapper[7484]: I0312 20:56:53.279725 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="367123ca-5a21-415c-8ac2-6d875696536b" containerName="installer" Mar 12 20:56:53.280287 master-0 kubenswrapper[7484]: I0312 20:56:53.280267 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wjpf9" Mar 12 20:56:53.284862 master-0 kubenswrapper[7484]: I0312 20:56:53.283948 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 12 20:56:53.284862 master-0 kubenswrapper[7484]: I0312 20:56:53.284290 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-7t6bk" Mar 12 20:56:53.284862 master-0 kubenswrapper[7484]: I0312 20:56:53.284332 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 12 20:56:53.284862 master-0 kubenswrapper[7484]: I0312 20:56:53.284464 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 12 20:56:53.297620 master-0 kubenswrapper[7484]: I0312 20:56:53.297554 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wjpf9"] Mar 12 20:56:53.301056 master-0 kubenswrapper[7484]: I0312 20:56:53.300953 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-57dhl"] Mar 12 20:56:53.301408 master-0 kubenswrapper[7484]: I0312 20:56:53.301324 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-57dhl" podUID="2ce9bbb5-37b5-4b43-aeb6-904bd0d86500" containerName="kube-rbac-proxy" containerID="cri-o://10c5515cb5ef67581a8254d958271ba2cb6a67cbb6ee1c3b3b7f00d6f3e32b8f" gracePeriod=30 Mar 12 20:56:53.302535 master-0 kubenswrapper[7484]: I0312 20:56:53.301545 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-57dhl" 
podUID="2ce9bbb5-37b5-4b43-aeb6-904bd0d86500" containerName="machine-approver-controller" containerID="cri-o://cf5a4e3fdfaf098fde310a2e55edff9907f8a106e9bfb0ed3d90b986edaaa500" gracePeriod=30 Mar 12 20:56:53.352324 master-0 kubenswrapper[7484]: I0312 20:56:53.352192 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbnbs\" (UniqueName: \"kubernetes.io/projected/32050f14-1939-41bf-a824-22016b90c189-kube-api-access-pbnbs\") pod \"cluster-samples-operator-664cb58b85-wjpf9\" (UID: \"32050f14-1939-41bf-a824-22016b90c189\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wjpf9" Mar 12 20:56:53.352324 master-0 kubenswrapper[7484]: I0312 20:56:53.352277 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/32050f14-1939-41bf-a824-22016b90c189-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-wjpf9\" (UID: \"32050f14-1939-41bf-a824-22016b90c189\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wjpf9" Mar 12 20:56:53.381928 master-0 kubenswrapper[7484]: I0312 20:56:53.379903 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-8f89dfddd-lc7jk"] Mar 12 20:56:53.381928 master-0 kubenswrapper[7484]: I0312 20:56:53.380511 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-8f89dfddd-lc7jk" Mar 12 20:56:53.389181 master-0 kubenswrapper[7484]: I0312 20:56:53.389147 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Mar 12 20:56:53.389452 master-0 kubenswrapper[7484]: I0312 20:56:53.389406 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 12 20:56:53.389648 master-0 kubenswrapper[7484]: I0312 20:56:53.389624 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 12 20:56:53.389727 master-0 kubenswrapper[7484]: I0312 20:56:53.389715 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-n68ff" Mar 12 20:56:53.389939 master-0 kubenswrapper[7484]: I0312 20:56:53.389912 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 12 20:56:53.390022 master-0 kubenswrapper[7484]: I0312 20:56:53.390011 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 12 20:56:53.392857 master-0 kubenswrapper[7484]: I0312 20:56:53.391949 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-j79ht"] Mar 12 20:56:53.392857 master-0 kubenswrapper[7484]: I0312 20:56:53.392747 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-j79ht"
Mar 12 20:56:53.399857 master-0 kubenswrapper[7484]: I0312 20:56:53.399010 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-ftxzs"]
Mar 12 20:56:53.402827 master-0 kubenswrapper[7484]: I0312 20:56:53.400143 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-ftxzs"
Mar 12 20:56:53.402827 master-0 kubenswrapper[7484]: I0312 20:56:53.401774 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Mar 12 20:56:53.402827 master-0 kubenswrapper[7484]: I0312 20:56:53.402077 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-bk87n"
Mar 12 20:56:53.402827 master-0 kubenswrapper[7484]: I0312 20:56:53.402667 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Mar 12 20:56:53.402827 master-0 kubenswrapper[7484]: I0312 20:56:53.402710 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Mar 12 20:56:53.403052 master-0 kubenswrapper[7484]: I0312 20:56:53.403003 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Mar 12 20:56:53.405346 master-0 kubenswrapper[7484]: I0312 20:56:53.405308 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-ml8vb"]
Mar 12 20:56:53.405765 master-0 kubenswrapper[7484]: I0312 20:56:53.405733 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-xjkth"
Mar 12 20:56:53.406879 master-0 kubenswrapper[7484]: I0312 20:56:53.406860 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-ml8vb"
Mar 12 20:56:53.410839 master-0 kubenswrapper[7484]: I0312 20:56:53.408842 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Mar 12 20:56:53.419501 master-0 kubenswrapper[7484]: I0312 20:56:53.419334 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-8f89dfddd-lc7jk"]
Mar 12 20:56:53.420465 master-0 kubenswrapper[7484]: I0312 20:56:53.420325 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Mar 12 20:56:53.420696 master-0 kubenswrapper[7484]: I0312 20:56:53.420675 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Mar 12 20:56:53.420861 master-0 kubenswrapper[7484]: I0312 20:56:53.420832 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Mar 12 20:56:53.421011 master-0 kubenswrapper[7484]: I0312 20:56:53.420986 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-r4pnh"
Mar 12 20:56:53.421126 master-0 kubenswrapper[7484]: I0312 20:56:53.421104 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Mar 12 20:56:53.421432 master-0 kubenswrapper[7484]: I0312 20:56:53.421398 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Mar 12 20:56:53.421605 master-0 kubenswrapper[7484]: I0312 20:56:53.421577 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-j79ht"]
Mar 12 20:56:53.445402 master-0 kubenswrapper[7484]: I0312 20:56:53.445374 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-ftxzs"]
Mar 12 20:56:53.455260 master-0 kubenswrapper[7484]: I0312 20:56:53.453113 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbnbs\" (UniqueName: \"kubernetes.io/projected/32050f14-1939-41bf-a824-22016b90c189-kube-api-access-pbnbs\") pod \"cluster-samples-operator-664cb58b85-wjpf9\" (UID: \"32050f14-1939-41bf-a824-22016b90c189\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wjpf9"
Mar 12 20:56:53.455260 master-0 kubenswrapper[7484]: I0312 20:56:53.453169 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc757324-bbc7-480c-8f16-eb454cfce5b7-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-559568b945-ml8vb\" (UID: \"dc757324-bbc7-480c-8f16-eb454cfce5b7\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-ml8vb"
Mar 12 20:56:53.455260 master-0 kubenswrapper[7484]: I0312 20:56:53.453193 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dc757324-bbc7-480c-8f16-eb454cfce5b7-images\") pod \"cluster-cloud-controller-manager-operator-559568b945-ml8vb\" (UID: \"dc757324-bbc7-480c-8f16-eb454cfce5b7\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-ml8vb"
Mar 12 20:56:53.455260 master-0 kubenswrapper[7484]: I0312 20:56:53.453218 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/dc757324-bbc7-480c-8f16-eb454cfce5b7-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-559568b945-ml8vb\" (UID: \"dc757324-bbc7-480c-8f16-eb454cfce5b7\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-ml8vb"
Mar 12 20:56:53.455260 master-0 kubenswrapper[7484]: I0312 20:56:53.453240 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-lc7jk\" (UID: \"a5d1e064-c12b-4c1d-b499-4e301ca8a8dc\") " pod="openshift-insights/insights-operator-8f89dfddd-lc7jk"
Mar 12 20:56:53.455260 master-0 kubenswrapper[7484]: I0312 20:56:53.453256 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/32050f14-1939-41bf-a824-22016b90c189-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-wjpf9\" (UID: \"32050f14-1939-41bf-a824-22016b90c189\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wjpf9"
Mar 12 20:56:53.455260 master-0 kubenswrapper[7484]: I0312 20:56:53.453272 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc-snapshots\") pod \"insights-operator-8f89dfddd-lc7jk\" (UID: \"a5d1e064-c12b-4c1d-b499-4e301ca8a8dc\") " pod="openshift-insights/insights-operator-8f89dfddd-lc7jk"
Mar 12 20:56:53.455260 master-0 kubenswrapper[7484]: I0312 20:56:53.453300 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8745n\" (UniqueName: \"kubernetes.io/projected/7f3afe47-c537-420c-b5be-1cad612e119d-kube-api-access-8745n\") pod \"cluster-storage-operator-6fbfc8dc8f-ftxzs\" (UID: \"7f3afe47-c537-420c-b5be-1cad612e119d\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-ftxzs"
Mar 12 20:56:53.455260 master-0 kubenswrapper[7484]: I0312 20:56:53.453319 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr8xl\" (UniqueName: \"kubernetes.io/projected/dc757324-bbc7-480c-8f16-eb454cfce5b7-kube-api-access-mr8xl\") pod \"cluster-cloud-controller-manager-operator-559568b945-ml8vb\" (UID: \"dc757324-bbc7-480c-8f16-eb454cfce5b7\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-ml8vb"
Mar 12 20:56:53.455260 master-0 kubenswrapper[7484]: I0312 20:56:53.453339 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc-service-ca-bundle\") pod \"insights-operator-8f89dfddd-lc7jk\" (UID: \"a5d1e064-c12b-4c1d-b499-4e301ca8a8dc\") " pod="openshift-insights/insights-operator-8f89dfddd-lc7jk"
Mar 12 20:56:53.455260 master-0 kubenswrapper[7484]: I0312 20:56:53.453358 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n555w\" (UniqueName: \"kubernetes.io/projected/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc-kube-api-access-n555w\") pod \"insights-operator-8f89dfddd-lc7jk\" (UID: \"a5d1e064-c12b-4c1d-b499-4e301ca8a8dc\") " pod="openshift-insights/insights-operator-8f89dfddd-lc7jk"
Mar 12 20:56:53.455260 master-0 kubenswrapper[7484]: I0312 20:56:53.453376 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f3afe47-c537-420c-b5be-1cad612e119d-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-ftxzs\" (UID: \"7f3afe47-c537-420c-b5be-1cad612e119d\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-ftxzs"
Mar 12 20:56:53.455260 master-0 kubenswrapper[7484]: I0312 20:56:53.453391 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc-serving-cert\") pod \"insights-operator-8f89dfddd-lc7jk\" (UID: \"a5d1e064-c12b-4c1d-b499-4e301ca8a8dc\") " pod="openshift-insights/insights-operator-8f89dfddd-lc7jk"
Mar 12 20:56:53.455260 master-0 kubenswrapper[7484]: I0312 20:56:53.453406 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dc757324-bbc7-480c-8f16-eb454cfce5b7-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-559568b945-ml8vb\" (UID: \"dc757324-bbc7-480c-8f16-eb454cfce5b7\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-ml8vb"
Mar 12 20:56:53.471136 master-0 kubenswrapper[7484]: I0312 20:56:53.466387 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/32050f14-1939-41bf-a824-22016b90c189-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-wjpf9\" (UID: \"32050f14-1939-41bf-a824-22016b90c189\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wjpf9"
Mar 12 20:56:53.497106 master-0 kubenswrapper[7484]: I0312 20:56:53.497043 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-fdb5c78b5-7p8w8"]
Mar 12 20:56:53.497973 master-0 kubenswrapper[7484]: I0312 20:56:53.497947 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-7p8w8"
Mar 12 20:56:53.508140 master-0 kubenswrapper[7484]: I0312 20:56:53.507674 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-62zgv"
Mar 12 20:56:53.508410 master-0 kubenswrapper[7484]: I0312 20:56:53.508184 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Mar 12 20:56:53.518016 master-0 kubenswrapper[7484]: I0312 20:56:53.509569 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbnbs\" (UniqueName: \"kubernetes.io/projected/32050f14-1939-41bf-a824-22016b90c189-kube-api-access-pbnbs\") pod \"cluster-samples-operator-664cb58b85-wjpf9\" (UID: \"32050f14-1939-41bf-a824-22016b90c189\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wjpf9"
Mar 12 20:56:53.518016 master-0 kubenswrapper[7484]: I0312 20:56:53.511112 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Mar 12 20:56:53.518016 master-0 kubenswrapper[7484]: I0312 20:56:53.516613 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Mar 12 20:56:53.519621 master-0 kubenswrapper[7484]: I0312 20:56:53.519563 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-84bf6db4f9-sh67s"]
Mar 12 20:56:53.532308 master-0 kubenswrapper[7484]: I0312 20:56:53.526558 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Mar 12 20:56:53.549651 master-0 kubenswrapper[7484]: I0312 20:56:53.549563 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Mar 12 20:56:53.551249 master-0 kubenswrapper[7484]: I0312 20:56:53.551214 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-sh67s"
Mar 12 20:56:53.556588 master-0 kubenswrapper[7484]: I0312 20:56:53.556535 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-9n54f"
Mar 12 20:56:53.557179 master-0 kubenswrapper[7484]: I0312 20:56:53.557148 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpf99\" (UniqueName: \"kubernetes.io/projected/67e68ff0-f54d-4973-bbe7-ed43ce542bc0-kube-api-access-tpf99\") pod \"machine-api-operator-84bf6db4f9-sh67s\" (UID: \"67e68ff0-f54d-4973-bbe7-ed43ce542bc0\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-sh67s"
Mar 12 20:56:53.557239 master-0 kubenswrapper[7484]: I0312 20:56:53.557216 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc757324-bbc7-480c-8f16-eb454cfce5b7-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-559568b945-ml8vb\" (UID: \"dc757324-bbc7-480c-8f16-eb454cfce5b7\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-ml8vb"
Mar 12 20:56:53.557271 master-0 kubenswrapper[7484]: I0312 20:56:53.557243 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dc757324-bbc7-480c-8f16-eb454cfce5b7-images\") pod \"cluster-cloud-controller-manager-operator-559568b945-ml8vb\" (UID: \"dc757324-bbc7-480c-8f16-eb454cfce5b7\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-ml8vb"
Mar 12 20:56:53.557399 master-0 kubenswrapper[7484]: I0312 20:56:53.557377 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 12 20:56:53.557878 master-0 kubenswrapper[7484]: I0312 20:56:53.557862 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 12 20:56:53.561016 master-0 kubenswrapper[7484]: I0312 20:56:53.559501 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dc757324-bbc7-480c-8f16-eb454cfce5b7-images\") pod \"cluster-cloud-controller-manager-operator-559568b945-ml8vb\" (UID: \"dc757324-bbc7-480c-8f16-eb454cfce5b7\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-ml8vb"
Mar 12 20:56:53.566885 master-0 kubenswrapper[7484]: I0312 20:56:53.563884 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-69576476f7-r6rcq"]
Mar 12 20:56:53.566885 master-0 kubenswrapper[7484]: I0312 20:56:53.566393 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc"]
Mar 12 20:56:53.567059 master-0 kubenswrapper[7484]: I0312 20:56:53.557275 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/67e68ff0-f54d-4973-bbe7-ed43ce542bc0-images\") pod \"machine-api-operator-84bf6db4f9-sh67s\" (UID: \"67e68ff0-f54d-4973-bbe7-ed43ce542bc0\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-sh67s"
Mar 12 20:56:53.567155 master-0 kubenswrapper[7484]: I0312 20:56:53.567124 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/dc757324-bbc7-480c-8f16-eb454cfce5b7-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-559568b945-ml8vb\" (UID: \"dc757324-bbc7-480c-8f16-eb454cfce5b7\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-ml8vb"
Mar 12 20:56:53.567215 master-0 kubenswrapper[7484]: I0312 20:56:53.567202 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/508cb83e-6f25-4235-8c56-b25b762ebcad-images\") pod \"machine-config-operator-fdb5c78b5-7p8w8\" (UID: \"508cb83e-6f25-4235-8c56-b25b762ebcad\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-7p8w8"
Mar 12 20:56:53.567274 master-0 kubenswrapper[7484]: I0312 20:56:53.567233 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-lc7jk\" (UID: \"a5d1e064-c12b-4c1d-b499-4e301ca8a8dc\") " pod="openshift-insights/insights-operator-8f89dfddd-lc7jk"
Mar 12 20:56:53.567319 master-0 kubenswrapper[7484]: I0312 20:56:53.567289 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc-snapshots\") pod \"insights-operator-8f89dfddd-lc7jk\" (UID: \"a5d1e064-c12b-4c1d-b499-4e301ca8a8dc\") " pod="openshift-insights/insights-operator-8f89dfddd-lc7jk"
Mar 12 20:56:53.567396 master-0 kubenswrapper[7484]: I0312 20:56:53.567364 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/67e68ff0-f54d-4973-bbe7-ed43ce542bc0-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-sh67s\" (UID: \"67e68ff0-f54d-4973-bbe7-ed43ce542bc0\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-sh67s"
Mar 12 20:56:53.567450 master-0 kubenswrapper[7484]: I0312 20:56:53.567407 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8745n\" (UniqueName: \"kubernetes.io/projected/7f3afe47-c537-420c-b5be-1cad612e119d-kube-api-access-8745n\") pod \"cluster-storage-operator-6fbfc8dc8f-ftxzs\" (UID: \"7f3afe47-c537-420c-b5be-1cad612e119d\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-ftxzs"
Mar 12 20:56:53.567487 master-0 kubenswrapper[7484]: I0312 20:56:53.567453 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67e68ff0-f54d-4973-bbe7-ed43ce542bc0-config\") pod \"machine-api-operator-84bf6db4f9-sh67s\" (UID: \"67e68ff0-f54d-4973-bbe7-ed43ce542bc0\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-sh67s"
Mar 12 20:56:53.567487 master-0 kubenswrapper[7484]: I0312 20:56:53.567477 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4jzt\" (UniqueName: \"kubernetes.io/projected/508cb83e-6f25-4235-8c56-b25b762ebcad-kube-api-access-s4jzt\") pod \"machine-config-operator-fdb5c78b5-7p8w8\" (UID: \"508cb83e-6f25-4235-8c56-b25b762ebcad\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-7p8w8"
Mar 12 20:56:53.567544 master-0 kubenswrapper[7484]: I0312 20:56:53.567514 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/05fd1378-3935-4caf-96c5-17cf7e29417f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-j79ht\" (UID: \"05fd1378-3935-4caf-96c5-17cf7e29417f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-j79ht"
Mar 12 20:56:53.567544 master-0 kubenswrapper[7484]: I0312 20:56:53.567534 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr8xl\" (UniqueName: \"kubernetes.io/projected/dc757324-bbc7-480c-8f16-eb454cfce5b7-kube-api-access-mr8xl\") pod \"cluster-cloud-controller-manager-operator-559568b945-ml8vb\" (UID: \"dc757324-bbc7-480c-8f16-eb454cfce5b7\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-ml8vb"
Mar 12 20:56:53.567603 master-0 kubenswrapper[7484]: I0312 20:56:53.567585 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc-service-ca-bundle\") pod \"insights-operator-8f89dfddd-lc7jk\" (UID: \"a5d1e064-c12b-4c1d-b499-4e301ca8a8dc\") " pod="openshift-insights/insights-operator-8f89dfddd-lc7jk"
Mar 12 20:56:53.567633 master-0 kubenswrapper[7484]: I0312 20:56:53.567612 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xxkr\" (UniqueName: \"kubernetes.io/projected/05fd1378-3935-4caf-96c5-17cf7e29417f-kube-api-access-8xxkr\") pod \"cloud-credential-operator-55d85b7b47-j79ht\" (UID: \"05fd1378-3935-4caf-96c5-17cf7e29417f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-j79ht"
Mar 12 20:56:53.567668 master-0 kubenswrapper[7484]: I0312 20:56:53.567630 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/508cb83e-6f25-4235-8c56-b25b762ebcad-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-7p8w8\" (UID: \"508cb83e-6f25-4235-8c56-b25b762ebcad\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-7p8w8"
Mar 12 20:56:53.567764 master-0 kubenswrapper[7484]: I0312 20:56:53.567711 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05fd1378-3935-4caf-96c5-17cf7e29417f-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-j79ht\" (UID: \"05fd1378-3935-4caf-96c5-17cf7e29417f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-j79ht"
Mar 12 20:56:53.567858 master-0 kubenswrapper[7484]: I0312 20:56:53.567789 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n555w\" (UniqueName: \"kubernetes.io/projected/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc-kube-api-access-n555w\") pod \"insights-operator-8f89dfddd-lc7jk\" (UID: \"a5d1e064-c12b-4c1d-b499-4e301ca8a8dc\") " pod="openshift-insights/insights-operator-8f89dfddd-lc7jk"
Mar 12 20:56:53.567858 master-0 kubenswrapper[7484]: I0312 20:56:53.567847 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f3afe47-c537-420c-b5be-1cad612e119d-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-ftxzs\" (UID: \"7f3afe47-c537-420c-b5be-1cad612e119d\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-ftxzs"
Mar 12 20:56:53.567950 master-0 kubenswrapper[7484]: I0312 20:56:53.567881 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc-serving-cert\") pod \"insights-operator-8f89dfddd-lc7jk\" (UID: \"a5d1e064-c12b-4c1d-b499-4e301ca8a8dc\") " pod="openshift-insights/insights-operator-8f89dfddd-lc7jk"
Mar 12 20:56:53.567950 master-0 kubenswrapper[7484]: I0312 20:56:53.567913 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dc757324-bbc7-480c-8f16-eb454cfce5b7-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-559568b945-ml8vb\" (UID: \"dc757324-bbc7-480c-8f16-eb454cfce5b7\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-ml8vb"
Mar 12 20:56:53.567950 master-0 kubenswrapper[7484]: I0312 20:56:53.567935 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/508cb83e-6f25-4235-8c56-b25b762ebcad-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-7p8w8\" (UID: \"508cb83e-6f25-4235-8c56-b25b762ebcad\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-7p8w8"
Mar 12 20:56:53.568100 master-0 kubenswrapper[7484]: I0312 20:56:53.568075 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 12 20:56:53.575221 master-0 kubenswrapper[7484]: I0312 20:56:53.567888 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-r6rcq"
Mar 12 20:56:53.575221 master-0 kubenswrapper[7484]: I0312 20:56:53.569744 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-fdb5c78b5-7p8w8"]
Mar 12 20:56:53.575221 master-0 kubenswrapper[7484]: I0312 20:56:53.569909 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc"
Mar 12 20:56:53.575221 master-0 kubenswrapper[7484]: I0312 20:56:53.569930 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc-service-ca-bundle\") pod \"insights-operator-8f89dfddd-lc7jk\" (UID: \"a5d1e064-c12b-4c1d-b499-4e301ca8a8dc\") " pod="openshift-insights/insights-operator-8f89dfddd-lc7jk"
Mar 12 20:56:53.575221 master-0 kubenswrapper[7484]: I0312 20:56:53.570697 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/dc757324-bbc7-480c-8f16-eb454cfce5b7-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-559568b945-ml8vb\" (UID: \"dc757324-bbc7-480c-8f16-eb454cfce5b7\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-ml8vb"
Mar 12 20:56:53.575221 master-0 kubenswrapper[7484]: I0312 20:56:53.570733 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-84bf6db4f9-sh67s"]
Mar 12 20:56:53.575221 master-0 kubenswrapper[7484]: I0312 20:56:53.570888 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-lc7jk\" (UID: \"a5d1e064-c12b-4c1d-b499-4e301ca8a8dc\") " pod="openshift-insights/insights-operator-8f89dfddd-lc7jk"
Mar 12 20:56:53.575221 master-0 kubenswrapper[7484]: I0312 20:56:53.571367 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc-snapshots\") pod \"insights-operator-8f89dfddd-lc7jk\" (UID: \"a5d1e064-c12b-4c1d-b499-4e301ca8a8dc\") " pod="openshift-insights/insights-operator-8f89dfddd-lc7jk"
Mar 12 20:56:53.575576 master-0 kubenswrapper[7484]: I0312 20:56:53.575521 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Mar 12 20:56:53.575576 master-0 kubenswrapper[7484]: I0312 20:56:53.575526 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc-serving-cert\") pod \"insights-operator-8f89dfddd-lc7jk\" (UID: \"a5d1e064-c12b-4c1d-b499-4e301ca8a8dc\") " pod="openshift-insights/insights-operator-8f89dfddd-lc7jk"
Mar 12 20:56:53.575680 master-0 kubenswrapper[7484]: I0312 20:56:53.575649 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Mar 12 20:56:53.575721 master-0 kubenswrapper[7484]: I0312 20:56:53.575660 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-7875j"
Mar 12 20:56:53.575904 master-0 kubenswrapper[7484]: I0312 20:56:53.575864 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Mar 12 20:56:53.575953 master-0 kubenswrapper[7484]: I0312 20:56:53.575911 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Mar 12 20:56:53.576152 master-0 kubenswrapper[7484]: I0312 20:56:53.576124 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Mar 12 20:56:53.576263 master-0 kubenswrapper[7484]: I0312 20:56:53.576241 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Mar 12 20:56:53.576452 master-0 kubenswrapper[7484]: I0312 20:56:53.576425 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc757324-bbc7-480c-8f16-eb454cfce5b7-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-559568b945-ml8vb\" (UID: \"dc757324-bbc7-480c-8f16-eb454cfce5b7\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-ml8vb"
Mar 12 20:56:53.578804 master-0 kubenswrapper[7484]: I0312 20:56:53.576844 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-bxh97"
Mar 12 20:56:53.578804 master-0 kubenswrapper[7484]: I0312 20:56:53.576439 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dc757324-bbc7-480c-8f16-eb454cfce5b7-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-559568b945-ml8vb\" (UID: \"dc757324-bbc7-480c-8f16-eb454cfce5b7\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-ml8vb"
Mar 12 20:56:53.596579 master-0 kubenswrapper[7484]: I0312 20:56:53.590515 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc"]
Mar 12 20:56:53.611249 master-0 kubenswrapper[7484]: I0312 20:56:53.611191 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8745n\" (UniqueName: \"kubernetes.io/projected/7f3afe47-c537-420c-b5be-1cad612e119d-kube-api-access-8745n\") pod \"cluster-storage-operator-6fbfc8dc8f-ftxzs\" (UID: \"7f3afe47-c537-420c-b5be-1cad612e119d\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-ftxzs"
Mar 12 20:56:53.611457 master-0 kubenswrapper[7484]: I0312 20:56:53.611285 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-69576476f7-r6rcq"]
Mar 12 20:56:53.611457 master-0 kubenswrapper[7484]: I0312 20:56:53.611419 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr8xl\" (UniqueName: \"kubernetes.io/projected/dc757324-bbc7-480c-8f16-eb454cfce5b7-kube-api-access-mr8xl\") pod \"cluster-cloud-controller-manager-operator-559568b945-ml8vb\" (UID: \"dc757324-bbc7-480c-8f16-eb454cfce5b7\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-ml8vb"
Mar 12 20:56:53.615917 master-0 kubenswrapper[7484]: I0312 20:56:53.614925 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f3afe47-c537-420c-b5be-1cad612e119d-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-ftxzs\" (UID: \"7f3afe47-c537-420c-b5be-1cad612e119d\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-ftxzs"
Mar 12 20:56:53.618996 master-0 kubenswrapper[7484]: I0312 20:56:53.618935 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n555w\" (UniqueName: \"kubernetes.io/projected/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc-kube-api-access-n555w\") pod \"insights-operator-8f89dfddd-lc7jk\" (UID: \"a5d1e064-c12b-4c1d-b499-4e301ca8a8dc\") " pod="openshift-insights/insights-operator-8f89dfddd-lc7jk"
Mar 12 20:56:53.650165 master-0 kubenswrapper[7484]: I0312 20:56:53.649597 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-ftxzs"
Mar 12 20:56:53.668741 master-0 kubenswrapper[7484]: I0312 20:56:53.668659 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/67e68ff0-f54d-4973-bbe7-ed43ce542bc0-images\") pod \"machine-api-operator-84bf6db4f9-sh67s\" (UID: \"67e68ff0-f54d-4973-bbe7-ed43ce542bc0\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-sh67s"
Mar 12 20:56:53.673781 master-0 kubenswrapper[7484]: I0312 20:56:53.673698 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/67e68ff0-f54d-4973-bbe7-ed43ce542bc0-images\") pod \"machine-api-operator-84bf6db4f9-sh67s\" (UID: \"67e68ff0-f54d-4973-bbe7-ed43ce542bc0\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-sh67s"
Mar 12 20:56:53.673967 master-0 kubenswrapper[7484]: I0312 20:56:53.673895 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b71376ea-e248-48fc-b2c4-1de7236ddd31-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-r6rcq\" (UID: \"b71376ea-e248-48fc-b2c4-1de7236ddd31\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-r6rcq"
Mar 12 20:56:53.674036 master-0 kubenswrapper[7484]: I0312 20:56:53.673988 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/508cb83e-6f25-4235-8c56-b25b762ebcad-images\") pod \"machine-config-operator-fdb5c78b5-7p8w8\" (UID: \"508cb83e-6f25-4235-8c56-b25b762ebcad\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-7p8w8"
Mar 12 20:56:53.674343 master-0 kubenswrapper[7484]: I0312 20:56:53.674312 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17d2bb40-74e2-4894-a884-7018952bdf71-config\") pod \"cluster-baremetal-operator-5cdb4c5598-fnxjc\" (UID: \"17d2bb40-74e2-4894-a884-7018952bdf71\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc"
Mar 12 20:56:53.674406 master-0 kubenswrapper[7484]: I0312 20:56:53.674385 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/67e68ff0-f54d-4973-bbe7-ed43ce542bc0-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-sh67s\" (UID: \"67e68ff0-f54d-4973-bbe7-ed43ce542bc0\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-sh67s"
Mar 12 20:56:53.674453 master-0 kubenswrapper[7484]: I0312 20:56:53.674410 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlrzs\" (UniqueName: \"kubernetes.io/projected/b71376ea-e248-48fc-b2c4-1de7236ddd31-kube-api-access-nlrzs\") pod \"cluster-autoscaler-operator-69576476f7-r6rcq\" (UID: \"b71376ea-e248-48fc-b2c4-1de7236ddd31\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-r6rcq"
Mar 12 20:56:53.674493 master-0 kubenswrapper[7484]: I0312 20:56:53.674462 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/17d2bb40-74e2-4894-a884-7018952bdf71-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-fnxjc\" (UID: \"17d2bb40-74e2-4894-a884-7018952bdf71\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc"
Mar 12 20:56:53.674567 master-0 kubenswrapper[7484]: I0312 20:56:53.674540 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67e68ff0-f54d-4973-bbe7-ed43ce542bc0-config\") pod \"machine-api-operator-84bf6db4f9-sh67s\" (UID:
\"67e68ff0-f54d-4973-bbe7-ed43ce542bc0\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-sh67s" Mar 12 20:56:53.674621 master-0 kubenswrapper[7484]: I0312 20:56:53.674575 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4jzt\" (UniqueName: \"kubernetes.io/projected/508cb83e-6f25-4235-8c56-b25b762ebcad-kube-api-access-s4jzt\") pod \"machine-config-operator-fdb5c78b5-7p8w8\" (UID: \"508cb83e-6f25-4235-8c56-b25b762ebcad\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-7p8w8" Mar 12 20:56:53.675399 master-0 kubenswrapper[7484]: I0312 20:56:53.675358 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/508cb83e-6f25-4235-8c56-b25b762ebcad-images\") pod \"machine-config-operator-fdb5c78b5-7p8w8\" (UID: \"508cb83e-6f25-4235-8c56-b25b762ebcad\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-7p8w8" Mar 12 20:56:53.676389 master-0 kubenswrapper[7484]: I0312 20:56:53.674599 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/05fd1378-3935-4caf-96c5-17cf7e29417f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-j79ht\" (UID: \"05fd1378-3935-4caf-96c5-17cf7e29417f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-j79ht" Mar 12 20:56:53.676921 master-0 kubenswrapper[7484]: I0312 20:56:53.676477 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xxkr\" (UniqueName: \"kubernetes.io/projected/05fd1378-3935-4caf-96c5-17cf7e29417f-kube-api-access-8xxkr\") pod \"cloud-credential-operator-55d85b7b47-j79ht\" (UID: \"05fd1378-3935-4caf-96c5-17cf7e29417f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-j79ht" Mar 12 20:56:53.676921 master-0 
kubenswrapper[7484]: I0312 20:56:53.676530 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/508cb83e-6f25-4235-8c56-b25b762ebcad-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-7p8w8\" (UID: \"508cb83e-6f25-4235-8c56-b25b762ebcad\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-7p8w8" Mar 12 20:56:53.676921 master-0 kubenswrapper[7484]: I0312 20:56:53.676618 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b71376ea-e248-48fc-b2c4-1de7236ddd31-cert\") pod \"cluster-autoscaler-operator-69576476f7-r6rcq\" (UID: \"b71376ea-e248-48fc-b2c4-1de7236ddd31\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-r6rcq" Mar 12 20:56:53.677212 master-0 kubenswrapper[7484]: I0312 20:56:53.677182 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-ml8vb" Mar 12 20:56:53.678608 master-0 kubenswrapper[7484]: I0312 20:56:53.678445 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/67e68ff0-f54d-4973-bbe7-ed43ce542bc0-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-sh67s\" (UID: \"67e68ff0-f54d-4973-bbe7-ed43ce542bc0\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-sh67s" Mar 12 20:56:53.678990 master-0 kubenswrapper[7484]: I0312 20:56:53.678960 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67e68ff0-f54d-4973-bbe7-ed43ce542bc0-config\") pod \"machine-api-operator-84bf6db4f9-sh67s\" (UID: \"67e68ff0-f54d-4973-bbe7-ed43ce542bc0\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-sh67s" Mar 12 20:56:53.679341 master-0 
kubenswrapper[7484]: I0312 20:56:53.679314 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05fd1378-3935-4caf-96c5-17cf7e29417f-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-j79ht\" (UID: \"05fd1378-3935-4caf-96c5-17cf7e29417f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-j79ht" Mar 12 20:56:53.679412 master-0 kubenswrapper[7484]: I0312 20:56:53.679385 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/17d2bb40-74e2-4894-a884-7018952bdf71-images\") pod \"cluster-baremetal-operator-5cdb4c5598-fnxjc\" (UID: \"17d2bb40-74e2-4894-a884-7018952bdf71\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc" Mar 12 20:56:53.679758 master-0 kubenswrapper[7484]: I0312 20:56:53.679732 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/508cb83e-6f25-4235-8c56-b25b762ebcad-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-7p8w8\" (UID: \"508cb83e-6f25-4235-8c56-b25b762ebcad\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-7p8w8" Mar 12 20:56:53.679850 master-0 kubenswrapper[7484]: I0312 20:56:53.679761 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/17d2bb40-74e2-4894-a884-7018952bdf71-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-fnxjc\" (UID: \"17d2bb40-74e2-4894-a884-7018952bdf71\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc" Mar 12 20:56:53.679850 master-0 kubenswrapper[7484]: I0312 20:56:53.679797 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrm2z\" (UniqueName: 
\"kubernetes.io/projected/17d2bb40-74e2-4894-a884-7018952bdf71-kube-api-access-lrm2z\") pod \"cluster-baremetal-operator-5cdb4c5598-fnxjc\" (UID: \"17d2bb40-74e2-4894-a884-7018952bdf71\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc" Mar 12 20:56:53.679850 master-0 kubenswrapper[7484]: I0312 20:56:53.679835 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpf99\" (UniqueName: \"kubernetes.io/projected/67e68ff0-f54d-4973-bbe7-ed43ce542bc0-kube-api-access-tpf99\") pod \"machine-api-operator-84bf6db4f9-sh67s\" (UID: \"67e68ff0-f54d-4973-bbe7-ed43ce542bc0\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-sh67s" Mar 12 20:56:53.681294 master-0 kubenswrapper[7484]: I0312 20:56:53.680491 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/508cb83e-6f25-4235-8c56-b25b762ebcad-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-7p8w8\" (UID: \"508cb83e-6f25-4235-8c56-b25b762ebcad\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-7p8w8" Mar 12 20:56:53.681294 master-0 kubenswrapper[7484]: I0312 20:56:53.681150 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/508cb83e-6f25-4235-8c56-b25b762ebcad-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-7p8w8\" (UID: \"508cb83e-6f25-4235-8c56-b25b762ebcad\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-7p8w8" Mar 12 20:56:53.681294 master-0 kubenswrapper[7484]: I0312 20:56:53.681163 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05fd1378-3935-4caf-96c5-17cf7e29417f-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-j79ht\" (UID: \"05fd1378-3935-4caf-96c5-17cf7e29417f\") " 
pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-j79ht" Mar 12 20:56:53.690013 master-0 kubenswrapper[7484]: I0312 20:56:53.689842 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/05fd1378-3935-4caf-96c5-17cf7e29417f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-j79ht\" (UID: \"05fd1378-3935-4caf-96c5-17cf7e29417f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-j79ht" Mar 12 20:56:53.697628 master-0 kubenswrapper[7484]: I0312 20:56:53.697596 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4jzt\" (UniqueName: \"kubernetes.io/projected/508cb83e-6f25-4235-8c56-b25b762ebcad-kube-api-access-s4jzt\") pod \"machine-config-operator-fdb5c78b5-7p8w8\" (UID: \"508cb83e-6f25-4235-8c56-b25b762ebcad\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-7p8w8" Mar 12 20:56:53.699630 master-0 kubenswrapper[7484]: I0312 20:56:53.699395 7484 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-57dhl" Mar 12 20:56:53.700338 master-0 kubenswrapper[7484]: I0312 20:56:53.700219 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpf99\" (UniqueName: \"kubernetes.io/projected/67e68ff0-f54d-4973-bbe7-ed43ce542bc0-kube-api-access-tpf99\") pod \"machine-api-operator-84bf6db4f9-sh67s\" (UID: \"67e68ff0-f54d-4973-bbe7-ed43ce542bc0\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-sh67s" Mar 12 20:56:53.710293 master-0 kubenswrapper[7484]: I0312 20:56:53.710248 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xxkr\" (UniqueName: \"kubernetes.io/projected/05fd1378-3935-4caf-96c5-17cf7e29417f-kube-api-access-8xxkr\") pod \"cloud-credential-operator-55d85b7b47-j79ht\" (UID: \"05fd1378-3935-4caf-96c5-17cf7e29417f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-j79ht" Mar 12 20:56:53.713823 master-0 kubenswrapper[7484]: I0312 20:56:53.713732 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-7p8w8" Mar 12 20:56:53.726251 master-0 kubenswrapper[7484]: I0312 20:56:53.725780 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-sh67s" Mar 12 20:56:53.738940 master-0 kubenswrapper[7484]: I0312 20:56:53.738665 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wjpf9" Mar 12 20:56:53.781717 master-0 kubenswrapper[7484]: I0312 20:56:53.781620 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ce9bbb5-37b5-4b43-aeb6-904bd0d86500-config\") pod \"2ce9bbb5-37b5-4b43-aeb6-904bd0d86500\" (UID: \"2ce9bbb5-37b5-4b43-aeb6-904bd0d86500\") " Mar 12 20:56:53.783573 master-0 kubenswrapper[7484]: I0312 20:56:53.781960 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2ce9bbb5-37b5-4b43-aeb6-904bd0d86500-auth-proxy-config\") pod \"2ce9bbb5-37b5-4b43-aeb6-904bd0d86500\" (UID: \"2ce9bbb5-37b5-4b43-aeb6-904bd0d86500\") " Mar 12 20:56:53.784048 master-0 kubenswrapper[7484]: I0312 20:56:53.782837 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ce9bbb5-37b5-4b43-aeb6-904bd0d86500-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "2ce9bbb5-37b5-4b43-aeb6-904bd0d86500" (UID: "2ce9bbb5-37b5-4b43-aeb6-904bd0d86500"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 20:56:53.784205 master-0 kubenswrapper[7484]: I0312 20:56:53.783274 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ce9bbb5-37b5-4b43-aeb6-904bd0d86500-config" (OuterVolumeSpecName: "config") pod "2ce9bbb5-37b5-4b43-aeb6-904bd0d86500" (UID: "2ce9bbb5-37b5-4b43-aeb6-904bd0d86500"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 20:56:53.784566 master-0 kubenswrapper[7484]: I0312 20:56:53.783955 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2ce9bbb5-37b5-4b43-aeb6-904bd0d86500-machine-approver-tls\") pod \"2ce9bbb5-37b5-4b43-aeb6-904bd0d86500\" (UID: \"2ce9bbb5-37b5-4b43-aeb6-904bd0d86500\") " Mar 12 20:56:53.784733 master-0 kubenswrapper[7484]: I0312 20:56:53.784547 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rldvq\" (UniqueName: \"kubernetes.io/projected/2ce9bbb5-37b5-4b43-aeb6-904bd0d86500-kube-api-access-rldvq\") pod \"2ce9bbb5-37b5-4b43-aeb6-904bd0d86500\" (UID: \"2ce9bbb5-37b5-4b43-aeb6-904bd0d86500\") " Mar 12 20:56:53.786299 master-0 kubenswrapper[7484]: I0312 20:56:53.785983 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/17d2bb40-74e2-4894-a884-7018952bdf71-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-fnxjc\" (UID: \"17d2bb40-74e2-4894-a884-7018952bdf71\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc" Mar 12 20:56:53.786546 master-0 kubenswrapper[7484]: I0312 20:56:53.786396 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b71376ea-e248-48fc-b2c4-1de7236ddd31-cert\") pod \"cluster-autoscaler-operator-69576476f7-r6rcq\" (UID: \"b71376ea-e248-48fc-b2c4-1de7236ddd31\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-r6rcq" Mar 12 20:56:53.786546 master-0 kubenswrapper[7484]: I0312 20:56:53.786428 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/17d2bb40-74e2-4894-a884-7018952bdf71-images\") pod 
\"cluster-baremetal-operator-5cdb4c5598-fnxjc\" (UID: \"17d2bb40-74e2-4894-a884-7018952bdf71\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc" Mar 12 20:56:53.788076 master-0 kubenswrapper[7484]: I0312 20:56:53.786705 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/17d2bb40-74e2-4894-a884-7018952bdf71-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-fnxjc\" (UID: \"17d2bb40-74e2-4894-a884-7018952bdf71\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc" Mar 12 20:56:53.788076 master-0 kubenswrapper[7484]: I0312 20:56:53.786756 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrm2z\" (UniqueName: \"kubernetes.io/projected/17d2bb40-74e2-4894-a884-7018952bdf71-kube-api-access-lrm2z\") pod \"cluster-baremetal-operator-5cdb4c5598-fnxjc\" (UID: \"17d2bb40-74e2-4894-a884-7018952bdf71\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc" Mar 12 20:56:53.788076 master-0 kubenswrapper[7484]: I0312 20:56:53.787863 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/17d2bb40-74e2-4894-a884-7018952bdf71-images\") pod \"cluster-baremetal-operator-5cdb4c5598-fnxjc\" (UID: \"17d2bb40-74e2-4894-a884-7018952bdf71\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc" Mar 12 20:56:53.789772 master-0 kubenswrapper[7484]: I0312 20:56:53.788274 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b71376ea-e248-48fc-b2c4-1de7236ddd31-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-r6rcq\" (UID: \"b71376ea-e248-48fc-b2c4-1de7236ddd31\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-r6rcq" Mar 12 20:56:53.789772 master-0 kubenswrapper[7484]: I0312 20:56:53.789112 7484 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b71376ea-e248-48fc-b2c4-1de7236ddd31-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-r6rcq\" (UID: \"b71376ea-e248-48fc-b2c4-1de7236ddd31\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-r6rcq" Mar 12 20:56:53.789772 master-0 kubenswrapper[7484]: I0312 20:56:53.789734 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ce9bbb5-37b5-4b43-aeb6-904bd0d86500-kube-api-access-rldvq" (OuterVolumeSpecName: "kube-api-access-rldvq") pod "2ce9bbb5-37b5-4b43-aeb6-904bd0d86500" (UID: "2ce9bbb5-37b5-4b43-aeb6-904bd0d86500"). InnerVolumeSpecName "kube-api-access-rldvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 20:56:53.791851 master-0 kubenswrapper[7484]: I0312 20:56:53.791322 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17d2bb40-74e2-4894-a884-7018952bdf71-config\") pod \"cluster-baremetal-operator-5cdb4c5598-fnxjc\" (UID: \"17d2bb40-74e2-4894-a884-7018952bdf71\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc" Mar 12 20:56:53.791851 master-0 kubenswrapper[7484]: I0312 20:56:53.791395 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlrzs\" (UniqueName: \"kubernetes.io/projected/b71376ea-e248-48fc-b2c4-1de7236ddd31-kube-api-access-nlrzs\") pod \"cluster-autoscaler-operator-69576476f7-r6rcq\" (UID: \"b71376ea-e248-48fc-b2c4-1de7236ddd31\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-r6rcq" Mar 12 20:56:53.791851 master-0 kubenswrapper[7484]: I0312 20:56:53.791479 7484 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ce9bbb5-37b5-4b43-aeb6-904bd0d86500-config\") on node \"master-0\" DevicePath \"\"" Mar 12 
20:56:53.791851 master-0 kubenswrapper[7484]: I0312 20:56:53.791492 7484 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2ce9bbb5-37b5-4b43-aeb6-904bd0d86500-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Mar 12 20:56:53.791851 master-0 kubenswrapper[7484]: I0312 20:56:53.791508 7484 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rldvq\" (UniqueName: \"kubernetes.io/projected/2ce9bbb5-37b5-4b43-aeb6-904bd0d86500-kube-api-access-rldvq\") on node \"master-0\" DevicePath \"\"" Mar 12 20:56:53.793323 master-0 kubenswrapper[7484]: I0312 20:56:53.793288 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ce9bbb5-37b5-4b43-aeb6-904bd0d86500-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "2ce9bbb5-37b5-4b43-aeb6-904bd0d86500" (UID: "2ce9bbb5-37b5-4b43-aeb6-904bd0d86500"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 20:56:53.793467 master-0 kubenswrapper[7484]: I0312 20:56:53.793428 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/17d2bb40-74e2-4894-a884-7018952bdf71-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-fnxjc\" (UID: \"17d2bb40-74e2-4894-a884-7018952bdf71\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc" Mar 12 20:56:53.793728 master-0 kubenswrapper[7484]: I0312 20:56:53.793699 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17d2bb40-74e2-4894-a884-7018952bdf71-config\") pod \"cluster-baremetal-operator-5cdb4c5598-fnxjc\" (UID: \"17d2bb40-74e2-4894-a884-7018952bdf71\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc" Mar 12 20:56:53.793962 master-0 kubenswrapper[7484]: I0312 20:56:53.793915 7484 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/17d2bb40-74e2-4894-a884-7018952bdf71-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-fnxjc\" (UID: \"17d2bb40-74e2-4894-a884-7018952bdf71\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc" Mar 12 20:56:53.796247 master-0 kubenswrapper[7484]: I0312 20:56:53.796146 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b71376ea-e248-48fc-b2c4-1de7236ddd31-cert\") pod \"cluster-autoscaler-operator-69576476f7-r6rcq\" (UID: \"b71376ea-e248-48fc-b2c4-1de7236ddd31\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-r6rcq" Mar 12 20:56:53.805035 master-0 kubenswrapper[7484]: I0312 20:56:53.804782 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrm2z\" (UniqueName: \"kubernetes.io/projected/17d2bb40-74e2-4894-a884-7018952bdf71-kube-api-access-lrm2z\") pod \"cluster-baremetal-operator-5cdb4c5598-fnxjc\" (UID: \"17d2bb40-74e2-4894-a884-7018952bdf71\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc" Mar 12 20:56:53.814439 master-0 kubenswrapper[7484]: I0312 20:56:53.814391 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlrzs\" (UniqueName: \"kubernetes.io/projected/b71376ea-e248-48fc-b2c4-1de7236ddd31-kube-api-access-nlrzs\") pod \"cluster-autoscaler-operator-69576476f7-r6rcq\" (UID: \"b71376ea-e248-48fc-b2c4-1de7236ddd31\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-r6rcq" Mar 12 20:56:53.897052 master-0 kubenswrapper[7484]: I0312 20:56:53.894956 7484 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2ce9bbb5-37b5-4b43-aeb6-904bd0d86500-machine-approver-tls\") on node \"master-0\" DevicePath \"\"" Mar 12 20:56:53.905910 master-0 
kubenswrapper[7484]: I0312 20:56:53.897732 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-8f89dfddd-lc7jk" Mar 12 20:56:53.926561 master-0 kubenswrapper[7484]: I0312 20:56:53.925984 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-j79ht" Mar 12 20:56:54.060318 master-0 kubenswrapper[7484]: I0312 20:56:54.050573 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-r6rcq" Mar 12 20:56:54.077992 master-0 kubenswrapper[7484]: I0312 20:56:54.074660 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc" Mar 12 20:56:54.180333 master-0 kubenswrapper[7484]: I0312 20:56:54.179570 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-ftxzs"] Mar 12 20:56:54.201911 master-0 kubenswrapper[7484]: I0312 20:56:54.201866 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wjpf9"] Mar 12 20:56:54.221369 master-0 kubenswrapper[7484]: I0312 20:56:54.221316 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-fdb5c78b5-7p8w8"] Mar 12 20:56:54.285863 master-0 kubenswrapper[7484]: I0312 20:56:54.285784 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-84bf6db4f9-sh67s"] Mar 12 20:56:54.354110 master-0 kubenswrapper[7484]: I0312 20:56:54.353805 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wjpf9" 
event={"ID":"32050f14-1939-41bf-a824-22016b90c189","Type":"ContainerStarted","Data":"0f3550a8aec9a486ca0cee3183a0d557f3a6f7dd69b026fe601996e8ee871591"} Mar 12 20:56:54.358451 master-0 kubenswrapper[7484]: I0312 20:56:54.358343 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-sh67s" event={"ID":"67e68ff0-f54d-4973-bbe7-ed43ce542bc0","Type":"ContainerStarted","Data":"f6412ec366e621f5d99b6ef5fdb5da3a73dfb0709a661b8764731c1f9e4f0f11"} Mar 12 20:56:54.364468 master-0 kubenswrapper[7484]: I0312 20:56:54.364078 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-ml8vb" event={"ID":"dc757324-bbc7-480c-8f16-eb454cfce5b7","Type":"ContainerStarted","Data":"c63f4468bd86aa103b95bce5d28a42287cd68ad855d4dfda425a6f11d1825653"} Mar 12 20:56:54.365478 master-0 kubenswrapper[7484]: I0312 20:56:54.365443 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-ftxzs" event={"ID":"7f3afe47-c537-420c-b5be-1cad612e119d","Type":"ContainerStarted","Data":"f32413943fd7e46b94ba71c016cbccc87f018a39f90dbf119089416f4d147bd9"} Mar 12 20:56:54.368715 master-0 kubenswrapper[7484]: I0312 20:56:54.368675 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-7p8w8" event={"ID":"508cb83e-6f25-4235-8c56-b25b762ebcad","Type":"ContainerStarted","Data":"dc9a8ab3dbf9f510346d66800b49bfb55e672501ce824087dcdec36983ec6646"} Mar 12 20:56:54.372180 master-0 kubenswrapper[7484]: I0312 20:56:54.371529 7484 generic.go:334] "Generic (PLEG): container finished" podID="2ce9bbb5-37b5-4b43-aeb6-904bd0d86500" containerID="cf5a4e3fdfaf098fde310a2e55edff9907f8a106e9bfb0ed3d90b986edaaa500" exitCode=0 Mar 12 20:56:54.372180 master-0 kubenswrapper[7484]: I0312 20:56:54.371563 7484 generic.go:334] "Generic (PLEG): 
container finished" podID="2ce9bbb5-37b5-4b43-aeb6-904bd0d86500" containerID="10c5515cb5ef67581a8254d958271ba2cb6a67cbb6ee1c3b3b7f00d6f3e32b8f" exitCode=0 Mar 12 20:56:54.372180 master-0 kubenswrapper[7484]: I0312 20:56:54.371587 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-57dhl" event={"ID":"2ce9bbb5-37b5-4b43-aeb6-904bd0d86500","Type":"ContainerDied","Data":"cf5a4e3fdfaf098fde310a2e55edff9907f8a106e9bfb0ed3d90b986edaaa500"} Mar 12 20:56:54.372180 master-0 kubenswrapper[7484]: I0312 20:56:54.371617 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-57dhl" event={"ID":"2ce9bbb5-37b5-4b43-aeb6-904bd0d86500","Type":"ContainerDied","Data":"10c5515cb5ef67581a8254d958271ba2cb6a67cbb6ee1c3b3b7f00d6f3e32b8f"} Mar 12 20:56:54.372180 master-0 kubenswrapper[7484]: I0312 20:56:54.371627 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-57dhl" event={"ID":"2ce9bbb5-37b5-4b43-aeb6-904bd0d86500","Type":"ContainerDied","Data":"c5cc276a7bfe32028ff8bc4b02aec1db55a15e86468a746b888701a3caedbd11"} Mar 12 20:56:54.372180 master-0 kubenswrapper[7484]: I0312 20:56:54.371646 7484 scope.go:117] "RemoveContainer" containerID="cf5a4e3fdfaf098fde310a2e55edff9907f8a106e9bfb0ed3d90b986edaaa500" Mar 12 20:56:54.372180 master-0 kubenswrapper[7484]: I0312 20:56:54.371657 7484 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-57dhl" Mar 12 20:56:54.400751 master-0 kubenswrapper[7484]: I0312 20:56:54.400696 7484 scope.go:117] "RemoveContainer" containerID="10c5515cb5ef67581a8254d958271ba2cb6a67cbb6ee1c3b3b7f00d6f3e32b8f" Mar 12 20:56:54.488985 master-0 kubenswrapper[7484]: I0312 20:56:54.488892 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-8f89dfddd-lc7jk"] Mar 12 20:56:54.494089 master-0 kubenswrapper[7484]: W0312 20:56:54.493365 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5d1e064_c12b_4c1d_b499_4e301ca8a8dc.slice/crio-46d0cbedd7c9d9c9334e86f38207707e87d2d8302b543614490d2bc6b93e5df4 WatchSource:0}: Error finding container 46d0cbedd7c9d9c9334e86f38207707e87d2d8302b543614490d2bc6b93e5df4: Status 404 returned error can't find the container with id 46d0cbedd7c9d9c9334e86f38207707e87d2d8302b543614490d2bc6b93e5df4 Mar 12 20:56:54.533234 master-0 kubenswrapper[7484]: I0312 20:56:54.533169 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-57dhl"] Mar 12 20:56:54.543969 master-0 kubenswrapper[7484]: I0312 20:56:54.543798 7484 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-57dhl"] Mar 12 20:56:54.544972 master-0 kubenswrapper[7484]: I0312 20:56:54.544864 7484 scope.go:117] "RemoveContainer" containerID="cf5a4e3fdfaf098fde310a2e55edff9907f8a106e9bfb0ed3d90b986edaaa500" Mar 12 20:56:54.545393 master-0 kubenswrapper[7484]: E0312 20:56:54.545315 7484 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf5a4e3fdfaf098fde310a2e55edff9907f8a106e9bfb0ed3d90b986edaaa500\": container with ID starting with cf5a4e3fdfaf098fde310a2e55edff9907f8a106e9bfb0ed3d90b986edaaa500 
not found: ID does not exist" containerID="cf5a4e3fdfaf098fde310a2e55edff9907f8a106e9bfb0ed3d90b986edaaa500" Mar 12 20:56:54.545393 master-0 kubenswrapper[7484]: I0312 20:56:54.545348 7484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf5a4e3fdfaf098fde310a2e55edff9907f8a106e9bfb0ed3d90b986edaaa500"} err="failed to get container status \"cf5a4e3fdfaf098fde310a2e55edff9907f8a106e9bfb0ed3d90b986edaaa500\": rpc error: code = NotFound desc = could not find container \"cf5a4e3fdfaf098fde310a2e55edff9907f8a106e9bfb0ed3d90b986edaaa500\": container with ID starting with cf5a4e3fdfaf098fde310a2e55edff9907f8a106e9bfb0ed3d90b986edaaa500 not found: ID does not exist" Mar 12 20:56:54.545393 master-0 kubenswrapper[7484]: I0312 20:56:54.545369 7484 scope.go:117] "RemoveContainer" containerID="10c5515cb5ef67581a8254d958271ba2cb6a67cbb6ee1c3b3b7f00d6f3e32b8f" Mar 12 20:56:54.546221 master-0 kubenswrapper[7484]: E0312 20:56:54.546162 7484 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10c5515cb5ef67581a8254d958271ba2cb6a67cbb6ee1c3b3b7f00d6f3e32b8f\": container with ID starting with 10c5515cb5ef67581a8254d958271ba2cb6a67cbb6ee1c3b3b7f00d6f3e32b8f not found: ID does not exist" containerID="10c5515cb5ef67581a8254d958271ba2cb6a67cbb6ee1c3b3b7f00d6f3e32b8f" Mar 12 20:56:54.546221 master-0 kubenswrapper[7484]: I0312 20:56:54.546188 7484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10c5515cb5ef67581a8254d958271ba2cb6a67cbb6ee1c3b3b7f00d6f3e32b8f"} err="failed to get container status \"10c5515cb5ef67581a8254d958271ba2cb6a67cbb6ee1c3b3b7f00d6f3e32b8f\": rpc error: code = NotFound desc = could not find container \"10c5515cb5ef67581a8254d958271ba2cb6a67cbb6ee1c3b3b7f00d6f3e32b8f\": container with ID starting with 10c5515cb5ef67581a8254d958271ba2cb6a67cbb6ee1c3b3b7f00d6f3e32b8f not found: ID does not exist" Mar 12 
20:56:54.546221 master-0 kubenswrapper[7484]: I0312 20:56:54.546202 7484 scope.go:117] "RemoveContainer" containerID="cf5a4e3fdfaf098fde310a2e55edff9907f8a106e9bfb0ed3d90b986edaaa500" Mar 12 20:56:54.547316 master-0 kubenswrapper[7484]: I0312 20:56:54.547141 7484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf5a4e3fdfaf098fde310a2e55edff9907f8a106e9bfb0ed3d90b986edaaa500"} err="failed to get container status \"cf5a4e3fdfaf098fde310a2e55edff9907f8a106e9bfb0ed3d90b986edaaa500\": rpc error: code = NotFound desc = could not find container \"cf5a4e3fdfaf098fde310a2e55edff9907f8a106e9bfb0ed3d90b986edaaa500\": container with ID starting with cf5a4e3fdfaf098fde310a2e55edff9907f8a106e9bfb0ed3d90b986edaaa500 not found: ID does not exist" Mar 12 20:56:54.547316 master-0 kubenswrapper[7484]: I0312 20:56:54.547172 7484 scope.go:117] "RemoveContainer" containerID="10c5515cb5ef67581a8254d958271ba2cb6a67cbb6ee1c3b3b7f00d6f3e32b8f" Mar 12 20:56:54.547465 master-0 kubenswrapper[7484]: I0312 20:56:54.547410 7484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10c5515cb5ef67581a8254d958271ba2cb6a67cbb6ee1c3b3b7f00d6f3e32b8f"} err="failed to get container status \"10c5515cb5ef67581a8254d958271ba2cb6a67cbb6ee1c3b3b7f00d6f3e32b8f\": rpc error: code = NotFound desc = could not find container \"10c5515cb5ef67581a8254d958271ba2cb6a67cbb6ee1c3b3b7f00d6f3e32b8f\": container with ID starting with 10c5515cb5ef67581a8254d958271ba2cb6a67cbb6ee1c3b3b7f00d6f3e32b8f not found: ID does not exist" Mar 12 20:56:54.566326 master-0 kubenswrapper[7484]: I0312 20:56:54.566181 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-j79ht"] Mar 12 20:56:54.590321 master-0 kubenswrapper[7484]: I0312 20:56:54.590268 7484 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-cluster-machine-approver/machine-approver-754bdc9f9d-hj9bb"] Mar 12 20:56:54.591176 master-0 kubenswrapper[7484]: E0312 20:56:54.590964 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ce9bbb5-37b5-4b43-aeb6-904bd0d86500" containerName="kube-rbac-proxy" Mar 12 20:56:54.591176 master-0 kubenswrapper[7484]: I0312 20:56:54.590998 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ce9bbb5-37b5-4b43-aeb6-904bd0d86500" containerName="kube-rbac-proxy" Mar 12 20:56:54.591176 master-0 kubenswrapper[7484]: E0312 20:56:54.591024 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ce9bbb5-37b5-4b43-aeb6-904bd0d86500" containerName="machine-approver-controller" Mar 12 20:56:54.591176 master-0 kubenswrapper[7484]: I0312 20:56:54.591030 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ce9bbb5-37b5-4b43-aeb6-904bd0d86500" containerName="machine-approver-controller" Mar 12 20:56:54.591176 master-0 kubenswrapper[7484]: I0312 20:56:54.591137 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ce9bbb5-37b5-4b43-aeb6-904bd0d86500" containerName="machine-approver-controller" Mar 12 20:56:54.591176 master-0 kubenswrapper[7484]: I0312 20:56:54.591148 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ce9bbb5-37b5-4b43-aeb6-904bd0d86500" containerName="kube-rbac-proxy" Mar 12 20:56:54.591750 master-0 kubenswrapper[7484]: I0312 20:56:54.591724 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-hj9bb" Mar 12 20:56:54.594605 master-0 kubenswrapper[7484]: I0312 20:56:54.594576 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 12 20:56:54.594874 master-0 kubenswrapper[7484]: I0312 20:56:54.594852 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-5j2qf" Mar 12 20:56:54.594995 master-0 kubenswrapper[7484]: I0312 20:56:54.594977 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 12 20:56:54.595104 master-0 kubenswrapper[7484]: I0312 20:56:54.595085 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 12 20:56:54.595215 master-0 kubenswrapper[7484]: I0312 20:56:54.595197 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 12 20:56:54.595448 master-0 kubenswrapper[7484]: I0312 20:56:54.595411 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 12 20:56:54.635234 master-0 kubenswrapper[7484]: I0312 20:56:54.634789 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc"] Mar 12 20:56:54.653181 master-0 kubenswrapper[7484]: I0312 20:56:54.653126 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-69576476f7-r6rcq"] Mar 12 20:56:54.666925 master-0 kubenswrapper[7484]: W0312 20:56:54.666118 7484 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb71376ea_e248_48fc_b2c4_1de7236ddd31.slice/crio-f4b0dd69b886e5f463ddbfe21af30a9ab10c6d6220d953b37096923c42ae0c57 WatchSource:0}: Error finding container f4b0dd69b886e5f463ddbfe21af30a9ab10c6d6220d953b37096923c42ae0c57: Status 404 returned error can't find the container with id f4b0dd69b886e5f463ddbfe21af30a9ab10c6d6220d953b37096923c42ae0c57 Mar 12 20:56:54.709866 master-0 kubenswrapper[7484]: I0312 20:56:54.705723 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/400a13b5-c489-4beb-af33-94e635b86148-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-hj9bb\" (UID: \"400a13b5-c489-4beb-af33-94e635b86148\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-hj9bb" Mar 12 20:56:54.709866 master-0 kubenswrapper[7484]: I0312 20:56:54.705841 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/400a13b5-c489-4beb-af33-94e635b86148-config\") pod \"machine-approver-754bdc9f9d-hj9bb\" (UID: \"400a13b5-c489-4beb-af33-94e635b86148\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-hj9bb" Mar 12 20:56:54.709866 master-0 kubenswrapper[7484]: I0312 20:56:54.705930 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/400a13b5-c489-4beb-af33-94e635b86148-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-hj9bb\" (UID: \"400a13b5-c489-4beb-af33-94e635b86148\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-hj9bb" Mar 12 20:56:54.709866 master-0 kubenswrapper[7484]: I0312 20:56:54.705997 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt627\" (UniqueName: 
\"kubernetes.io/projected/400a13b5-c489-4beb-af33-94e635b86148-kube-api-access-vt627\") pod \"machine-approver-754bdc9f9d-hj9bb\" (UID: \"400a13b5-c489-4beb-af33-94e635b86148\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-hj9bb" Mar 12 20:56:54.807495 master-0 kubenswrapper[7484]: I0312 20:56:54.807345 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/400a13b5-c489-4beb-af33-94e635b86148-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-hj9bb\" (UID: \"400a13b5-c489-4beb-af33-94e635b86148\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-hj9bb" Mar 12 20:56:54.807495 master-0 kubenswrapper[7484]: I0312 20:56:54.807397 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/400a13b5-c489-4beb-af33-94e635b86148-config\") pod \"machine-approver-754bdc9f9d-hj9bb\" (UID: \"400a13b5-c489-4beb-af33-94e635b86148\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-hj9bb" Mar 12 20:56:54.807495 master-0 kubenswrapper[7484]: I0312 20:56:54.807436 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/400a13b5-c489-4beb-af33-94e635b86148-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-hj9bb\" (UID: \"400a13b5-c489-4beb-af33-94e635b86148\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-hj9bb" Mar 12 20:56:54.809478 master-0 kubenswrapper[7484]: I0312 20:56:54.807952 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt627\" (UniqueName: \"kubernetes.io/projected/400a13b5-c489-4beb-af33-94e635b86148-kube-api-access-vt627\") pod \"machine-approver-754bdc9f9d-hj9bb\" (UID: \"400a13b5-c489-4beb-af33-94e635b86148\") " 
pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-hj9bb" Mar 12 20:56:54.809478 master-0 kubenswrapper[7484]: I0312 20:56:54.808070 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/400a13b5-c489-4beb-af33-94e635b86148-config\") pod \"machine-approver-754bdc9f9d-hj9bb\" (UID: \"400a13b5-c489-4beb-af33-94e635b86148\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-hj9bb" Mar 12 20:56:54.809478 master-0 kubenswrapper[7484]: I0312 20:56:54.808624 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/400a13b5-c489-4beb-af33-94e635b86148-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-hj9bb\" (UID: \"400a13b5-c489-4beb-af33-94e635b86148\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-hj9bb" Mar 12 20:56:54.812860 master-0 kubenswrapper[7484]: I0312 20:56:54.812428 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/400a13b5-c489-4beb-af33-94e635b86148-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-hj9bb\" (UID: \"400a13b5-c489-4beb-af33-94e635b86148\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-hj9bb" Mar 12 20:56:54.830221 master-0 kubenswrapper[7484]: I0312 20:56:54.828066 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt627\" (UniqueName: \"kubernetes.io/projected/400a13b5-c489-4beb-af33-94e635b86148-kube-api-access-vt627\") pod \"machine-approver-754bdc9f9d-hj9bb\" (UID: \"400a13b5-c489-4beb-af33-94e635b86148\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-hj9bb" Mar 12 20:56:54.914698 master-0 kubenswrapper[7484]: I0312 20:56:54.914636 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-hj9bb" Mar 12 20:56:54.945223 master-0 kubenswrapper[7484]: W0312 20:56:54.945166 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod400a13b5_c489_4beb_af33_94e635b86148.slice/crio-12fa39eea6eac82ab52e3e2f0cc03926c83f1f0666197d18963fd6a4f403e0a3 WatchSource:0}: Error finding container 12fa39eea6eac82ab52e3e2f0cc03926c83f1f0666197d18963fd6a4f403e0a3: Status 404 returned error can't find the container with id 12fa39eea6eac82ab52e3e2f0cc03926c83f1f0666197d18963fd6a4f403e0a3 Mar 12 20:56:55.404200 master-0 kubenswrapper[7484]: I0312 20:56:55.404058 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc" event={"ID":"17d2bb40-74e2-4894-a884-7018952bdf71","Type":"ContainerStarted","Data":"64bbce37fffa0363fa6b0cb6661a450dd4f178dfa993fa7e87ca9427175696e1"} Mar 12 20:56:55.407482 master-0 kubenswrapper[7484]: I0312 20:56:55.407446 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-hj9bb" event={"ID":"400a13b5-c489-4beb-af33-94e635b86148","Type":"ContainerStarted","Data":"6331b392c2de83f3c4853f7963330fedc0e08a76d40157da0ce279bbca4ea061"} Mar 12 20:56:55.407482 master-0 kubenswrapper[7484]: I0312 20:56:55.407472 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-hj9bb" event={"ID":"400a13b5-c489-4beb-af33-94e635b86148","Type":"ContainerStarted","Data":"12fa39eea6eac82ab52e3e2f0cc03926c83f1f0666197d18963fd6a4f403e0a3"} Mar 12 20:56:55.409673 master-0 kubenswrapper[7484]: I0312 20:56:55.409643 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-r6rcq" 
event={"ID":"b71376ea-e248-48fc-b2c4-1de7236ddd31","Type":"ContainerStarted","Data":"5a330499467b99ee6ef0a4fe452c8160f9520b895622c1ac2e3d361e8e4227ae"} Mar 12 20:56:55.409673 master-0 kubenswrapper[7484]: I0312 20:56:55.409671 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-r6rcq" event={"ID":"b71376ea-e248-48fc-b2c4-1de7236ddd31","Type":"ContainerStarted","Data":"f4b0dd69b886e5f463ddbfe21af30a9ab10c6d6220d953b37096923c42ae0c57"} Mar 12 20:56:55.415123 master-0 kubenswrapper[7484]: I0312 20:56:55.414885 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-sh67s" event={"ID":"67e68ff0-f54d-4973-bbe7-ed43ce542bc0","Type":"ContainerStarted","Data":"55d4a33f648e96a2eb1e178611b81dc70f5fd1f0913c03a0af24bdc85fdc54c1"} Mar 12 20:56:55.419034 master-0 kubenswrapper[7484]: I0312 20:56:55.418994 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-j79ht" event={"ID":"05fd1378-3935-4caf-96c5-17cf7e29417f","Type":"ContainerStarted","Data":"7d5c22ccf9d50761e4f0ddec1f67acdaa67bfb5cc6a0c548d5556afa0534fe8a"} Mar 12 20:56:55.419034 master-0 kubenswrapper[7484]: I0312 20:56:55.419035 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-j79ht" event={"ID":"05fd1378-3935-4caf-96c5-17cf7e29417f","Type":"ContainerStarted","Data":"a8a8fe5d5bb4822dd7daf58bc0b49057e47a6aa6fcd9e303e14168c98652cb42"} Mar 12 20:56:55.421782 master-0 kubenswrapper[7484]: I0312 20:56:55.421727 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-lc7jk" event={"ID":"a5d1e064-c12b-4c1d-b499-4e301ca8a8dc","Type":"ContainerStarted","Data":"46d0cbedd7c9d9c9334e86f38207707e87d2d8302b543614490d2bc6b93e5df4"} Mar 12 20:56:55.424055 master-0 kubenswrapper[7484]: I0312 
20:56:55.424018 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-7p8w8" event={"ID":"508cb83e-6f25-4235-8c56-b25b762ebcad","Type":"ContainerStarted","Data":"f06ce6c09f98508a77d44a30d404bab8683cf157a2782c0c532af4eaa630089e"} Mar 12 20:56:55.424055 master-0 kubenswrapper[7484]: I0312 20:56:55.424048 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-7p8w8" event={"ID":"508cb83e-6f25-4235-8c56-b25b762ebcad","Type":"ContainerStarted","Data":"b9da34034a4775625020d205d9436694d65b54d0723190096309ce81aab32e93"} Mar 12 20:56:55.450146 master-0 kubenswrapper[7484]: I0312 20:56:55.450044 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-7p8w8" podStartSLOduration=2.449388886 podStartE2EDuration="2.449388886s" podCreationTimestamp="2026-03-12 20:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 20:56:55.44632122 +0000 UTC m=+427.931590012" watchObservedRunningTime="2026-03-12 20:56:55.449388886 +0000 UTC m=+427.934657678" Mar 12 20:56:55.745193 master-0 kubenswrapper[7484]: I0312 20:56:55.745072 7484 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ce9bbb5-37b5-4b43-aeb6-904bd0d86500" path="/var/lib/kubelet/pods/2ce9bbb5-37b5-4b43-aeb6-904bd0d86500/volumes" Mar 12 20:56:56.433158 master-0 kubenswrapper[7484]: I0312 20:56:56.433037 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-hj9bb" event={"ID":"400a13b5-c489-4beb-af33-94e635b86148","Type":"ContainerStarted","Data":"0a5780f6022da4e29888a4248f2002849d195cb3f0bde73988863a5f5ecbe533"} Mar 12 20:56:56.453951 master-0 kubenswrapper[7484]: I0312 20:56:56.453873 7484 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-hj9bb" podStartSLOduration=2.453853606 podStartE2EDuration="2.453853606s" podCreationTimestamp="2026-03-12 20:56:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 20:56:56.45360311 +0000 UTC m=+428.938871912" watchObservedRunningTime="2026-03-12 20:56:56.453853606 +0000 UTC m=+428.939122408" Mar 12 20:56:57.269502 master-0 kubenswrapper[7484]: I0312 20:56:57.269427 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-ml8vb"] Mar 12 20:56:57.921963 master-0 kubenswrapper[7484]: I0312 20:56:57.921904 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-n5wh9"] Mar 12 20:56:57.923622 master-0 kubenswrapper[7484]: I0312 20:56:57.923588 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-n5wh9" Mar 12 20:56:57.925892 master-0 kubenswrapper[7484]: I0312 20:56:57.925847 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-h7jv4" Mar 12 20:56:57.927096 master-0 kubenswrapper[7484]: I0312 20:56:57.927053 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 12 20:56:58.064291 master-0 kubenswrapper[7484]: I0312 20:56:58.064237 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d9152bd6-f203-469b-97fa-db274e43b40c-rootfs\") pod \"machine-config-daemon-n5wh9\" (UID: \"d9152bd6-f203-469b-97fa-db274e43b40c\") " pod="openshift-machine-config-operator/machine-config-daemon-n5wh9" Mar 12 20:56:58.064515 master-0 kubenswrapper[7484]: I0312 20:56:58.064321 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d9152bd6-f203-469b-97fa-db274e43b40c-mcd-auth-proxy-config\") pod \"machine-config-daemon-n5wh9\" (UID: \"d9152bd6-f203-469b-97fa-db274e43b40c\") " pod="openshift-machine-config-operator/machine-config-daemon-n5wh9" Mar 12 20:56:58.064515 master-0 kubenswrapper[7484]: I0312 20:56:58.064367 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d9152bd6-f203-469b-97fa-db274e43b40c-proxy-tls\") pod \"machine-config-daemon-n5wh9\" (UID: \"d9152bd6-f203-469b-97fa-db274e43b40c\") " pod="openshift-machine-config-operator/machine-config-daemon-n5wh9" Mar 12 20:56:58.064515 master-0 kubenswrapper[7484]: I0312 20:56:58.064402 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-q9txs\" (UniqueName: \"kubernetes.io/projected/d9152bd6-f203-469b-97fa-db274e43b40c-kube-api-access-q9txs\") pod \"machine-config-daemon-n5wh9\" (UID: \"d9152bd6-f203-469b-97fa-db274e43b40c\") " pod="openshift-machine-config-operator/machine-config-daemon-n5wh9" Mar 12 20:56:58.166109 master-0 kubenswrapper[7484]: I0312 20:56:58.166044 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d9152bd6-f203-469b-97fa-db274e43b40c-rootfs\") pod \"machine-config-daemon-n5wh9\" (UID: \"d9152bd6-f203-469b-97fa-db274e43b40c\") " pod="openshift-machine-config-operator/machine-config-daemon-n5wh9" Mar 12 20:56:58.166337 master-0 kubenswrapper[7484]: I0312 20:56:58.166125 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d9152bd6-f203-469b-97fa-db274e43b40c-mcd-auth-proxy-config\") pod \"machine-config-daemon-n5wh9\" (UID: \"d9152bd6-f203-469b-97fa-db274e43b40c\") " pod="openshift-machine-config-operator/machine-config-daemon-n5wh9" Mar 12 20:56:58.166337 master-0 kubenswrapper[7484]: I0312 20:56:58.166181 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d9152bd6-f203-469b-97fa-db274e43b40c-rootfs\") pod \"machine-config-daemon-n5wh9\" (UID: \"d9152bd6-f203-469b-97fa-db274e43b40c\") " pod="openshift-machine-config-operator/machine-config-daemon-n5wh9" Mar 12 20:56:58.166337 master-0 kubenswrapper[7484]: I0312 20:56:58.166181 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d9152bd6-f203-469b-97fa-db274e43b40c-proxy-tls\") pod \"machine-config-daemon-n5wh9\" (UID: \"d9152bd6-f203-469b-97fa-db274e43b40c\") " pod="openshift-machine-config-operator/machine-config-daemon-n5wh9" Mar 12 20:56:58.166468 master-0 kubenswrapper[7484]: I0312 
20:56:58.166431 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9txs\" (UniqueName: \"kubernetes.io/projected/d9152bd6-f203-469b-97fa-db274e43b40c-kube-api-access-q9txs\") pod \"machine-config-daemon-n5wh9\" (UID: \"d9152bd6-f203-469b-97fa-db274e43b40c\") " pod="openshift-machine-config-operator/machine-config-daemon-n5wh9" Mar 12 20:56:58.167231 master-0 kubenswrapper[7484]: I0312 20:56:58.167201 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d9152bd6-f203-469b-97fa-db274e43b40c-mcd-auth-proxy-config\") pod \"machine-config-daemon-n5wh9\" (UID: \"d9152bd6-f203-469b-97fa-db274e43b40c\") " pod="openshift-machine-config-operator/machine-config-daemon-n5wh9" Mar 12 20:56:58.170847 master-0 kubenswrapper[7484]: I0312 20:56:58.170275 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d9152bd6-f203-469b-97fa-db274e43b40c-proxy-tls\") pod \"machine-config-daemon-n5wh9\" (UID: \"d9152bd6-f203-469b-97fa-db274e43b40c\") " pod="openshift-machine-config-operator/machine-config-daemon-n5wh9" Mar 12 20:56:58.184538 master-0 kubenswrapper[7484]: I0312 20:56:58.184421 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9txs\" (UniqueName: \"kubernetes.io/projected/d9152bd6-f203-469b-97fa-db274e43b40c-kube-api-access-q9txs\") pod \"machine-config-daemon-n5wh9\" (UID: \"d9152bd6-f203-469b-97fa-db274e43b40c\") " pod="openshift-machine-config-operator/machine-config-daemon-n5wh9" Mar 12 20:56:58.257417 master-0 kubenswrapper[7484]: I0312 20:56:58.257369 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-n5wh9" Mar 12 20:57:02.753147 master-0 kubenswrapper[7484]: I0312 20:57:02.753083 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 12 20:57:04.750289 master-0 kubenswrapper[7484]: W0312 20:57:04.750201 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9152bd6_f203_469b_97fa_db274e43b40c.slice/crio-d7af2bce33483a4223279822e6e5d573080c8f741586108efbaab14ea100783b WatchSource:0}: Error finding container d7af2bce33483a4223279822e6e5d573080c8f741586108efbaab14ea100783b: Status 404 returned error can't find the container with id d7af2bce33483a4223279822e6e5d573080c8f741586108efbaab14ea100783b Mar 12 20:57:05.488788 master-0 kubenswrapper[7484]: I0312 20:57:05.488607 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-r6rcq" event={"ID":"b71376ea-e248-48fc-b2c4-1de7236ddd31","Type":"ContainerStarted","Data":"1174e3de7390f133d9714b1c4e07a2aef601c6b39a42d38f1fea541e106e1fb1"} Mar 12 20:57:05.497070 master-0 kubenswrapper[7484]: I0312 20:57:05.496385 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-n5wh9" event={"ID":"d9152bd6-f203-469b-97fa-db274e43b40c","Type":"ContainerStarted","Data":"e00cce717a06fbca7ed63b2c89233d3bb567483a83cea5d0ca7a8e7f29eb5a52"} Mar 12 20:57:05.497070 master-0 kubenswrapper[7484]: I0312 20:57:05.496438 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-n5wh9" event={"ID":"d9152bd6-f203-469b-97fa-db274e43b40c","Type":"ContainerStarted","Data":"f3f95a1e8c3712942d957d7cb410e2b9715ea8e446d0a38a6bfe58e1dd3e0711"} Mar 12 20:57:05.497070 master-0 kubenswrapper[7484]: I0312 20:57:05.496450 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-n5wh9" event={"ID":"d9152bd6-f203-469b-97fa-db274e43b40c","Type":"ContainerStarted","Data":"d7af2bce33483a4223279822e6e5d573080c8f741586108efbaab14ea100783b"} Mar 12 20:57:05.505658 master-0 kubenswrapper[7484]: I0312 20:57:05.505470 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wjpf9" event={"ID":"32050f14-1939-41bf-a824-22016b90c189","Type":"ContainerStarted","Data":"c632472ff79d1f3dfd4740ca411ea22df13cfa76649ffc8d7077b3baf071c089"} Mar 12 20:57:05.505658 master-0 kubenswrapper[7484]: I0312 20:57:05.505507 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wjpf9" event={"ID":"32050f14-1939-41bf-a824-22016b90c189","Type":"ContainerStarted","Data":"69c369c7fadebfc86997960b2d1cb2c5d2240a26bfb69af5423c5f45182fd2bd"} Mar 12 20:57:05.513381 master-0 kubenswrapper[7484]: I0312 20:57:05.513309 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-r6rcq" podStartSLOduration=2.628671947 podStartE2EDuration="12.513291019s" podCreationTimestamp="2026-03-12 20:56:53 +0000 UTC" firstStartedPulling="2026-03-12 20:56:54.833399346 +0000 UTC m=+427.318668148" lastFinishedPulling="2026-03-12 20:57:04.718018418 +0000 UTC m=+437.203287220" observedRunningTime="2026-03-12 20:57:05.511536086 +0000 UTC m=+437.996804888" watchObservedRunningTime="2026-03-12 20:57:05.513291019 +0000 UTC m=+437.998559821" Mar 12 20:57:05.515069 master-0 kubenswrapper[7484]: I0312 20:57:05.514499 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-sh67s" event={"ID":"67e68ff0-f54d-4973-bbe7-ed43ce542bc0","Type":"ContainerStarted","Data":"b7d1be82f9f49361682b3eacda43c7c489bc2b5e8762684eea2266a906f1e97a"} Mar 12 20:57:05.516715 master-0 
kubenswrapper[7484]: I0312 20:57:05.516651 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc" event={"ID":"17d2bb40-74e2-4894-a884-7018952bdf71","Type":"ContainerStarted","Data":"893f094f6286000fe2de79668d2072ec0492cc7cb88fdec2016afe30f90f76e5"} Mar 12 20:57:05.516715 master-0 kubenswrapper[7484]: I0312 20:57:05.516682 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc" event={"ID":"17d2bb40-74e2-4894-a884-7018952bdf71","Type":"ContainerStarted","Data":"6dc411727752ae888d72d927bcde06522ded330928aadabe0e4e42b673281367"} Mar 12 20:57:05.518519 master-0 kubenswrapper[7484]: I0312 20:57:05.518451 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-j79ht" event={"ID":"05fd1378-3935-4caf-96c5-17cf7e29417f","Type":"ContainerStarted","Data":"ee4213846b968113124aa21cee1c9002d94a33aa3b6b84d6f06541c14f04be97"} Mar 12 20:57:05.520679 master-0 kubenswrapper[7484]: I0312 20:57:05.520628 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-ml8vb" event={"ID":"dc757324-bbc7-480c-8f16-eb454cfce5b7","Type":"ContainerStarted","Data":"b0982d9c490e0d4f03bb14267018977e3cf1434a3e3152b5870ec27a41ce753f"} Mar 12 20:57:05.524172 master-0 kubenswrapper[7484]: I0312 20:57:05.523180 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-ftxzs" event={"ID":"7f3afe47-c537-420c-b5be-1cad612e119d","Type":"ContainerStarted","Data":"36e67678697aff60b4f84c6384733c369857b33eb259f71b1dbb059fc06204fb"} Mar 12 20:57:05.528351 master-0 kubenswrapper[7484]: I0312 20:57:05.527941 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-lc7jk" 
event={"ID":"a5d1e064-c12b-4c1d-b499-4e301ca8a8dc","Type":"ContainerStarted","Data":"c17ad259a622e99ca36cca18286b94324be5b48db26b185444e6a0c5b69ee482"} Mar 12 20:57:05.570250 master-0 kubenswrapper[7484]: I0312 20:57:05.570148 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=3.5701268109999997 podStartE2EDuration="3.570126811s" podCreationTimestamp="2026-03-12 20:57:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 20:57:05.568316005 +0000 UTC m=+438.053584817" watchObservedRunningTime="2026-03-12 20:57:05.570126811 +0000 UTC m=+438.055395613" Mar 12 20:57:05.604880 master-0 kubenswrapper[7484]: I0312 20:57:05.604737 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-n5wh9" podStartSLOduration=8.604717773 podStartE2EDuration="8.604717773s" podCreationTimestamp="2026-03-12 20:56:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 20:57:05.604445037 +0000 UTC m=+438.089713869" watchObservedRunningTime="2026-03-12 20:57:05.604717773 +0000 UTC m=+438.089986585" Mar 12 20:57:05.636755 master-0 kubenswrapper[7484]: I0312 20:57:05.636665 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc" podStartSLOduration=2.555842702 podStartE2EDuration="12.636641731s" podCreationTimestamp="2026-03-12 20:56:53 +0000 UTC" firstStartedPulling="2026-03-12 20:56:54.633695782 +0000 UTC m=+427.118964584" lastFinishedPulling="2026-03-12 20:57:04.714494811 +0000 UTC m=+437.199763613" observedRunningTime="2026-03-12 20:57:05.636400155 +0000 UTC m=+438.121668967" watchObservedRunningTime="2026-03-12 20:57:05.636641731 +0000 UTC m=+438.121910533" Mar 
12 20:57:05.669091 master-0 kubenswrapper[7484]: I0312 20:57:05.668074 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wjpf9" podStartSLOduration=2.26013618 podStartE2EDuration="12.668054796s" podCreationTimestamp="2026-03-12 20:56:53 +0000 UTC" firstStartedPulling="2026-03-12 20:56:54.306580395 +0000 UTC m=+426.791849197" lastFinishedPulling="2026-03-12 20:57:04.714499011 +0000 UTC m=+437.199767813" observedRunningTime="2026-03-12 20:57:05.667317017 +0000 UTC m=+438.152585819" watchObservedRunningTime="2026-03-12 20:57:05.668054796 +0000 UTC m=+438.153323598" Mar 12 20:57:05.689944 master-0 kubenswrapper[7484]: I0312 20:57:05.689427 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-8f89dfddd-lc7jk" podStartSLOduration=2.467778109 podStartE2EDuration="12.689408152s" podCreationTimestamp="2026-03-12 20:56:53 +0000 UTC" firstStartedPulling="2026-03-12 20:56:54.502759272 +0000 UTC m=+426.988028074" lastFinishedPulling="2026-03-12 20:57:04.724389315 +0000 UTC m=+437.209658117" observedRunningTime="2026-03-12 20:57:05.6885308 +0000 UTC m=+438.173799592" watchObservedRunningTime="2026-03-12 20:57:05.689408152 +0000 UTC m=+438.174676954" Mar 12 20:57:05.721786 master-0 kubenswrapper[7484]: I0312 20:57:05.721687 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-ftxzs" podStartSLOduration=2.165858834 podStartE2EDuration="12.721663107s" podCreationTimestamp="2026-03-12 20:56:53 +0000 UTC" firstStartedPulling="2026-03-12 20:56:54.184827202 +0000 UTC m=+426.670096004" lastFinishedPulling="2026-03-12 20:57:04.740631475 +0000 UTC m=+437.225900277" observedRunningTime="2026-03-12 20:57:05.718460069 +0000 UTC m=+438.203728871" watchObservedRunningTime="2026-03-12 20:57:05.721663107 +0000 UTC m=+438.206931909" Mar 12 
20:57:05.751918 master-0 kubenswrapper[7484]: I0312 20:57:05.751053 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-sh67s" podStartSLOduration=2.5384731130000002 podStartE2EDuration="12.751026442s" podCreationTimestamp="2026-03-12 20:56:53 +0000 UTC" firstStartedPulling="2026-03-12 20:56:54.615302438 +0000 UTC m=+427.100571230" lastFinishedPulling="2026-03-12 20:57:04.827855737 +0000 UTC m=+437.313124559" observedRunningTime="2026-03-12 20:57:05.743033944 +0000 UTC m=+438.228302746" watchObservedRunningTime="2026-03-12 20:57:05.751026442 +0000 UTC m=+438.236295234" Mar 12 20:57:05.768050 master-0 kubenswrapper[7484]: I0312 20:57:05.765950 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-j79ht" podStartSLOduration=2.687716574 podStartE2EDuration="12.76593368s" podCreationTimestamp="2026-03-12 20:56:53 +0000 UTC" firstStartedPulling="2026-03-12 20:56:54.73830516 +0000 UTC m=+427.223573962" lastFinishedPulling="2026-03-12 20:57:04.816522246 +0000 UTC m=+437.301791068" observedRunningTime="2026-03-12 20:57:05.763858068 +0000 UTC m=+438.249126890" watchObservedRunningTime="2026-03-12 20:57:05.76593368 +0000 UTC m=+438.251202472" Mar 12 20:57:06.536907 master-0 kubenswrapper[7484]: I0312 20:57:06.536631 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-ml8vb" event={"ID":"dc757324-bbc7-480c-8f16-eb454cfce5b7","Type":"ContainerStarted","Data":"46080b13bef182194efd2b2d3608506ec7883ec4491083fb69b6225e99e74e0d"} Mar 12 20:57:06.536907 master-0 kubenswrapper[7484]: I0312 20:57:06.536690 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-ml8vb" 
event={"ID":"dc757324-bbc7-480c-8f16-eb454cfce5b7","Type":"ContainerStarted","Data":"1430b04037ef547f317ef717574f05cb1c31ebbd55b6458ad724305d19dccf48"} Mar 12 20:57:06.537525 master-0 kubenswrapper[7484]: I0312 20:57:06.537455 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-ml8vb" podUID="dc757324-bbc7-480c-8f16-eb454cfce5b7" containerName="cluster-cloud-controller-manager" containerID="cri-o://b0982d9c490e0d4f03bb14267018977e3cf1434a3e3152b5870ec27a41ce753f" gracePeriod=30 Mar 12 20:57:06.537915 master-0 kubenswrapper[7484]: I0312 20:57:06.537824 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-ml8vb" podUID="dc757324-bbc7-480c-8f16-eb454cfce5b7" containerName="kube-rbac-proxy" containerID="cri-o://46080b13bef182194efd2b2d3608506ec7883ec4491083fb69b6225e99e74e0d" gracePeriod=30 Mar 12 20:57:06.537915 master-0 kubenswrapper[7484]: I0312 20:57:06.537874 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-ml8vb" podUID="dc757324-bbc7-480c-8f16-eb454cfce5b7" containerName="config-sync-controllers" containerID="cri-o://1430b04037ef547f317ef717574f05cb1c31ebbd55b6458ad724305d19dccf48" gracePeriod=30 Mar 12 20:57:07.300852 master-0 kubenswrapper[7484]: I0312 20:57:07.300379 7484 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-ml8vb" Mar 12 20:57:07.411086 master-0 kubenswrapper[7484]: I0312 20:57:07.411017 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dc757324-bbc7-480c-8f16-eb454cfce5b7-auth-proxy-config\") pod \"dc757324-bbc7-480c-8f16-eb454cfce5b7\" (UID: \"dc757324-bbc7-480c-8f16-eb454cfce5b7\") " Mar 12 20:57:07.412085 master-0 kubenswrapper[7484]: I0312 20:57:07.411132 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc757324-bbc7-480c-8f16-eb454cfce5b7-cloud-controller-manager-operator-tls\") pod \"dc757324-bbc7-480c-8f16-eb454cfce5b7\" (UID: \"dc757324-bbc7-480c-8f16-eb454cfce5b7\") " Mar 12 20:57:07.412085 master-0 kubenswrapper[7484]: I0312 20:57:07.411178 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr8xl\" (UniqueName: \"kubernetes.io/projected/dc757324-bbc7-480c-8f16-eb454cfce5b7-kube-api-access-mr8xl\") pod \"dc757324-bbc7-480c-8f16-eb454cfce5b7\" (UID: \"dc757324-bbc7-480c-8f16-eb454cfce5b7\") " Mar 12 20:57:07.412085 master-0 kubenswrapper[7484]: I0312 20:57:07.411221 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/dc757324-bbc7-480c-8f16-eb454cfce5b7-host-etc-kube\") pod \"dc757324-bbc7-480c-8f16-eb454cfce5b7\" (UID: \"dc757324-bbc7-480c-8f16-eb454cfce5b7\") " Mar 12 20:57:07.412085 master-0 kubenswrapper[7484]: I0312 20:57:07.411258 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dc757324-bbc7-480c-8f16-eb454cfce5b7-images\") pod \"dc757324-bbc7-480c-8f16-eb454cfce5b7\" (UID: 
\"dc757324-bbc7-480c-8f16-eb454cfce5b7\") " Mar 12 20:57:07.412085 master-0 kubenswrapper[7484]: I0312 20:57:07.411315 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc757324-bbc7-480c-8f16-eb454cfce5b7-host-etc-kube" (OuterVolumeSpecName: "host-etc-kube") pod "dc757324-bbc7-480c-8f16-eb454cfce5b7" (UID: "dc757324-bbc7-480c-8f16-eb454cfce5b7"). InnerVolumeSpecName "host-etc-kube". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 20:57:07.412085 master-0 kubenswrapper[7484]: I0312 20:57:07.411494 7484 reconciler_common.go:293] "Volume detached for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/dc757324-bbc7-480c-8f16-eb454cfce5b7-host-etc-kube\") on node \"master-0\" DevicePath \"\"" Mar 12 20:57:07.412085 master-0 kubenswrapper[7484]: I0312 20:57:07.411951 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc757324-bbc7-480c-8f16-eb454cfce5b7-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "dc757324-bbc7-480c-8f16-eb454cfce5b7" (UID: "dc757324-bbc7-480c-8f16-eb454cfce5b7"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 20:57:07.412488 master-0 kubenswrapper[7484]: I0312 20:57:07.412118 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc757324-bbc7-480c-8f16-eb454cfce5b7-images" (OuterVolumeSpecName: "images") pod "dc757324-bbc7-480c-8f16-eb454cfce5b7" (UID: "dc757324-bbc7-480c-8f16-eb454cfce5b7"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 20:57:07.414261 master-0 kubenswrapper[7484]: I0312 20:57:07.414206 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc757324-bbc7-480c-8f16-eb454cfce5b7-cloud-controller-manager-operator-tls" (OuterVolumeSpecName: "cloud-controller-manager-operator-tls") pod "dc757324-bbc7-480c-8f16-eb454cfce5b7" (UID: "dc757324-bbc7-480c-8f16-eb454cfce5b7"). InnerVolumeSpecName "cloud-controller-manager-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 20:57:07.415228 master-0 kubenswrapper[7484]: I0312 20:57:07.415179 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc757324-bbc7-480c-8f16-eb454cfce5b7-kube-api-access-mr8xl" (OuterVolumeSpecName: "kube-api-access-mr8xl") pod "dc757324-bbc7-480c-8f16-eb454cfce5b7" (UID: "dc757324-bbc7-480c-8f16-eb454cfce5b7"). InnerVolumeSpecName "kube-api-access-mr8xl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 20:57:07.512361 master-0 kubenswrapper[7484]: I0312 20:57:07.512207 7484 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dc757324-bbc7-480c-8f16-eb454cfce5b7-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Mar 12 20:57:07.512361 master-0 kubenswrapper[7484]: I0312 20:57:07.512259 7484 reconciler_common.go:293] "Volume detached for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc757324-bbc7-480c-8f16-eb454cfce5b7-cloud-controller-manager-operator-tls\") on node \"master-0\" DevicePath \"\"" Mar 12 20:57:07.512361 master-0 kubenswrapper[7484]: I0312 20:57:07.512273 7484 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mr8xl\" (UniqueName: \"kubernetes.io/projected/dc757324-bbc7-480c-8f16-eb454cfce5b7-kube-api-access-mr8xl\") on node \"master-0\" DevicePath \"\"" Mar 12 20:57:07.512361 master-0 
kubenswrapper[7484]: I0312 20:57:07.512285 7484 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dc757324-bbc7-480c-8f16-eb454cfce5b7-images\") on node \"master-0\" DevicePath \"\"" Mar 12 20:57:07.545006 master-0 kubenswrapper[7484]: I0312 20:57:07.544933 7484 generic.go:334] "Generic (PLEG): container finished" podID="dc757324-bbc7-480c-8f16-eb454cfce5b7" containerID="46080b13bef182194efd2b2d3608506ec7883ec4491083fb69b6225e99e74e0d" exitCode=0 Mar 12 20:57:07.545006 master-0 kubenswrapper[7484]: I0312 20:57:07.544986 7484 generic.go:334] "Generic (PLEG): container finished" podID="dc757324-bbc7-480c-8f16-eb454cfce5b7" containerID="1430b04037ef547f317ef717574f05cb1c31ebbd55b6458ad724305d19dccf48" exitCode=0 Mar 12 20:57:07.545006 master-0 kubenswrapper[7484]: I0312 20:57:07.544999 7484 generic.go:334] "Generic (PLEG): container finished" podID="dc757324-bbc7-480c-8f16-eb454cfce5b7" containerID="b0982d9c490e0d4f03bb14267018977e3cf1434a3e3152b5870ec27a41ce753f" exitCode=0 Mar 12 20:57:07.545278 master-0 kubenswrapper[7484]: I0312 20:57:07.545055 7484 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-ml8vb" Mar 12 20:57:07.545278 master-0 kubenswrapper[7484]: I0312 20:57:07.545066 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-ml8vb" event={"ID":"dc757324-bbc7-480c-8f16-eb454cfce5b7","Type":"ContainerDied","Data":"46080b13bef182194efd2b2d3608506ec7883ec4491083fb69b6225e99e74e0d"} Mar 12 20:57:07.545278 master-0 kubenswrapper[7484]: I0312 20:57:07.545197 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-ml8vb" event={"ID":"dc757324-bbc7-480c-8f16-eb454cfce5b7","Type":"ContainerDied","Data":"1430b04037ef547f317ef717574f05cb1c31ebbd55b6458ad724305d19dccf48"} Mar 12 20:57:07.545278 master-0 kubenswrapper[7484]: I0312 20:57:07.545260 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-ml8vb" event={"ID":"dc757324-bbc7-480c-8f16-eb454cfce5b7","Type":"ContainerDied","Data":"b0982d9c490e0d4f03bb14267018977e3cf1434a3e3152b5870ec27a41ce753f"} Mar 12 20:57:07.545497 master-0 kubenswrapper[7484]: I0312 20:57:07.545280 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-ml8vb" event={"ID":"dc757324-bbc7-480c-8f16-eb454cfce5b7","Type":"ContainerDied","Data":"c63f4468bd86aa103b95bce5d28a42287cd68ad855d4dfda425a6f11d1825653"} Mar 12 20:57:07.545497 master-0 kubenswrapper[7484]: I0312 20:57:07.545296 7484 scope.go:117] "RemoveContainer" containerID="46080b13bef182194efd2b2d3608506ec7883ec4491083fb69b6225e99e74e0d" Mar 12 20:57:07.569054 master-0 kubenswrapper[7484]: I0312 20:57:07.569019 7484 scope.go:117] "RemoveContainer" 
containerID="1430b04037ef547f317ef717574f05cb1c31ebbd55b6458ad724305d19dccf48" Mar 12 20:57:07.586951 master-0 kubenswrapper[7484]: I0312 20:57:07.586898 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-ml8vb"] Mar 12 20:57:07.590991 master-0 kubenswrapper[7484]: I0312 20:57:07.590880 7484 scope.go:117] "RemoveContainer" containerID="b0982d9c490e0d4f03bb14267018977e3cf1434a3e3152b5870ec27a41ce753f" Mar 12 20:57:07.597579 master-0 kubenswrapper[7484]: I0312 20:57:07.597505 7484 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-ml8vb"] Mar 12 20:57:07.618585 master-0 kubenswrapper[7484]: I0312 20:57:07.618505 7484 scope.go:117] "RemoveContainer" containerID="46080b13bef182194efd2b2d3608506ec7883ec4491083fb69b6225e99e74e0d" Mar 12 20:57:07.619608 master-0 kubenswrapper[7484]: E0312 20:57:07.619546 7484 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46080b13bef182194efd2b2d3608506ec7883ec4491083fb69b6225e99e74e0d\": container with ID starting with 46080b13bef182194efd2b2d3608506ec7883ec4491083fb69b6225e99e74e0d not found: ID does not exist" containerID="46080b13bef182194efd2b2d3608506ec7883ec4491083fb69b6225e99e74e0d" Mar 12 20:57:07.619673 master-0 kubenswrapper[7484]: I0312 20:57:07.619610 7484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46080b13bef182194efd2b2d3608506ec7883ec4491083fb69b6225e99e74e0d"} err="failed to get container status \"46080b13bef182194efd2b2d3608506ec7883ec4491083fb69b6225e99e74e0d\": rpc error: code = NotFound desc = could not find container \"46080b13bef182194efd2b2d3608506ec7883ec4491083fb69b6225e99e74e0d\": container with ID starting with 46080b13bef182194efd2b2d3608506ec7883ec4491083fb69b6225e99e74e0d not 
found: ID does not exist" Mar 12 20:57:07.619673 master-0 kubenswrapper[7484]: I0312 20:57:07.619645 7484 scope.go:117] "RemoveContainer" containerID="1430b04037ef547f317ef717574f05cb1c31ebbd55b6458ad724305d19dccf48" Mar 12 20:57:07.620137 master-0 kubenswrapper[7484]: E0312 20:57:07.620092 7484 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1430b04037ef547f317ef717574f05cb1c31ebbd55b6458ad724305d19dccf48\": container with ID starting with 1430b04037ef547f317ef717574f05cb1c31ebbd55b6458ad724305d19dccf48 not found: ID does not exist" containerID="1430b04037ef547f317ef717574f05cb1c31ebbd55b6458ad724305d19dccf48" Mar 12 20:57:07.620197 master-0 kubenswrapper[7484]: I0312 20:57:07.620147 7484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1430b04037ef547f317ef717574f05cb1c31ebbd55b6458ad724305d19dccf48"} err="failed to get container status \"1430b04037ef547f317ef717574f05cb1c31ebbd55b6458ad724305d19dccf48\": rpc error: code = NotFound desc = could not find container \"1430b04037ef547f317ef717574f05cb1c31ebbd55b6458ad724305d19dccf48\": container with ID starting with 1430b04037ef547f317ef717574f05cb1c31ebbd55b6458ad724305d19dccf48 not found: ID does not exist" Mar 12 20:57:07.620197 master-0 kubenswrapper[7484]: I0312 20:57:07.620187 7484 scope.go:117] "RemoveContainer" containerID="b0982d9c490e0d4f03bb14267018977e3cf1434a3e3152b5870ec27a41ce753f" Mar 12 20:57:07.621049 master-0 kubenswrapper[7484]: E0312 20:57:07.620779 7484 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0982d9c490e0d4f03bb14267018977e3cf1434a3e3152b5870ec27a41ce753f\": container with ID starting with b0982d9c490e0d4f03bb14267018977e3cf1434a3e3152b5870ec27a41ce753f not found: ID does not exist" containerID="b0982d9c490e0d4f03bb14267018977e3cf1434a3e3152b5870ec27a41ce753f" Mar 12 20:57:07.621049 master-0 
kubenswrapper[7484]: I0312 20:57:07.620857 7484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0982d9c490e0d4f03bb14267018977e3cf1434a3e3152b5870ec27a41ce753f"} err="failed to get container status \"b0982d9c490e0d4f03bb14267018977e3cf1434a3e3152b5870ec27a41ce753f\": rpc error: code = NotFound desc = could not find container \"b0982d9c490e0d4f03bb14267018977e3cf1434a3e3152b5870ec27a41ce753f\": container with ID starting with b0982d9c490e0d4f03bb14267018977e3cf1434a3e3152b5870ec27a41ce753f not found: ID does not exist" Mar 12 20:57:07.621049 master-0 kubenswrapper[7484]: I0312 20:57:07.620888 7484 scope.go:117] "RemoveContainer" containerID="46080b13bef182194efd2b2d3608506ec7883ec4491083fb69b6225e99e74e0d" Mar 12 20:57:07.621380 master-0 kubenswrapper[7484]: I0312 20:57:07.621305 7484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46080b13bef182194efd2b2d3608506ec7883ec4491083fb69b6225e99e74e0d"} err="failed to get container status \"46080b13bef182194efd2b2d3608506ec7883ec4491083fb69b6225e99e74e0d\": rpc error: code = NotFound desc = could not find container \"46080b13bef182194efd2b2d3608506ec7883ec4491083fb69b6225e99e74e0d\": container with ID starting with 46080b13bef182194efd2b2d3608506ec7883ec4491083fb69b6225e99e74e0d not found: ID does not exist" Mar 12 20:57:07.621426 master-0 kubenswrapper[7484]: I0312 20:57:07.621386 7484 scope.go:117] "RemoveContainer" containerID="1430b04037ef547f317ef717574f05cb1c31ebbd55b6458ad724305d19dccf48" Mar 12 20:57:07.621706 master-0 kubenswrapper[7484]: I0312 20:57:07.621661 7484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1430b04037ef547f317ef717574f05cb1c31ebbd55b6458ad724305d19dccf48"} err="failed to get container status \"1430b04037ef547f317ef717574f05cb1c31ebbd55b6458ad724305d19dccf48\": rpc error: code = NotFound desc = could not find container 
\"1430b04037ef547f317ef717574f05cb1c31ebbd55b6458ad724305d19dccf48\": container with ID starting with 1430b04037ef547f317ef717574f05cb1c31ebbd55b6458ad724305d19dccf48 not found: ID does not exist" Mar 12 20:57:07.621706 master-0 kubenswrapper[7484]: I0312 20:57:07.621699 7484 scope.go:117] "RemoveContainer" containerID="b0982d9c490e0d4f03bb14267018977e3cf1434a3e3152b5870ec27a41ce753f" Mar 12 20:57:07.622475 master-0 kubenswrapper[7484]: I0312 20:57:07.622423 7484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0982d9c490e0d4f03bb14267018977e3cf1434a3e3152b5870ec27a41ce753f"} err="failed to get container status \"b0982d9c490e0d4f03bb14267018977e3cf1434a3e3152b5870ec27a41ce753f\": rpc error: code = NotFound desc = could not find container \"b0982d9c490e0d4f03bb14267018977e3cf1434a3e3152b5870ec27a41ce753f\": container with ID starting with b0982d9c490e0d4f03bb14267018977e3cf1434a3e3152b5870ec27a41ce753f not found: ID does not exist" Mar 12 20:57:07.622475 master-0 kubenswrapper[7484]: I0312 20:57:07.622464 7484 scope.go:117] "RemoveContainer" containerID="46080b13bef182194efd2b2d3608506ec7883ec4491083fb69b6225e99e74e0d" Mar 12 20:57:07.622939 master-0 kubenswrapper[7484]: I0312 20:57:07.622891 7484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46080b13bef182194efd2b2d3608506ec7883ec4491083fb69b6225e99e74e0d"} err="failed to get container status \"46080b13bef182194efd2b2d3608506ec7883ec4491083fb69b6225e99e74e0d\": rpc error: code = NotFound desc = could not find container \"46080b13bef182194efd2b2d3608506ec7883ec4491083fb69b6225e99e74e0d\": container with ID starting with 46080b13bef182194efd2b2d3608506ec7883ec4491083fb69b6225e99e74e0d not found: ID does not exist" Mar 12 20:57:07.622939 master-0 kubenswrapper[7484]: I0312 20:57:07.622927 7484 scope.go:117] "RemoveContainer" containerID="1430b04037ef547f317ef717574f05cb1c31ebbd55b6458ad724305d19dccf48" Mar 12 
20:57:07.623290 master-0 kubenswrapper[7484]: I0312 20:57:07.623238 7484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1430b04037ef547f317ef717574f05cb1c31ebbd55b6458ad724305d19dccf48"} err="failed to get container status \"1430b04037ef547f317ef717574f05cb1c31ebbd55b6458ad724305d19dccf48\": rpc error: code = NotFound desc = could not find container \"1430b04037ef547f317ef717574f05cb1c31ebbd55b6458ad724305d19dccf48\": container with ID starting with 1430b04037ef547f317ef717574f05cb1c31ebbd55b6458ad724305d19dccf48 not found: ID does not exist" Mar 12 20:57:07.623290 master-0 kubenswrapper[7484]: I0312 20:57:07.623279 7484 scope.go:117] "RemoveContainer" containerID="b0982d9c490e0d4f03bb14267018977e3cf1434a3e3152b5870ec27a41ce753f" Mar 12 20:57:07.623617 master-0 kubenswrapper[7484]: I0312 20:57:07.623569 7484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0982d9c490e0d4f03bb14267018977e3cf1434a3e3152b5870ec27a41ce753f"} err="failed to get container status \"b0982d9c490e0d4f03bb14267018977e3cf1434a3e3152b5870ec27a41ce753f\": rpc error: code = NotFound desc = could not find container \"b0982d9c490e0d4f03bb14267018977e3cf1434a3e3152b5870ec27a41ce753f\": container with ID starting with b0982d9c490e0d4f03bb14267018977e3cf1434a3e3152b5870ec27a41ce753f not found: ID does not exist" Mar 12 20:57:07.640965 master-0 kubenswrapper[7484]: I0312 20:57:07.640570 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl"] Mar 12 20:57:07.640965 master-0 kubenswrapper[7484]: E0312 20:57:07.640835 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc757324-bbc7-480c-8f16-eb454cfce5b7" containerName="kube-rbac-proxy" Mar 12 20:57:07.640965 master-0 kubenswrapper[7484]: I0312 20:57:07.640847 7484 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="dc757324-bbc7-480c-8f16-eb454cfce5b7" containerName="kube-rbac-proxy" Mar 12 20:57:07.640965 master-0 kubenswrapper[7484]: E0312 20:57:07.640859 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc757324-bbc7-480c-8f16-eb454cfce5b7" containerName="config-sync-controllers" Mar 12 20:57:07.640965 master-0 kubenswrapper[7484]: I0312 20:57:07.640876 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc757324-bbc7-480c-8f16-eb454cfce5b7" containerName="config-sync-controllers" Mar 12 20:57:07.640965 master-0 kubenswrapper[7484]: E0312 20:57:07.640905 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc757324-bbc7-480c-8f16-eb454cfce5b7" containerName="cluster-cloud-controller-manager" Mar 12 20:57:07.640965 master-0 kubenswrapper[7484]: I0312 20:57:07.640917 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc757324-bbc7-480c-8f16-eb454cfce5b7" containerName="cluster-cloud-controller-manager" Mar 12 20:57:07.641745 master-0 kubenswrapper[7484]: I0312 20:57:07.641025 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc757324-bbc7-480c-8f16-eb454cfce5b7" containerName="config-sync-controllers" Mar 12 20:57:07.641745 master-0 kubenswrapper[7484]: I0312 20:57:07.641049 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc757324-bbc7-480c-8f16-eb454cfce5b7" containerName="cluster-cloud-controller-manager" Mar 12 20:57:07.641745 master-0 kubenswrapper[7484]: I0312 20:57:07.641064 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc757324-bbc7-480c-8f16-eb454cfce5b7" containerName="kube-rbac-proxy" Mar 12 20:57:07.641993 master-0 kubenswrapper[7484]: I0312 20:57:07.641863 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" Mar 12 20:57:07.644334 master-0 kubenswrapper[7484]: I0312 20:57:07.644268 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 12 20:57:07.644508 master-0 kubenswrapper[7484]: I0312 20:57:07.644423 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-r4pnh" Mar 12 20:57:07.644508 master-0 kubenswrapper[7484]: I0312 20:57:07.644496 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 12 20:57:07.644948 master-0 kubenswrapper[7484]: I0312 20:57:07.644883 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 12 20:57:07.645483 master-0 kubenswrapper[7484]: I0312 20:57:07.645454 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 12 20:57:07.648935 master-0 kubenswrapper[7484]: I0312 20:57:07.648894 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 12 20:57:07.715608 master-0 kubenswrapper[7484]: I0312 20:57:07.715412 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp4mt\" (UniqueName: \"kubernetes.io/projected/f8467055-c9c9-4485-bb60-9a79e8b91268-kube-api-access-gp4mt\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-btpxl\" (UID: \"f8467055-c9c9-4485-bb60-9a79e8b91268\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" Mar 12 
20:57:07.715608 master-0 kubenswrapper[7484]: I0312 20:57:07.715510 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f8467055-c9c9-4485-bb60-9a79e8b91268-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-btpxl\" (UID: \"f8467055-c9c9-4485-bb60-9a79e8b91268\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" Mar 12 20:57:07.716043 master-0 kubenswrapper[7484]: I0312 20:57:07.715670 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/f8467055-c9c9-4485-bb60-9a79e8b91268-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-btpxl\" (UID: \"f8467055-c9c9-4485-bb60-9a79e8b91268\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" Mar 12 20:57:07.716043 master-0 kubenswrapper[7484]: I0312 20:57:07.715731 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/f8467055-c9c9-4485-bb60-9a79e8b91268-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-btpxl\" (UID: \"f8467055-c9c9-4485-bb60-9a79e8b91268\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" Mar 12 20:57:07.716043 master-0 kubenswrapper[7484]: I0312 20:57:07.715794 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f8467055-c9c9-4485-bb60-9a79e8b91268-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-btpxl\" (UID: \"f8467055-c9c9-4485-bb60-9a79e8b91268\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" Mar 12 20:57:07.745287 master-0 kubenswrapper[7484]: I0312 20:57:07.745204 7484 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc757324-bbc7-480c-8f16-eb454cfce5b7" path="/var/lib/kubelet/pods/dc757324-bbc7-480c-8f16-eb454cfce5b7/volumes" Mar 12 20:57:07.817933 master-0 kubenswrapper[7484]: I0312 20:57:07.817878 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f8467055-c9c9-4485-bb60-9a79e8b91268-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-btpxl\" (UID: \"f8467055-c9c9-4485-bb60-9a79e8b91268\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" Mar 12 20:57:07.818351 master-0 kubenswrapper[7484]: I0312 20:57:07.818321 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp4mt\" (UniqueName: \"kubernetes.io/projected/f8467055-c9c9-4485-bb60-9a79e8b91268-kube-api-access-gp4mt\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-btpxl\" (UID: \"f8467055-c9c9-4485-bb60-9a79e8b91268\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" Mar 12 20:57:07.818536 master-0 kubenswrapper[7484]: I0312 20:57:07.818504 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f8467055-c9c9-4485-bb60-9a79e8b91268-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-btpxl\" (UID: \"f8467055-c9c9-4485-bb60-9a79e8b91268\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" Mar 12 20:57:07.818787 master-0 kubenswrapper[7484]: I0312 20:57:07.818758 7484 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/f8467055-c9c9-4485-bb60-9a79e8b91268-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-btpxl\" (UID: \"f8467055-c9c9-4485-bb60-9a79e8b91268\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" Mar 12 20:57:07.819004 master-0 kubenswrapper[7484]: I0312 20:57:07.818977 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/f8467055-c9c9-4485-bb60-9a79e8b91268-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-btpxl\" (UID: \"f8467055-c9c9-4485-bb60-9a79e8b91268\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" Mar 12 20:57:07.819217 master-0 kubenswrapper[7484]: I0312 20:57:07.819120 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/f8467055-c9c9-4485-bb60-9a79e8b91268-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-btpxl\" (UID: \"f8467055-c9c9-4485-bb60-9a79e8b91268\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" Mar 12 20:57:07.819385 master-0 kubenswrapper[7484]: I0312 20:57:07.819223 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f8467055-c9c9-4485-bb60-9a79e8b91268-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-btpxl\" (UID: \"f8467055-c9c9-4485-bb60-9a79e8b91268\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" Mar 12 20:57:07.819541 master-0 kubenswrapper[7484]: I0312 20:57:07.819355 7484 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f8467055-c9c9-4485-bb60-9a79e8b91268-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-btpxl\" (UID: \"f8467055-c9c9-4485-bb60-9a79e8b91268\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" Mar 12 20:57:07.824954 master-0 kubenswrapper[7484]: I0312 20:57:07.824892 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/f8467055-c9c9-4485-bb60-9a79e8b91268-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-btpxl\" (UID: \"f8467055-c9c9-4485-bb60-9a79e8b91268\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" Mar 12 20:57:07.846324 master-0 kubenswrapper[7484]: I0312 20:57:07.846266 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp4mt\" (UniqueName: \"kubernetes.io/projected/f8467055-c9c9-4485-bb60-9a79e8b91268-kube-api-access-gp4mt\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-btpxl\" (UID: \"f8467055-c9c9-4485-bb60-9a79e8b91268\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" Mar 12 20:57:07.964936 master-0 kubenswrapper[7484]: I0312 20:57:07.964873 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" Mar 12 20:57:07.990138 master-0 kubenswrapper[7484]: W0312 20:57:07.990081 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8467055_c9c9_4485_bb60_9a79e8b91268.slice/crio-f3fa0bfd8e72d02ef09b3d76a758bf4cc154e7ad921d66404e7db2340d535749 WatchSource:0}: Error finding container f3fa0bfd8e72d02ef09b3d76a758bf4cc154e7ad921d66404e7db2340d535749: Status 404 returned error can't find the container with id f3fa0bfd8e72d02ef09b3d76a758bf4cc154e7ad921d66404e7db2340d535749 Mar 12 20:57:08.563602 master-0 kubenswrapper[7484]: I0312 20:57:08.560102 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" event={"ID":"f8467055-c9c9-4485-bb60-9a79e8b91268","Type":"ContainerStarted","Data":"35a48c44f0a4c7fdef814d1fdd69f5e797632637da5b33039378ae2cc0e1e688"} Mar 12 20:57:08.563602 master-0 kubenswrapper[7484]: I0312 20:57:08.560214 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" event={"ID":"f8467055-c9c9-4485-bb60-9a79e8b91268","Type":"ContainerStarted","Data":"f3fa0bfd8e72d02ef09b3d76a758bf4cc154e7ad921d66404e7db2340d535749"} Mar 12 20:57:08.933987 master-0 kubenswrapper[7484]: I0312 20:57:08.933910 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-ff46b7bdf-c7jz8"] Mar 12 20:57:08.935017 master-0 kubenswrapper[7484]: I0312 20:57:08.934982 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-c7jz8" Mar 12 20:57:08.937422 master-0 kubenswrapper[7484]: I0312 20:57:08.937373 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 12 20:57:08.937608 master-0 kubenswrapper[7484]: I0312 20:57:08.937583 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-lrwqt" Mar 12 20:57:08.950219 master-0 kubenswrapper[7484]: I0312 20:57:08.950001 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-ff46b7bdf-c7jz8"] Mar 12 20:57:09.038635 master-0 kubenswrapper[7484]: I0312 20:57:09.038533 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rfn6\" (UniqueName: \"kubernetes.io/projected/90f0e4da-71d4-4c4e-a2fc-9ef588daaf72-kube-api-access-2rfn6\") pod \"machine-config-controller-ff46b7bdf-c7jz8\" (UID: \"90f0e4da-71d4-4c4e-a2fc-9ef588daaf72\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-c7jz8" Mar 12 20:57:09.038635 master-0 kubenswrapper[7484]: I0312 20:57:09.038595 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/90f0e4da-71d4-4c4e-a2fc-9ef588daaf72-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-c7jz8\" (UID: \"90f0e4da-71d4-4c4e-a2fc-9ef588daaf72\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-c7jz8" Mar 12 20:57:09.038978 master-0 kubenswrapper[7484]: I0312 20:57:09.038676 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/90f0e4da-71d4-4c4e-a2fc-9ef588daaf72-mcc-auth-proxy-config\") pod 
\"machine-config-controller-ff46b7bdf-c7jz8\" (UID: \"90f0e4da-71d4-4c4e-a2fc-9ef588daaf72\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-c7jz8" Mar 12 20:57:09.141117 master-0 kubenswrapper[7484]: I0312 20:57:09.140880 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/90f0e4da-71d4-4c4e-a2fc-9ef588daaf72-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-c7jz8\" (UID: \"90f0e4da-71d4-4c4e-a2fc-9ef588daaf72\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-c7jz8" Mar 12 20:57:09.141316 master-0 kubenswrapper[7484]: I0312 20:57:09.141174 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/90f0e4da-71d4-4c4e-a2fc-9ef588daaf72-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-c7jz8\" (UID: \"90f0e4da-71d4-4c4e-a2fc-9ef588daaf72\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-c7jz8" Mar 12 20:57:09.141316 master-0 kubenswrapper[7484]: I0312 20:57:09.141292 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rfn6\" (UniqueName: \"kubernetes.io/projected/90f0e4da-71d4-4c4e-a2fc-9ef588daaf72-kube-api-access-2rfn6\") pod \"machine-config-controller-ff46b7bdf-c7jz8\" (UID: \"90f0e4da-71d4-4c4e-a2fc-9ef588daaf72\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-c7jz8" Mar 12 20:57:09.143471 master-0 kubenswrapper[7484]: I0312 20:57:09.143385 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/90f0e4da-71d4-4c4e-a2fc-9ef588daaf72-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-c7jz8\" (UID: \"90f0e4da-71d4-4c4e-a2fc-9ef588daaf72\") " 
pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-c7jz8" Mar 12 20:57:09.145701 master-0 kubenswrapper[7484]: I0312 20:57:09.145653 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/90f0e4da-71d4-4c4e-a2fc-9ef588daaf72-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-c7jz8\" (UID: \"90f0e4da-71d4-4c4e-a2fc-9ef588daaf72\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-c7jz8" Mar 12 20:57:09.200923 master-0 kubenswrapper[7484]: I0312 20:57:09.200861 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rfn6\" (UniqueName: \"kubernetes.io/projected/90f0e4da-71d4-4c4e-a2fc-9ef588daaf72-kube-api-access-2rfn6\") pod \"machine-config-controller-ff46b7bdf-c7jz8\" (UID: \"90f0e4da-71d4-4c4e-a2fc-9ef588daaf72\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-c7jz8" Mar 12 20:57:09.256536 master-0 kubenswrapper[7484]: I0312 20:57:09.256439 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-c7jz8" Mar 12 20:57:09.576401 master-0 kubenswrapper[7484]: I0312 20:57:09.576325 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" event={"ID":"f8467055-c9c9-4485-bb60-9a79e8b91268","Type":"ContainerStarted","Data":"ff71fea8bd50fe855ba215559a47a19999ebfe476ccf6050e9ff7dbdcfb3a30f"} Mar 12 20:57:09.576401 master-0 kubenswrapper[7484]: I0312 20:57:09.576383 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" event={"ID":"f8467055-c9c9-4485-bb60-9a79e8b91268","Type":"ContainerStarted","Data":"18344b8e4a33f4c35bb70a4b908fe016ad02097c53ac346b4a920c21a96bb7bc"} Mar 12 20:57:09.601928 master-0 kubenswrapper[7484]: I0312 20:57:09.601786 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" podStartSLOduration=2.601756269 podStartE2EDuration="2.601756269s" podCreationTimestamp="2026-03-12 20:57:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 20:57:09.601024232 +0000 UTC m=+442.086293064" watchObservedRunningTime="2026-03-12 20:57:09.601756269 +0000 UTC m=+442.087025101" Mar 12 20:57:09.671584 master-0 kubenswrapper[7484]: I0312 20:57:09.671433 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-ff46b7bdf-c7jz8"] Mar 12 20:57:09.681110 master-0 kubenswrapper[7484]: W0312 20:57:09.681031 7484 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90f0e4da_71d4_4c4e_a2fc_9ef588daaf72.slice/crio-9fe52a43f1e5ba1f28f24b6e5dc055fff1fcd846370585df5e4104b5c4279d2e WatchSource:0}: Error finding container 9fe52a43f1e5ba1f28f24b6e5dc055fff1fcd846370585df5e4104b5c4279d2e: Status 404 returned error can't find the container with id 9fe52a43f1e5ba1f28f24b6e5dc055fff1fcd846370585df5e4104b5c4279d2e Mar 12 20:57:10.080610 master-0 kubenswrapper[7484]: I0312 20:57:10.080524 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-79f8cd6fdd-hsv57"] Mar 12 20:57:10.082001 master-0 kubenswrapper[7484]: I0312 20:57:10.081955 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" Mar 12 20:57:10.085449 master-0 kubenswrapper[7484]: I0312 20:57:10.085371 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-7c67b67d47-bv4x6"] Mar 12 20:57:10.085711 master-0 kubenswrapper[7484]: I0312 20:57:10.085626 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 12 20:57:10.087122 master-0 kubenswrapper[7484]: I0312 20:57:10.087085 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 12 20:57:10.087196 master-0 kubenswrapper[7484]: I0312 20:57:10.087099 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-bv4x6" Mar 12 20:57:10.087480 master-0 kubenswrapper[7484]: I0312 20:57:10.087437 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 12 20:57:10.088276 master-0 kubenswrapper[7484]: I0312 20:57:10.088229 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 12 20:57:10.088338 master-0 kubenswrapper[7484]: I0312 20:57:10.088233 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 12 20:57:10.088338 master-0 kubenswrapper[7484]: I0312 20:57:10.088273 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 12 20:57:10.091530 master-0 kubenswrapper[7484]: I0312 20:57:10.091477 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmtk"] Mar 12 20:57:10.094417 master-0 kubenswrapper[7484]: I0312 20:57:10.094318 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmtk" Mar 12 20:57:10.097246 master-0 kubenswrapper[7484]: I0312 20:57:10.097180 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 12 20:57:10.111802 master-0 kubenswrapper[7484]: I0312 20:57:10.111757 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-7c67b67d47-bv4x6"] Mar 12 20:57:10.114713 master-0 kubenswrapper[7484]: I0312 20:57:10.114667 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmtk"] Mar 12 20:57:10.157336 master-0 kubenswrapper[7484]: I0312 20:57:10.157283 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a3828a1d-8180-4c7b-b423-4488f7fc0b76-default-certificate\") pod \"router-default-79f8cd6fdd-hsv57\" (UID: \"a3828a1d-8180-4c7b-b423-4488f7fc0b76\") " pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" Mar 12 20:57:10.157440 master-0 kubenswrapper[7484]: I0312 20:57:10.157362 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3828a1d-8180-4c7b-b423-4488f7fc0b76-service-ca-bundle\") pod \"router-default-79f8cd6fdd-hsv57\" (UID: \"a3828a1d-8180-4c7b-b423-4488f7fc0b76\") " pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" Mar 12 20:57:10.157440 master-0 kubenswrapper[7484]: I0312 20:57:10.157422 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/90f16d8c-25b6-4827-85d9-0995e4e1ab15-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-dfmtk\" (UID: 
\"90f16d8c-25b6-4827-85d9-0995e4e1ab15\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmtk" Mar 12 20:57:10.157692 master-0 kubenswrapper[7484]: I0312 20:57:10.157617 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf28c\" (UniqueName: \"kubernetes.io/projected/a3828a1d-8180-4c7b-b423-4488f7fc0b76-kube-api-access-lf28c\") pod \"router-default-79f8cd6fdd-hsv57\" (UID: \"a3828a1d-8180-4c7b-b423-4488f7fc0b76\") " pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" Mar 12 20:57:10.157918 master-0 kubenswrapper[7484]: I0312 20:57:10.157867 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a3828a1d-8180-4c7b-b423-4488f7fc0b76-metrics-certs\") pod \"router-default-79f8cd6fdd-hsv57\" (UID: \"a3828a1d-8180-4c7b-b423-4488f7fc0b76\") " pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" Mar 12 20:57:10.158034 master-0 kubenswrapper[7484]: I0312 20:57:10.158006 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a3828a1d-8180-4c7b-b423-4488f7fc0b76-stats-auth\") pod \"router-default-79f8cd6fdd-hsv57\" (UID: \"a3828a1d-8180-4c7b-b423-4488f7fc0b76\") " pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" Mar 12 20:57:10.158088 master-0 kubenswrapper[7484]: I0312 20:57:10.158065 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwqbt\" (UniqueName: \"kubernetes.io/projected/cc7b96ab-01af-442a-8eda-fc59e665a367-kube-api-access-vwqbt\") pod \"network-check-source-7c67b67d47-bv4x6\" (UID: \"cc7b96ab-01af-442a-8eda-fc59e665a367\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-bv4x6" Mar 12 20:57:10.260317 master-0 kubenswrapper[7484]: I0312 20:57:10.260246 7484 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf28c\" (UniqueName: \"kubernetes.io/projected/a3828a1d-8180-4c7b-b423-4488f7fc0b76-kube-api-access-lf28c\") pod \"router-default-79f8cd6fdd-hsv57\" (UID: \"a3828a1d-8180-4c7b-b423-4488f7fc0b76\") " pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" Mar 12 20:57:10.260834 master-0 kubenswrapper[7484]: I0312 20:57:10.260733 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a3828a1d-8180-4c7b-b423-4488f7fc0b76-metrics-certs\") pod \"router-default-79f8cd6fdd-hsv57\" (UID: \"a3828a1d-8180-4c7b-b423-4488f7fc0b76\") " pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" Mar 12 20:57:10.261369 master-0 kubenswrapper[7484]: I0312 20:57:10.261321 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a3828a1d-8180-4c7b-b423-4488f7fc0b76-stats-auth\") pod \"router-default-79f8cd6fdd-hsv57\" (UID: \"a3828a1d-8180-4c7b-b423-4488f7fc0b76\") " pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" Mar 12 20:57:10.261426 master-0 kubenswrapper[7484]: I0312 20:57:10.261402 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwqbt\" (UniqueName: \"kubernetes.io/projected/cc7b96ab-01af-442a-8eda-fc59e665a367-kube-api-access-vwqbt\") pod \"network-check-source-7c67b67d47-bv4x6\" (UID: \"cc7b96ab-01af-442a-8eda-fc59e665a367\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-bv4x6" Mar 12 20:57:10.261470 master-0 kubenswrapper[7484]: I0312 20:57:10.261456 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a3828a1d-8180-4c7b-b423-4488f7fc0b76-default-certificate\") pod \"router-default-79f8cd6fdd-hsv57\" (UID: \"a3828a1d-8180-4c7b-b423-4488f7fc0b76\") " 
pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" Mar 12 20:57:10.262068 master-0 kubenswrapper[7484]: I0312 20:57:10.262011 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3828a1d-8180-4c7b-b423-4488f7fc0b76-service-ca-bundle\") pod \"router-default-79f8cd6fdd-hsv57\" (UID: \"a3828a1d-8180-4c7b-b423-4488f7fc0b76\") " pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" Mar 12 20:57:10.262257 master-0 kubenswrapper[7484]: I0312 20:57:10.262170 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/90f16d8c-25b6-4827-85d9-0995e4e1ab15-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-dfmtk\" (UID: \"90f16d8c-25b6-4827-85d9-0995e4e1ab15\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmtk" Mar 12 20:57:10.263826 master-0 kubenswrapper[7484]: I0312 20:57:10.263754 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3828a1d-8180-4c7b-b423-4488f7fc0b76-service-ca-bundle\") pod \"router-default-79f8cd6fdd-hsv57\" (UID: \"a3828a1d-8180-4c7b-b423-4488f7fc0b76\") " pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" Mar 12 20:57:10.266699 master-0 kubenswrapper[7484]: I0312 20:57:10.266673 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a3828a1d-8180-4c7b-b423-4488f7fc0b76-stats-auth\") pod \"router-default-79f8cd6fdd-hsv57\" (UID: \"a3828a1d-8180-4c7b-b423-4488f7fc0b76\") " pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" Mar 12 20:57:10.268354 master-0 kubenswrapper[7484]: I0312 20:57:10.268294 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: 
\"kubernetes.io/secret/90f16d8c-25b6-4827-85d9-0995e4e1ab15-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-dfmtk\" (UID: \"90f16d8c-25b6-4827-85d9-0995e4e1ab15\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmtk" Mar 12 20:57:10.271772 master-0 kubenswrapper[7484]: I0312 20:57:10.271722 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a3828a1d-8180-4c7b-b423-4488f7fc0b76-metrics-certs\") pod \"router-default-79f8cd6fdd-hsv57\" (UID: \"a3828a1d-8180-4c7b-b423-4488f7fc0b76\") " pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" Mar 12 20:57:10.271914 master-0 kubenswrapper[7484]: I0312 20:57:10.271839 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a3828a1d-8180-4c7b-b423-4488f7fc0b76-default-certificate\") pod \"router-default-79f8cd6fdd-hsv57\" (UID: \"a3828a1d-8180-4c7b-b423-4488f7fc0b76\") " pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" Mar 12 20:57:10.289866 master-0 kubenswrapper[7484]: I0312 20:57:10.289830 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf28c\" (UniqueName: \"kubernetes.io/projected/a3828a1d-8180-4c7b-b423-4488f7fc0b76-kube-api-access-lf28c\") pod \"router-default-79f8cd6fdd-hsv57\" (UID: \"a3828a1d-8180-4c7b-b423-4488f7fc0b76\") " pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" Mar 12 20:57:10.294154 master-0 kubenswrapper[7484]: I0312 20:57:10.294071 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwqbt\" (UniqueName: \"kubernetes.io/projected/cc7b96ab-01af-442a-8eda-fc59e665a367-kube-api-access-vwqbt\") pod \"network-check-source-7c67b67d47-bv4x6\" (UID: \"cc7b96ab-01af-442a-8eda-fc59e665a367\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-bv4x6" Mar 12 20:57:10.410951 master-0 
kubenswrapper[7484]: I0312 20:57:10.410763 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" Mar 12 20:57:10.443464 master-0 kubenswrapper[7484]: W0312 20:57:10.443405 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3828a1d_8180_4c7b_b423_4488f7fc0b76.slice/crio-a2cd6729990b276c87e661d147e85e91d6d87584a9d3a473b3bb2dc19de5c406 WatchSource:0}: Error finding container a2cd6729990b276c87e661d147e85e91d6d87584a9d3a473b3bb2dc19de5c406: Status 404 returned error can't find the container with id a2cd6729990b276c87e661d147e85e91d6d87584a9d3a473b3bb2dc19de5c406 Mar 12 20:57:10.450550 master-0 kubenswrapper[7484]: I0312 20:57:10.450489 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-bv4x6" Mar 12 20:57:10.471784 master-0 kubenswrapper[7484]: I0312 20:57:10.471355 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmtk" Mar 12 20:57:10.603483 master-0 kubenswrapper[7484]: I0312 20:57:10.603213 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" event={"ID":"a3828a1d-8180-4c7b-b423-4488f7fc0b76","Type":"ContainerStarted","Data":"a2cd6729990b276c87e661d147e85e91d6d87584a9d3a473b3bb2dc19de5c406"} Mar 12 20:57:10.606162 master-0 kubenswrapper[7484]: I0312 20:57:10.606111 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-c7jz8" event={"ID":"90f0e4da-71d4-4c4e-a2fc-9ef588daaf72","Type":"ContainerStarted","Data":"0e92c489da498c72fe567f2ce11c3639307cada3c51bee43e0ae2d9a055f37be"} Mar 12 20:57:10.606260 master-0 kubenswrapper[7484]: I0312 20:57:10.606195 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-c7jz8" event={"ID":"90f0e4da-71d4-4c4e-a2fc-9ef588daaf72","Type":"ContainerStarted","Data":"abe372f4a5201ee9f2be20bd5b5a3dc0db95881ce3285f6e1c8475b0ef9714a6"} Mar 12 20:57:10.606260 master-0 kubenswrapper[7484]: I0312 20:57:10.606216 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-c7jz8" event={"ID":"90f0e4da-71d4-4c4e-a2fc-9ef588daaf72","Type":"ContainerStarted","Data":"9fe52a43f1e5ba1f28f24b6e5dc055fff1fcd846370585df5e4104b5c4279d2e"} Mar 12 20:57:10.640465 master-0 kubenswrapper[7484]: I0312 20:57:10.638996 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-c7jz8" podStartSLOduration=2.638972117 podStartE2EDuration="2.638972117s" podCreationTimestamp="2026-03-12 20:57:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-12 20:57:10.636976627 +0000 UTC m=+443.122245539" watchObservedRunningTime="2026-03-12 20:57:10.638972117 +0000 UTC m=+443.124240929" Mar 12 20:57:10.931464 master-0 kubenswrapper[7484]: I0312 20:57:10.931366 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-7c67b67d47-bv4x6"] Mar 12 20:57:11.009097 master-0 kubenswrapper[7484]: I0312 20:57:11.009051 7484 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 12 20:57:11.021056 master-0 kubenswrapper[7484]: I0312 20:57:11.021014 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmtk"] Mar 12 20:57:11.612700 master-0 kubenswrapper[7484]: I0312 20:57:11.612618 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-bv4x6" event={"ID":"cc7b96ab-01af-442a-8eda-fc59e665a367","Type":"ContainerStarted","Data":"8e599aa042738ddf49ed46c68087f754814a6d2835865abc990507f9b4b2c89e"} Mar 12 20:57:11.612700 master-0 kubenswrapper[7484]: I0312 20:57:11.612676 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-bv4x6" event={"ID":"cc7b96ab-01af-442a-8eda-fc59e665a367","Type":"ContainerStarted","Data":"ea7954299aa7bc681bbf2b7473af9292483dacae799b21a6511a23f7d0fb2fd7"} Mar 12 20:57:11.615250 master-0 kubenswrapper[7484]: I0312 20:57:11.615217 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmtk" event={"ID":"90f16d8c-25b6-4827-85d9-0995e4e1ab15","Type":"ContainerStarted","Data":"3f2fe9b256b0661c08a4a3ada19e5a95335c69cff21bdc38412e044b0f329672"} Mar 12 20:57:11.706442 master-0 kubenswrapper[7484]: I0312 20:57:11.706357 7484 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-bv4x6" podStartSLOduration=495.706337087 podStartE2EDuration="8m15.706337087s" podCreationTimestamp="2026-03-12 20:48:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 20:57:11.701718514 +0000 UTC m=+444.186987326" watchObservedRunningTime="2026-03-12 20:57:11.706337087 +0000 UTC m=+444.191605899" Mar 12 20:57:13.288800 master-0 kubenswrapper[7484]: I0312 20:57:13.288704 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-9j7rx_a3bebf49-1d92-4353-b84c-91ed86b7bb94/authentication-operator/1.log" Mar 12 20:57:13.433778 master-0 kubenswrapper[7484]: I0312 20:57:13.433726 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-mz2sr"] Mar 12 20:57:13.435210 master-0 kubenswrapper[7484]: I0312 20:57:13.435146 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-mz2sr" Mar 12 20:57:13.437901 master-0 kubenswrapper[7484]: I0312 20:57:13.437364 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-ct6dn" Mar 12 20:57:13.438987 master-0 kubenswrapper[7484]: I0312 20:57:13.438436 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 12 20:57:13.438987 master-0 kubenswrapper[7484]: I0312 20:57:13.438537 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 12 20:57:13.492582 master-0 kubenswrapper[7484]: I0312 20:57:13.492514 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-9j7rx_a3bebf49-1d92-4353-b84c-91ed86b7bb94/authentication-operator/2.log" Mar 12 20:57:13.534433 master-0 kubenswrapper[7484]: I0312 20:57:13.534361 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a5d6705e-e564-4774-94b4-ef11956c67b2-node-bootstrap-token\") pod \"machine-config-server-mz2sr\" (UID: \"a5d6705e-e564-4774-94b4-ef11956c67b2\") " pod="openshift-machine-config-operator/machine-config-server-mz2sr" Mar 12 20:57:13.534682 master-0 kubenswrapper[7484]: I0312 20:57:13.534461 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkvxh\" (UniqueName: \"kubernetes.io/projected/a5d6705e-e564-4774-94b4-ef11956c67b2-kube-api-access-dkvxh\") pod \"machine-config-server-mz2sr\" (UID: \"a5d6705e-e564-4774-94b4-ef11956c67b2\") " pod="openshift-machine-config-operator/machine-config-server-mz2sr" Mar 12 20:57:13.534682 master-0 kubenswrapper[7484]: I0312 20:57:13.534564 7484 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a5d6705e-e564-4774-94b4-ef11956c67b2-certs\") pod \"machine-config-server-mz2sr\" (UID: \"a5d6705e-e564-4774-94b4-ef11956c67b2\") " pod="openshift-machine-config-operator/machine-config-server-mz2sr" Mar 12 20:57:13.638794 master-0 kubenswrapper[7484]: I0312 20:57:13.638707 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkvxh\" (UniqueName: \"kubernetes.io/projected/a5d6705e-e564-4774-94b4-ef11956c67b2-kube-api-access-dkvxh\") pod \"machine-config-server-mz2sr\" (UID: \"a5d6705e-e564-4774-94b4-ef11956c67b2\") " pod="openshift-machine-config-operator/machine-config-server-mz2sr" Mar 12 20:57:13.639093 master-0 kubenswrapper[7484]: I0312 20:57:13.638845 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a5d6705e-e564-4774-94b4-ef11956c67b2-certs\") pod \"machine-config-server-mz2sr\" (UID: \"a5d6705e-e564-4774-94b4-ef11956c67b2\") " pod="openshift-machine-config-operator/machine-config-server-mz2sr" Mar 12 20:57:13.639093 master-0 kubenswrapper[7484]: I0312 20:57:13.638895 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a5d6705e-e564-4774-94b4-ef11956c67b2-node-bootstrap-token\") pod \"machine-config-server-mz2sr\" (UID: \"a5d6705e-e564-4774-94b4-ef11956c67b2\") " pod="openshift-machine-config-operator/machine-config-server-mz2sr" Mar 12 20:57:13.641915 master-0 kubenswrapper[7484]: I0312 20:57:13.639804 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmtk" event={"ID":"90f16d8c-25b6-4827-85d9-0995e4e1ab15","Type":"ContainerStarted","Data":"e36aa54c51f3db0250aa5133b223c534953ac7dbe77ba9843e508652c98db306"} Mar 12 20:57:13.641915 
master-0 kubenswrapper[7484]: I0312 20:57:13.640837 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmtk" Mar 12 20:57:13.642964 master-0 kubenswrapper[7484]: I0312 20:57:13.642932 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a5d6705e-e564-4774-94b4-ef11956c67b2-node-bootstrap-token\") pod \"machine-config-server-mz2sr\" (UID: \"a5d6705e-e564-4774-94b4-ef11956c67b2\") " pod="openshift-machine-config-operator/machine-config-server-mz2sr" Mar 12 20:57:13.645536 master-0 kubenswrapper[7484]: I0312 20:57:13.645484 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a5d6705e-e564-4774-94b4-ef11956c67b2-certs\") pod \"machine-config-server-mz2sr\" (UID: \"a5d6705e-e564-4774-94b4-ef11956c67b2\") " pod="openshift-machine-config-operator/machine-config-server-mz2sr" Mar 12 20:57:13.645536 master-0 kubenswrapper[7484]: I0312 20:57:13.645524 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" event={"ID":"a3828a1d-8180-4c7b-b423-4488f7fc0b76","Type":"ContainerStarted","Data":"41145e0fa78e157774eb7d7a70c1dca5f300d506a37a6e9227272112a6ab2153"} Mar 12 20:57:13.646786 master-0 kubenswrapper[7484]: I0312 20:57:13.646725 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmtk" Mar 12 20:57:13.669186 master-0 kubenswrapper[7484]: I0312 20:57:13.669115 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkvxh\" (UniqueName: \"kubernetes.io/projected/a5d6705e-e564-4774-94b4-ef11956c67b2-kube-api-access-dkvxh\") pod \"machine-config-server-mz2sr\" (UID: \"a5d6705e-e564-4774-94b4-ef11956c67b2\") " 
pod="openshift-machine-config-operator/machine-config-server-mz2sr" Mar 12 20:57:13.678540 master-0 kubenswrapper[7484]: I0312 20:57:13.678419 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmtk" podStartSLOduration=399.511910633 podStartE2EDuration="6m41.678385278s" podCreationTimestamp="2026-03-12 20:50:32 +0000 UTC" firstStartedPulling="2026-03-12 20:57:11.038034507 +0000 UTC m=+443.523303349" lastFinishedPulling="2026-03-12 20:57:13.204509152 +0000 UTC m=+445.689777994" observedRunningTime="2026-03-12 20:57:13.673780354 +0000 UTC m=+446.159049196" watchObservedRunningTime="2026-03-12 20:57:13.678385278 +0000 UTC m=+446.163654090" Mar 12 20:57:13.683830 master-0 kubenswrapper[7484]: I0312 20:57:13.683775 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-79f8cd6fdd-hsv57_a3828a1d-8180-4c7b-b423-4488f7fc0b76/router/0.log" Mar 12 20:57:13.710406 master-0 kubenswrapper[7484]: I0312 20:57:13.710162 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podStartSLOduration=413.954303502 podStartE2EDuration="6m56.71014573s" podCreationTimestamp="2026-03-12 20:50:17 +0000 UTC" firstStartedPulling="2026-03-12 20:57:10.446771647 +0000 UTC m=+442.932040449" lastFinishedPulling="2026-03-12 20:57:13.202613865 +0000 UTC m=+445.687882677" observedRunningTime="2026-03-12 20:57:13.70606305 +0000 UTC m=+446.191331862" watchObservedRunningTime="2026-03-12 20:57:13.71014573 +0000 UTC m=+446.195414542" Mar 12 20:57:13.777340 master-0 kubenswrapper[7484]: I0312 20:57:13.777247 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-mz2sr" Mar 12 20:57:13.801484 master-0 kubenswrapper[7484]: W0312 20:57:13.799639 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5d6705e_e564_4774_94b4_ef11956c67b2.slice/crio-bc595277804629f6ce8a44c0869ea22a63cd054ea4073256f850bdf1615f38cf WatchSource:0}: Error finding container bc595277804629f6ce8a44c0869ea22a63cd054ea4073256f850bdf1615f38cf: Status 404 returned error can't find the container with id bc595277804629f6ce8a44c0869ea22a63cd054ea4073256f850bdf1615f38cf Mar 12 20:57:13.880335 master-0 kubenswrapper[7484]: I0312 20:57:13.880270 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-7946996f87-nzb7c_36bd483b-292e-4e82-99d6-daa612cd385a/fix-audit-permissions/0.log" Mar 12 20:57:14.084855 master-0 kubenswrapper[7484]: I0312 20:57:14.084779 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-7946996f87-nzb7c_36bd483b-292e-4e82-99d6-daa612cd385a/oauth-apiserver/0.log" Mar 12 20:57:14.292143 master-0 kubenswrapper[7484]: I0312 20:57:14.292051 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-xh6r9_5471994f-769e-4124-b7d0-01f5358fc18f/etcd-operator/0.log" Mar 12 20:57:14.411791 master-0 kubenswrapper[7484]: I0312 20:57:14.411591 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" Mar 12 20:57:14.416154 master-0 kubenswrapper[7484]: I0312 20:57:14.416098 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:57:14.416154 master-0 kubenswrapper[7484]: 
[-]has-synced failed: reason withheld Mar 12 20:57:14.416154 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:57:14.416154 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:57:14.416425 master-0 kubenswrapper[7484]: I0312 20:57:14.416175 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:57:14.484137 master-0 kubenswrapper[7484]: I0312 20:57:14.484069 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-5ff8674d55-8fpdl"] Mar 12 20:57:14.485636 master-0 kubenswrapper[7484]: I0312 20:57:14.485600 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-5ff8674d55-8fpdl" Mar 12 20:57:14.490099 master-0 kubenswrapper[7484]: I0312 20:57:14.490069 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-rgtlp" Mar 12 20:57:14.490334 master-0 kubenswrapper[7484]: I0312 20:57:14.490301 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-xh6r9_5471994f-769e-4124-b7d0-01f5358fc18f/etcd-operator/1.log" Mar 12 20:57:14.490334 master-0 kubenswrapper[7484]: I0312 20:57:14.490324 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 12 20:57:14.490599 master-0 kubenswrapper[7484]: I0312 20:57:14.490580 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 12 20:57:14.490789 master-0 kubenswrapper[7484]: I0312 20:57:14.490766 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 12 20:57:14.496766 master-0 
kubenswrapper[7484]: I0312 20:57:14.496716 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-5ff8674d55-8fpdl"] Mar 12 20:57:14.553058 master-0 kubenswrapper[7484]: I0312 20:57:14.552993 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-8fpdl\" (UID: \"ea339fe1-c013-4c4b-90c9-aaaa7eb40d99\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-8fpdl" Mar 12 20:57:14.553058 master-0 kubenswrapper[7484]: I0312 20:57:14.553059 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-8fpdl\" (UID: \"ea339fe1-c013-4c4b-90c9-aaaa7eb40d99\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-8fpdl" Mar 12 20:57:14.553413 master-0 kubenswrapper[7484]: I0312 20:57:14.553091 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l2sm\" (UniqueName: \"kubernetes.io/projected/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99-kube-api-access-4l2sm\") pod \"prometheus-operator-5ff8674d55-8fpdl\" (UID: \"ea339fe1-c013-4c4b-90c9-aaaa7eb40d99\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-8fpdl" Mar 12 20:57:14.553413 master-0 kubenswrapper[7484]: I0312 20:57:14.553197 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-8fpdl\" (UID: \"ea339fe1-c013-4c4b-90c9-aaaa7eb40d99\") " 
pod="openshift-monitoring/prometheus-operator-5ff8674d55-8fpdl" Mar 12 20:57:14.653041 master-0 kubenswrapper[7484]: I0312 20:57:14.652935 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-mz2sr" event={"ID":"a5d6705e-e564-4774-94b4-ef11956c67b2","Type":"ContainerStarted","Data":"d341986c36af71608be8e2059730f18e324f6f7730f011c71151533f94e6d7b6"} Mar 12 20:57:14.653041 master-0 kubenswrapper[7484]: I0312 20:57:14.653033 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-mz2sr" event={"ID":"a5d6705e-e564-4774-94b4-ef11956c67b2","Type":"ContainerStarted","Data":"bc595277804629f6ce8a44c0869ea22a63cd054ea4073256f850bdf1615f38cf"} Mar 12 20:57:14.654480 master-0 kubenswrapper[7484]: I0312 20:57:14.654431 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-8fpdl\" (UID: \"ea339fe1-c013-4c4b-90c9-aaaa7eb40d99\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-8fpdl" Mar 12 20:57:14.654570 master-0 kubenswrapper[7484]: I0312 20:57:14.654522 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-8fpdl\" (UID: \"ea339fe1-c013-4c4b-90c9-aaaa7eb40d99\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-8fpdl" Mar 12 20:57:14.654637 master-0 kubenswrapper[7484]: I0312 20:57:14.654577 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l2sm\" (UniqueName: \"kubernetes.io/projected/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99-kube-api-access-4l2sm\") pod 
\"prometheus-operator-5ff8674d55-8fpdl\" (UID: \"ea339fe1-c013-4c4b-90c9-aaaa7eb40d99\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-8fpdl" Mar 12 20:57:14.654680 master-0 kubenswrapper[7484]: I0312 20:57:14.654619 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-8fpdl\" (UID: \"ea339fe1-c013-4c4b-90c9-aaaa7eb40d99\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-8fpdl" Mar 12 20:57:14.654959 master-0 kubenswrapper[7484]: E0312 20:57:14.654865 7484 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Mar 12 20:57:14.656948 master-0 kubenswrapper[7484]: E0312 20:57:14.655397 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99-prometheus-operator-tls podName:ea339fe1-c013-4c4b-90c9-aaaa7eb40d99 nodeName:}" failed. No retries permitted until 2026-03-12 20:57:15.155367979 +0000 UTC m=+447.640636781 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99-prometheus-operator-tls") pod "prometheus-operator-5ff8674d55-8fpdl" (UID: "ea339fe1-c013-4c4b-90c9-aaaa7eb40d99") : secret "prometheus-operator-tls" not found Mar 12 20:57:14.656948 master-0 kubenswrapper[7484]: I0312 20:57:14.655637 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-8fpdl\" (UID: \"ea339fe1-c013-4c4b-90c9-aaaa7eb40d99\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-8fpdl" Mar 12 20:57:14.669470 master-0 kubenswrapper[7484]: I0312 20:57:14.669341 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-8fpdl\" (UID: \"ea339fe1-c013-4c4b-90c9-aaaa7eb40d99\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-8fpdl" Mar 12 20:57:14.682742 master-0 kubenswrapper[7484]: I0312 20:57:14.679726 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l2sm\" (UniqueName: \"kubernetes.io/projected/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99-kube-api-access-4l2sm\") pod \"prometheus-operator-5ff8674d55-8fpdl\" (UID: \"ea339fe1-c013-4c4b-90c9-aaaa7eb40d99\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-8fpdl" Mar 12 20:57:14.690683 master-0 kubenswrapper[7484]: I0312 20:57:14.690625 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/setup/0.log" Mar 12 20:57:14.878985 master-0 kubenswrapper[7484]: I0312 20:57:14.878926 7484 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-ensure-env-vars/0.log" Mar 12 20:57:15.081638 master-0 kubenswrapper[7484]: I0312 20:57:15.081576 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-resources-copy/0.log" Mar 12 20:57:15.160956 master-0 kubenswrapper[7484]: I0312 20:57:15.160896 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-8fpdl\" (UID: \"ea339fe1-c013-4c4b-90c9-aaaa7eb40d99\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-8fpdl" Mar 12 20:57:15.165336 master-0 kubenswrapper[7484]: I0312 20:57:15.165293 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-8fpdl\" (UID: \"ea339fe1-c013-4c4b-90c9-aaaa7eb40d99\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-8fpdl" Mar 12 20:57:15.285780 master-0 kubenswrapper[7484]: I0312 20:57:15.285705 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcdctl/0.log" Mar 12 20:57:15.415293 master-0 kubenswrapper[7484]: I0312 20:57:15.415146 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-5ff8674d55-8fpdl" Mar 12 20:57:15.420514 master-0 kubenswrapper[7484]: I0312 20:57:15.420411 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:57:15.420514 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:57:15.420514 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:57:15.420514 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:57:15.420845 master-0 kubenswrapper[7484]: I0312 20:57:15.420520 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:57:15.482669 master-0 kubenswrapper[7484]: I0312 20:57:15.482617 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd/0.log" Mar 12 20:57:15.687197 master-0 kubenswrapper[7484]: I0312 20:57:15.686985 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log" Mar 12 20:57:15.834624 master-0 kubenswrapper[7484]: I0312 20:57:15.834517 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-mz2sr" podStartSLOduration=2.834479537 podStartE2EDuration="2.834479537s" podCreationTimestamp="2026-03-12 20:57:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 20:57:14.674189334 +0000 UTC m=+447.159458146" watchObservedRunningTime="2026-03-12 20:57:15.834479537 +0000 UTC 
m=+448.319748379" Mar 12 20:57:15.836098 master-0 kubenswrapper[7484]: I0312 20:57:15.836024 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-5ff8674d55-8fpdl"] Mar 12 20:57:15.836892 master-0 kubenswrapper[7484]: W0312 20:57:15.836835 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea339fe1_c013_4c4b_90c9_aaaa7eb40d99.slice/crio-bc93b3cd44963703c77eaa6364e36c15a950d185dbccf5b3377bd9dda6a701b9 WatchSource:0}: Error finding container bc93b3cd44963703c77eaa6364e36c15a950d185dbccf5b3377bd9dda6a701b9: Status 404 returned error can't find the container with id bc93b3cd44963703c77eaa6364e36c15a950d185dbccf5b3377bd9dda6a701b9 Mar 12 20:57:15.881637 master-0 kubenswrapper[7484]: I0312 20:57:15.881570 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-readyz/0.log" Mar 12 20:57:16.079721 master-0 kubenswrapper[7484]: I0312 20:57:16.079645 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log" Mar 12 20:57:16.288213 master-0 kubenswrapper[7484]: I0312 20:57:16.288134 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_4d69687f-b8a5-4643-8268-ce30df5db3bc/installer/0.log" Mar 12 20:57:16.415986 master-0 kubenswrapper[7484]: I0312 20:57:16.415721 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:57:16.415986 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:57:16.415986 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:57:16.415986 master-0 kubenswrapper[7484]: healthz check failed Mar 
12 20:57:16.415986 master-0 kubenswrapper[7484]: I0312 20:57:16.415932 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:57:16.487552 master-0 kubenswrapper[7484]: I0312 20:57:16.487462 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-56nzk_784599a3-a2ac-46ac-a4b7-9439704646cc/kube-apiserver-operator/0.log" Mar 12 20:57:16.669617 master-0 kubenswrapper[7484]: I0312 20:57:16.669420 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5ff8674d55-8fpdl" event={"ID":"ea339fe1-c013-4c4b-90c9-aaaa7eb40d99","Type":"ContainerStarted","Data":"bc93b3cd44963703c77eaa6364e36c15a950d185dbccf5b3377bd9dda6a701b9"} Mar 12 20:57:16.680971 master-0 kubenswrapper[7484]: I0312 20:57:16.680842 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-56nzk_784599a3-a2ac-46ac-a4b7-9439704646cc/kube-apiserver-operator/1.log" Mar 12 20:57:16.879676 master-0 kubenswrapper[7484]: I0312 20:57:16.879539 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_5f77c8e18b751d90bc0dfe2d4e304050/setup/0.log" Mar 12 20:57:17.087824 master-0 kubenswrapper[7484]: I0312 20:57:17.087770 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_5f77c8e18b751d90bc0dfe2d4e304050/kube-apiserver/0.log" Mar 12 20:57:17.279178 master-0 kubenswrapper[7484]: I0312 20:57:17.279130 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_5f77c8e18b751d90bc0dfe2d4e304050/kube-apiserver-insecure-readyz/0.log" Mar 12 
20:57:17.414892 master-0 kubenswrapper[7484]: I0312 20:57:17.414733 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:57:17.414892 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:57:17.414892 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:57:17.414892 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:57:17.414892 master-0 kubenswrapper[7484]: I0312 20:57:17.414868 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:57:17.489572 master-0 kubenswrapper[7484]: I0312 20:57:17.489517 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_869e3d2a-1b5c-426f-945a-ddd44a9a5033/installer/0.log" Mar 12 20:57:17.677701 master-0 kubenswrapper[7484]: I0312 20:57:17.677486 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5ff8674d55-8fpdl" event={"ID":"ea339fe1-c013-4c4b-90c9-aaaa7eb40d99","Type":"ContainerStarted","Data":"503b8d46e3972cc7cb43c54a36f8be73ce30148645fe76fb6ed57009b7fd738b"} Mar 12 20:57:17.686914 master-0 kubenswrapper[7484]: I0312 20:57:17.686227 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_367123ca-5a21-415c-8ac2-6d875696536b/installer/0.log" Mar 12 20:57:17.882293 master-0 kubenswrapper[7484]: I0312 20:57:17.882228 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7d54a9c5cfaefbffe1b215272d01bc0c/kube-controller-manager/0.log" Mar 12 
20:57:18.090669 master-0 kubenswrapper[7484]: I0312 20:57:18.090550 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7d54a9c5cfaefbffe1b215272d01bc0c/cluster-policy-controller/0.log" Mar 12 20:57:18.283790 master-0 kubenswrapper[7484]: I0312 20:57:18.283532 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7d54a9c5cfaefbffe1b215272d01bc0c/kube-controller-manager-cert-syncer/0.log" Mar 12 20:57:18.414774 master-0 kubenswrapper[7484]: I0312 20:57:18.414590 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:57:18.414774 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:57:18.414774 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:57:18.414774 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:57:18.414774 master-0 kubenswrapper[7484]: I0312 20:57:18.414696 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:57:18.482282 master-0 kubenswrapper[7484]: I0312 20:57:18.482183 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7d54a9c5cfaefbffe1b215272d01bc0c/kube-controller-manager-recovery-controller/0.log" Mar 12 20:57:18.689747 master-0 kubenswrapper[7484]: I0312 20:57:18.689550 7484 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-f2kg4_96bd86df-2101-47f5-844b-1332261c66f1/kube-controller-manager-operator/0.log" Mar 12 20:57:18.693491 master-0 kubenswrapper[7484]: I0312 20:57:18.693424 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5ff8674d55-8fpdl" event={"ID":"ea339fe1-c013-4c4b-90c9-aaaa7eb40d99","Type":"ContainerStarted","Data":"833dbdab99f85e1d66629c5379755ff339d4e0e1499b71fc7997688f9cbf31b6"} Mar 12 20:57:18.731214 master-0 kubenswrapper[7484]: I0312 20:57:18.731100 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-5ff8674d55-8fpdl" podStartSLOduration=3.149731449 podStartE2EDuration="4.731073045s" podCreationTimestamp="2026-03-12 20:57:14 +0000 UTC" firstStartedPulling="2026-03-12 20:57:15.839583602 +0000 UTC m=+448.324852434" lastFinishedPulling="2026-03-12 20:57:17.420925218 +0000 UTC m=+449.906194030" observedRunningTime="2026-03-12 20:57:18.726022121 +0000 UTC m=+451.211290963" watchObservedRunningTime="2026-03-12 20:57:18.731073045 +0000 UTC m=+451.216341877" Mar 12 20:57:18.888961 master-0 kubenswrapper[7484]: I0312 20:57:18.888863 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-f2kg4_96bd86df-2101-47f5-844b-1332261c66f1/kube-controller-manager-operator/1.log" Mar 12 20:57:19.087345 master-0 kubenswrapper[7484]: I0312 20:57:19.087264 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_a1a56802af72ce1aac6b5077f1695ac0/kube-scheduler/0.log" Mar 12 20:57:19.289336 master-0 kubenswrapper[7484]: I0312 20:57:19.289284 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_a1a56802af72ce1aac6b5077f1695ac0/kube-scheduler/1.log" Mar 12 
20:57:19.415528 master-0 kubenswrapper[7484]: I0312 20:57:19.415309 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:57:19.415528 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:57:19.415528 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:57:19.415528 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:57:19.415528 master-0 kubenswrapper[7484]: I0312 20:57:19.415397 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:57:19.484050 master-0 kubenswrapper[7484]: I0312 20:57:19.483961 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_954fe7f9-e138-49ab-ab8e-504b75914100/installer/0.log" Mar 12 20:57:19.686798 master-0 kubenswrapper[7484]: I0312 20:57:19.686576 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5c74bfc494-269gt_4a67ecf3-823d-4948-a5cb-8bd1eb9f259c/kube-scheduler-operator-container/0.log" Mar 12 20:57:19.881859 master-0 kubenswrapper[7484]: I0312 20:57:19.881776 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5c74bfc494-269gt_4a67ecf3-823d-4948-a5cb-8bd1eb9f259c/kube-scheduler-operator-container/1.log" Mar 12 20:57:20.080787 master-0 kubenswrapper[7484]: I0312 20:57:20.080748 7484 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-799b6db4d7-jwthf_15ebfbd8-0782-431a-88a3-83af328498d2/openshift-apiserver-operator/1.log" Mar 12 20:57:20.282264 master-0 kubenswrapper[7484]: I0312 20:57:20.282185 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-799b6db4d7-jwthf_15ebfbd8-0782-431a-88a3-83af328498d2/openshift-apiserver-operator/2.log" Mar 12 20:57:20.411797 master-0 kubenswrapper[7484]: I0312 20:57:20.411624 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" Mar 12 20:57:20.414240 master-0 kubenswrapper[7484]: I0312 20:57:20.414177 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:57:20.414240 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:57:20.414240 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:57:20.414240 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:57:20.414366 master-0 kubenswrapper[7484]: I0312 20:57:20.414262 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:57:20.484466 master-0 kubenswrapper[7484]: I0312 20:57:20.484404 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-84fb785f4-kl52q_70baf3e2-83bb-4156-afb3-30ca8e3d1d9d/fix-audit-permissions/0.log" Mar 12 20:57:20.682543 master-0 kubenswrapper[7484]: I0312 20:57:20.682429 7484 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-apiserver_apiserver-84fb785f4-kl52q_70baf3e2-83bb-4156-afb3-30ca8e3d1d9d/openshift-apiserver/0.log"
Mar 12 20:57:20.874314 master-0 kubenswrapper[7484]: I0312 20:57:20.873625 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-74cc79fd76-bdmlf"]
Mar 12 20:57:20.874970 master-0 kubenswrapper[7484]: I0312 20:57:20.874827 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-bdmlf"
Mar 12 20:57:20.879185 master-0 kubenswrapper[7484]: I0312 20:57:20.879139 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Mar 12 20:57:20.879604 master-0 kubenswrapper[7484]: I0312 20:57:20.879569 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-mc5vw"
Mar 12 20:57:20.881899 master-0 kubenswrapper[7484]: I0312 20:57:20.879419 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Mar 12 20:57:20.882385 master-0 kubenswrapper[7484]: I0312 20:57:20.882101 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-lkmd7"]
Mar 12 20:57:20.887956 master-0 kubenswrapper[7484]: I0312 20:57:20.887900 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-lkmd7"
Mar 12 20:57:20.892268 master-0 kubenswrapper[7484]: I0312 20:57:20.890879 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-xgssr"
Mar 12 20:57:20.892268 master-0 kubenswrapper[7484]: I0312 20:57:20.891064 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Mar 12 20:57:20.892268 master-0 kubenswrapper[7484]: I0312 20:57:20.891173 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Mar 12 20:57:20.900712 master-0 kubenswrapper[7484]: I0312 20:57:20.900422 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-74cc79fd76-bdmlf"]
Mar 12 20:57:20.938673 master-0 kubenswrapper[7484]: I0312 20:57:20.938546 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-84fb785f4-kl52q_70baf3e2-83bb-4156-afb3-30ca8e3d1d9d/openshift-apiserver-check-endpoints/0.log"
Mar 12 20:57:20.957193 master-0 kubenswrapper[7484]: I0312 20:57:20.956397 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7667a111-e744-47b2-9603-3864347dc738-metrics-client-ca\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7"
Mar 12 20:57:20.957193 master-0 kubenswrapper[7484]: I0312 20:57:20.956458 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/7667a111-e744-47b2-9603-3864347dc738-node-exporter-tls\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7"
Mar 12 20:57:20.957193 master-0 kubenswrapper[7484]: I0312 20:57:20.956495 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/7667a111-e744-47b2-9603-3864347dc738-node-exporter-wtmp\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7"
Mar 12 20:57:20.957193 master-0 kubenswrapper[7484]: I0312 20:57:20.956535 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ed1c4da2-564b-4354-a4ec-27b801098aa5-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-bdmlf\" (UID: \"ed1c4da2-564b-4354-a4ec-27b801098aa5\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-bdmlf"
Mar 12 20:57:20.957193 master-0 kubenswrapper[7484]: I0312 20:57:20.956567 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7667a111-e744-47b2-9603-3864347dc738-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7"
Mar 12 20:57:20.957193 master-0 kubenswrapper[7484]: I0312 20:57:20.956598 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7667a111-e744-47b2-9603-3864347dc738-sys\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7"
Mar 12 20:57:20.957193 master-0 kubenswrapper[7484]: I0312 20:57:20.956633 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp84p\" (UniqueName: \"kubernetes.io/projected/7667a111-e744-47b2-9603-3864347dc738-kube-api-access-mp84p\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7"
Mar 12 20:57:20.957193 master-0 kubenswrapper[7484]: I0312 20:57:20.956657 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ed1c4da2-564b-4354-a4ec-27b801098aa5-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-bdmlf\" (UID: \"ed1c4da2-564b-4354-a4ec-27b801098aa5\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-bdmlf"
Mar 12 20:57:20.957193 master-0 kubenswrapper[7484]: I0312 20:57:20.956683 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/7667a111-e744-47b2-9603-3864347dc738-root\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7"
Mar 12 20:57:20.957193 master-0 kubenswrapper[7484]: I0312 20:57:20.956707 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hvwg\" (UniqueName: \"kubernetes.io/projected/ed1c4da2-564b-4354-a4ec-27b801098aa5-kube-api-access-2hvwg\") pod \"openshift-state-metrics-74cc79fd76-bdmlf\" (UID: \"ed1c4da2-564b-4354-a4ec-27b801098aa5\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-bdmlf"
Mar 12 20:57:20.957193 master-0 kubenswrapper[7484]: I0312 20:57:20.956746 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/7667a111-e744-47b2-9603-3864347dc738-node-exporter-textfile\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7"
Mar 12 20:57:20.957193 master-0 kubenswrapper[7484]: I0312 20:57:20.956772 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ed1c4da2-564b-4354-a4ec-27b801098aa5-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-bdmlf\" (UID: \"ed1c4da2-564b-4354-a4ec-27b801098aa5\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-bdmlf"
Mar 12 20:57:20.960870 master-0 kubenswrapper[7484]: I0312 20:57:20.960053 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr"]
Mar 12 20:57:20.965703 master-0 kubenswrapper[7484]: I0312 20:57:20.961674 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr"
Mar 12 20:57:20.966211 master-0 kubenswrapper[7484]: I0312 20:57:20.966176 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-vr86d"
Mar 12 20:57:20.966399 master-0 kubenswrapper[7484]: I0312 20:57:20.966371 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Mar 12 20:57:20.966604 master-0 kubenswrapper[7484]: I0312 20:57:20.966213 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Mar 12 20:57:20.966997 master-0 kubenswrapper[7484]: I0312 20:57:20.966970 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Mar 12 20:57:21.013073 master-0 kubenswrapper[7484]: I0312 20:57:21.012982 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr"]
Mar 12 20:57:21.057833 master-0 kubenswrapper[7484]: I0312 20:57:21.057745 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp84p\" (UniqueName: \"kubernetes.io/projected/7667a111-e744-47b2-9603-3864347dc738-kube-api-access-mp84p\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7"
Mar 12 20:57:21.057833 master-0 kubenswrapper[7484]: I0312 20:57:21.057800 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ed1c4da2-564b-4354-a4ec-27b801098aa5-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-bdmlf\" (UID: \"ed1c4da2-564b-4354-a4ec-27b801098aa5\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-bdmlf"
Mar 12 20:57:21.058071 master-0 kubenswrapper[7484]: I0312 20:57:21.057856 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-4tfmr\" (UID: \"4ebc9ee1-3913-4112-bb3f-c79f2c08032b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr"
Mar 12 20:57:21.058071 master-0 kubenswrapper[7484]: I0312 20:57:21.057895 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/7667a111-e744-47b2-9603-3864347dc738-root\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7"
Mar 12 20:57:21.058071 master-0 kubenswrapper[7484]: I0312 20:57:21.057925 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hvwg\" (UniqueName: \"kubernetes.io/projected/ed1c4da2-564b-4354-a4ec-27b801098aa5-kube-api-access-2hvwg\") pod \"openshift-state-metrics-74cc79fd76-bdmlf\" (UID: \"ed1c4da2-564b-4354-a4ec-27b801098aa5\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-bdmlf"
Mar 12 20:57:21.058071 master-0 kubenswrapper[7484]: I0312 20:57:21.057949 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-4tfmr\" (UID: \"4ebc9ee1-3913-4112-bb3f-c79f2c08032b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr"
Mar 12 20:57:21.058071 master-0 kubenswrapper[7484]: I0312 20:57:21.057971 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/7667a111-e744-47b2-9603-3864347dc738-node-exporter-textfile\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7"
Mar 12 20:57:21.058071 master-0 kubenswrapper[7484]: I0312 20:57:21.057988 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gg7v\" (UniqueName: \"kubernetes.io/projected/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-kube-api-access-7gg7v\") pod \"kube-state-metrics-68b88f8cb5-4tfmr\" (UID: \"4ebc9ee1-3913-4112-bb3f-c79f2c08032b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr"
Mar 12 20:57:21.058071 master-0 kubenswrapper[7484]: I0312 20:57:21.058007 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ed1c4da2-564b-4354-a4ec-27b801098aa5-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-bdmlf\" (UID: \"ed1c4da2-564b-4354-a4ec-27b801098aa5\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-bdmlf"
Mar 12 20:57:21.058071 master-0 kubenswrapper[7484]: I0312 20:57:21.058040 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7667a111-e744-47b2-9603-3864347dc738-metrics-client-ca\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7"
Mar 12 20:57:21.058071 master-0 kubenswrapper[7484]: I0312 20:57:21.058060 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-4tfmr\" (UID: \"4ebc9ee1-3913-4112-bb3f-c79f2c08032b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr"
Mar 12 20:57:21.058071 master-0 kubenswrapper[7484]: I0312 20:57:21.058079 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/7667a111-e744-47b2-9603-3864347dc738-node-exporter-tls\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7"
Mar 12 20:57:21.058381 master-0 kubenswrapper[7484]: I0312 20:57:21.058101 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/7667a111-e744-47b2-9603-3864347dc738-node-exporter-wtmp\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7"
Mar 12 20:57:21.058381 master-0 kubenswrapper[7484]: I0312 20:57:21.058128 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-4tfmr\" (UID: \"4ebc9ee1-3913-4112-bb3f-c79f2c08032b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr"
Mar 12 20:57:21.058381 master-0 kubenswrapper[7484]: I0312 20:57:21.058149 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ed1c4da2-564b-4354-a4ec-27b801098aa5-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-bdmlf\" (UID: \"ed1c4da2-564b-4354-a4ec-27b801098aa5\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-bdmlf"
Mar 12 20:57:21.058381 master-0 kubenswrapper[7484]: I0312 20:57:21.058168 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-4tfmr\" (UID: \"4ebc9ee1-3913-4112-bb3f-c79f2c08032b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr"
Mar 12 20:57:21.058381 master-0 kubenswrapper[7484]: I0312 20:57:21.058188 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7667a111-e744-47b2-9603-3864347dc738-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7"
Mar 12 20:57:21.058381 master-0 kubenswrapper[7484]: I0312 20:57:21.058228 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7667a111-e744-47b2-9603-3864347dc738-sys\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7"
Mar 12 20:57:21.058381 master-0 kubenswrapper[7484]: I0312 20:57:21.058310 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7667a111-e744-47b2-9603-3864347dc738-sys\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7"
Mar 12 20:57:21.059276 master-0 kubenswrapper[7484]: E0312 20:57:21.059249 7484 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: secret "openshift-state-metrics-tls" not found
Mar 12 20:57:21.059343 master-0 kubenswrapper[7484]: I0312 20:57:21.059250 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/7667a111-e744-47b2-9603-3864347dc738-root\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7"
Mar 12 20:57:21.059412 master-0 kubenswrapper[7484]: I0312 20:57:21.059376 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7667a111-e744-47b2-9603-3864347dc738-metrics-client-ca\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7"
Mar 12 20:57:21.059481 master-0 kubenswrapper[7484]: I0312 20:57:21.059432 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ed1c4da2-564b-4354-a4ec-27b801098aa5-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-bdmlf\" (UID: \"ed1c4da2-564b-4354-a4ec-27b801098aa5\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-bdmlf"
Mar 12 20:57:21.059600 master-0 kubenswrapper[7484]: E0312 20:57:21.059587 7484 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed1c4da2-564b-4354-a4ec-27b801098aa5-openshift-state-metrics-tls podName:ed1c4da2-564b-4354-a4ec-27b801098aa5 nodeName:}" failed. No retries permitted until 2026-03-12 20:57:21.559570075 +0000 UTC m=+454.044838877 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/ed1c4da2-564b-4354-a4ec-27b801098aa5-openshift-state-metrics-tls") pod "openshift-state-metrics-74cc79fd76-bdmlf" (UID: "ed1c4da2-564b-4354-a4ec-27b801098aa5") : secret "openshift-state-metrics-tls" not found
Mar 12 20:57:21.059684 master-0 kubenswrapper[7484]: I0312 20:57:21.059660 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/7667a111-e744-47b2-9603-3864347dc738-node-exporter-textfile\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7"
Mar 12 20:57:21.059779 master-0 kubenswrapper[7484]: I0312 20:57:21.059741 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/7667a111-e744-47b2-9603-3864347dc738-node-exporter-wtmp\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7"
Mar 12 20:57:21.062704 master-0 kubenswrapper[7484]: I0312 20:57:21.062663 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/7667a111-e744-47b2-9603-3864347dc738-node-exporter-tls\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7"
Mar 12 20:57:21.062773 master-0 kubenswrapper[7484]: I0312 20:57:21.062676 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ed1c4da2-564b-4354-a4ec-27b801098aa5-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-bdmlf\" (UID: \"ed1c4da2-564b-4354-a4ec-27b801098aa5\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-bdmlf"
Mar 12 20:57:21.065257 master-0 kubenswrapper[7484]: I0312 20:57:21.065220 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7667a111-e744-47b2-9603-3864347dc738-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7"
Mar 12 20:57:21.082866 master-0 kubenswrapper[7484]: I0312 20:57:21.082502 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hvwg\" (UniqueName: \"kubernetes.io/projected/ed1c4da2-564b-4354-a4ec-27b801098aa5-kube-api-access-2hvwg\") pod \"openshift-state-metrics-74cc79fd76-bdmlf\" (UID: \"ed1c4da2-564b-4354-a4ec-27b801098aa5\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-bdmlf"
Mar 12 20:57:21.084055 master-0 kubenswrapper[7484]: I0312 20:57:21.084011 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-xh6r9_5471994f-769e-4124-b7d0-01f5358fc18f/etcd-operator/0.log"
Mar 12 20:57:21.084749 master-0 kubenswrapper[7484]: I0312 20:57:21.084580 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp84p\" (UniqueName: \"kubernetes.io/projected/7667a111-e744-47b2-9603-3864347dc738-kube-api-access-mp84p\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7"
Mar 12 20:57:21.159180 master-0 kubenswrapper[7484]: I0312 20:57:21.159116 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-4tfmr\" (UID: \"4ebc9ee1-3913-4112-bb3f-c79f2c08032b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr"
Mar 12 20:57:21.159180 master-0 kubenswrapper[7484]: I0312 20:57:21.159180 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-4tfmr\" (UID: \"4ebc9ee1-3913-4112-bb3f-c79f2c08032b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr"
Mar 12 20:57:21.159498 master-0 kubenswrapper[7484]: I0312 20:57:21.159220 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-4tfmr\" (UID: \"4ebc9ee1-3913-4112-bb3f-c79f2c08032b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr"
Mar 12 20:57:21.159883 master-0 kubenswrapper[7484]: I0312 20:57:21.159841 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-4tfmr\" (UID: \"4ebc9ee1-3913-4112-bb3f-c79f2c08032b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr"
Mar 12 20:57:21.159947 master-0 kubenswrapper[7484]: I0312 20:57:21.159915 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-4tfmr\" (UID: \"4ebc9ee1-3913-4112-bb3f-c79f2c08032b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr"
Mar 12 20:57:21.159947 master-0 kubenswrapper[7484]: I0312 20:57:21.159943 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gg7v\" (UniqueName: \"kubernetes.io/projected/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-kube-api-access-7gg7v\") pod \"kube-state-metrics-68b88f8cb5-4tfmr\" (UID: \"4ebc9ee1-3913-4112-bb3f-c79f2c08032b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr"
Mar 12 20:57:21.160105 master-0 kubenswrapper[7484]: I0312 20:57:21.160074 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-4tfmr\" (UID: \"4ebc9ee1-3913-4112-bb3f-c79f2c08032b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr"
Mar 12 20:57:21.160531 master-0 kubenswrapper[7484]: I0312 20:57:21.160501 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-4tfmr\" (UID: \"4ebc9ee1-3913-4112-bb3f-c79f2c08032b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr"
Mar 12 20:57:21.160923 master-0 kubenswrapper[7484]: I0312 20:57:21.160883 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-4tfmr\" (UID: \"4ebc9ee1-3913-4112-bb3f-c79f2c08032b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr"
Mar 12 20:57:21.162393 master-0 kubenswrapper[7484]: I0312 20:57:21.162372 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-4tfmr\" (UID: \"4ebc9ee1-3913-4112-bb3f-c79f2c08032b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr"
Mar 12 20:57:21.164487 master-0 kubenswrapper[7484]: I0312 20:57:21.164461 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-4tfmr\" (UID: \"4ebc9ee1-3913-4112-bb3f-c79f2c08032b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr"
Mar 12 20:57:21.179330 master-0 kubenswrapper[7484]: I0312 20:57:21.179292 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gg7v\" (UniqueName: \"kubernetes.io/projected/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-kube-api-access-7gg7v\") pod \"kube-state-metrics-68b88f8cb5-4tfmr\" (UID: \"4ebc9ee1-3913-4112-bb3f-c79f2c08032b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr"
Mar 12 20:57:21.264630 master-0 kubenswrapper[7484]: I0312 20:57:21.264468 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-lkmd7"
Mar 12 20:57:21.278842 master-0 kubenswrapper[7484]: I0312 20:57:21.278739 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-xh6r9_5471994f-769e-4124-b7d0-01f5358fc18f/etcd-operator/1.log"
Mar 12 20:57:21.305622 master-0 kubenswrapper[7484]: I0312 20:57:21.305564 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr"
Mar 12 20:57:21.414875 master-0 kubenswrapper[7484]: I0312 20:57:21.414376 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:57:21.414875 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:57:21.414875 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:57:21.414875 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:57:21.414875 master-0 kubenswrapper[7484]: I0312 20:57:21.414606 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:57:21.483592 master-0 kubenswrapper[7484]: I0312 20:57:21.482916 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-7d9c49f57b-tpvl4_98d99166-c42a-4169-87e8-4209570aec50/catalog-operator/0.log"
Mar 12 20:57:21.566590 master-0 kubenswrapper[7484]: I0312 20:57:21.566495 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ed1c4da2-564b-4354-a4ec-27b801098aa5-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-bdmlf\" (UID: \"ed1c4da2-564b-4354-a4ec-27b801098aa5\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-bdmlf"
Mar 12 20:57:21.570951 master-0 kubenswrapper[7484]: I0312 20:57:21.570908 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ed1c4da2-564b-4354-a4ec-27b801098aa5-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-bdmlf\" (UID: \"ed1c4da2-564b-4354-a4ec-27b801098aa5\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-bdmlf"
Mar 12 20:57:21.688049 master-0 kubenswrapper[7484]: I0312 20:57:21.687992 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-d64cfc9db-q9hnk_07330030-487d-4fa6-b5c3-67607355bbba/olm-operator/0.log"
Mar 12 20:57:21.719529 master-0 kubenswrapper[7484]: I0312 20:57:21.719438 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-lkmd7" event={"ID":"7667a111-e744-47b2-9603-3864347dc738","Type":"ContainerStarted","Data":"d50dfd713474f3f9326230f15b9aa86b517e198f4cbc3bcfca21ce09a517313c"}
Mar 12 20:57:21.748502 master-0 kubenswrapper[7484]: I0312 20:57:21.748433 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr"]
Mar 12 20:57:21.762423 master-0 kubenswrapper[7484]: W0312 20:57:21.762362 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ebc9ee1_3913_4112_bb3f_c79f2c08032b.slice/crio-ad71740d3e827c48a8ba7f63410cca1f844bad16f5548efadd42e759d9c9b402 WatchSource:0}: Error finding container ad71740d3e827c48a8ba7f63410cca1f844bad16f5548efadd42e759d9c9b402: Status 404 returned error can't find the container with id ad71740d3e827c48a8ba7f63410cca1f844bad16f5548efadd42e759d9c9b402
Mar 12 20:57:21.798985 master-0 kubenswrapper[7484]: I0312 20:57:21.798881 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-bdmlf"
Mar 12 20:57:21.889250 master-0 kubenswrapper[7484]: I0312 20:57:21.889148 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-cdcc8_54184647-6e9a-43f7-90b1-5d8815f8b1ab/kube-rbac-proxy/0.log"
Mar 12 20:57:22.081456 master-0 kubenswrapper[7484]: I0312 20:57:22.081313 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-cdcc8_54184647-6e9a-43f7-90b1-5d8815f8b1ab/package-server-manager/0.log"
Mar 12 20:57:22.285726 master-0 kubenswrapper[7484]: I0312 20:57:22.285615 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-659d778978-djtms_067fdca7-c61d-470c-8421-73e0b62df3e4/packageserver/0.log"
Mar 12 20:57:22.293017 master-0 kubenswrapper[7484]: I0312 20:57:22.292938 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-74cc79fd76-bdmlf"]
Mar 12 20:57:22.329131 master-0 kubenswrapper[7484]: W0312 20:57:22.328888 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded1c4da2_564b_4354_a4ec_27b801098aa5.slice/crio-6f73967ae1577400fe9f88cbace8a06fad8c0f1241e87ba67ef6053882fba199 WatchSource:0}: Error finding container 6f73967ae1577400fe9f88cbace8a06fad8c0f1241e87ba67ef6053882fba199: Status 404 returned error can't find the container with id 6f73967ae1577400fe9f88cbace8a06fad8c0f1241e87ba67ef6053882fba199
Mar 12 20:57:22.413350 master-0 kubenswrapper[7484]: I0312 20:57:22.413280 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:57:22.413350 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:57:22.413350 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:57:22.413350 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:57:22.413350 master-0 kubenswrapper[7484]: I0312 20:57:22.413344 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:57:22.728061 master-0 kubenswrapper[7484]: I0312 20:57:22.727982 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr" event={"ID":"4ebc9ee1-3913-4112-bb3f-c79f2c08032b","Type":"ContainerStarted","Data":"ad71740d3e827c48a8ba7f63410cca1f844bad16f5548efadd42e759d9c9b402"}
Mar 12 20:57:22.729857 master-0 kubenswrapper[7484]: I0312 20:57:22.729769 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-bdmlf" event={"ID":"ed1c4da2-564b-4354-a4ec-27b801098aa5","Type":"ContainerStarted","Data":"318c36be7557102e1c56b7f0c917e27048f072858a5123c4e1f5ba1c100fb35c"}
Mar 12 20:57:22.729857 master-0 kubenswrapper[7484]: I0312 20:57:22.729847 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-bdmlf" event={"ID":"ed1c4da2-564b-4354-a4ec-27b801098aa5","Type":"ContainerStarted","Data":"4f7db80ef730996c3eec78a1588e98fe12933ddec783577806315dc81fc72e84"}
Mar 12 20:57:22.730047 master-0 kubenswrapper[7484]: I0312 20:57:22.729863 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-bdmlf" event={"ID":"ed1c4da2-564b-4354-a4ec-27b801098aa5","Type":"ContainerStarted","Data":"6f73967ae1577400fe9f88cbace8a06fad8c0f1241e87ba67ef6053882fba199"}
Mar 12 20:57:22.731734 master-0 kubenswrapper[7484]: I0312 20:57:22.731697 7484 generic.go:334] "Generic (PLEG): container finished" podID="7667a111-e744-47b2-9603-3864347dc738" containerID="4ae9acc07c3f6ce3eca66b7339a23374d2c3e5674298f965efd90da0b1f1e7df" exitCode=0
Mar 12 20:57:22.731734 master-0 kubenswrapper[7484]: I0312 20:57:22.731726 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-lkmd7" event={"ID":"7667a111-e744-47b2-9603-3864347dc738","Type":"ContainerDied","Data":"4ae9acc07c3f6ce3eca66b7339a23374d2c3e5674298f965efd90da0b1f1e7df"}
Mar 12 20:57:23.415912 master-0 kubenswrapper[7484]: I0312 20:57:23.415587 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:57:23.415912 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:57:23.415912 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:57:23.415912 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:57:23.415912 master-0 kubenswrapper[7484]: I0312 20:57:23.415671 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:57:23.747144 master-0 kubenswrapper[7484]: I0312 20:57:23.746978 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-lkmd7" event={"ID":"7667a111-e744-47b2-9603-3864347dc738","Type":"ContainerStarted","Data":"6b8799a7bbfcb8a90b989e8e44064611a56d2eb2ff3f71972648a3a08914ccc2"}
Mar 12 20:57:23.747144 master-0 kubenswrapper[7484]: I0312 20:57:23.747017 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-lkmd7" event={"ID":"7667a111-e744-47b2-9603-3864347dc738","Type":"ContainerStarted","Data":"f0782267b7aac9264560bd9435560d2c9d81a6164403c368ef511610ac16cdd3"}
Mar 12 20:57:23.747144 master-0 kubenswrapper[7484]: I0312 20:57:23.747028 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr" event={"ID":"4ebc9ee1-3913-4112-bb3f-c79f2c08032b","Type":"ContainerStarted","Data":"41d7f4448f629bbfbbd5efdb21e18b1e708a16d4b0b65480e52a532b2dafffe7"}
Mar 12 20:57:23.747144 master-0 kubenswrapper[7484]: I0312 20:57:23.747037 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr" event={"ID":"4ebc9ee1-3913-4112-bb3f-c79f2c08032b","Type":"ContainerStarted","Data":"8e3de43df68def1dd93ca63e703f27f28081b6e6444ff8acebff1c44375446ca"}
Mar 12 20:57:23.747144 master-0 kubenswrapper[7484]: I0312 20:57:23.747046 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr" event={"ID":"4ebc9ee1-3913-4112-bb3f-c79f2c08032b","Type":"ContainerStarted","Data":"106d9a27d50e66fd20a875f0f6f7dd9860d2ddbdd69b090a2e5c2db38ba8ef3b"}
Mar 12 20:57:23.762307 master-0 kubenswrapper[7484]: I0312 20:57:23.762226 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-lkmd7" podStartSLOduration=2.7235407499999997 podStartE2EDuration="3.762202412s" podCreationTimestamp="2026-03-12 20:57:20 +0000 UTC" firstStartedPulling="2026-03-12 20:57:21.29775462 +0000 UTC m=+453.783023422" lastFinishedPulling="2026-03-12 20:57:22.336416282 +0000 UTC m=+454.821685084" observedRunningTime="2026-03-12 20:57:23.761849833 +0000 UTC m=+456.247118645" watchObservedRunningTime="2026-03-12 20:57:23.762202412 +0000 UTC m=+456.247471214"
Mar 12 20:57:23.788954 master-0 kubenswrapper[7484]: I0312 20:57:23.788763 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration"
pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr" podStartSLOduration=2.413104714 podStartE2EDuration="3.788743947s" podCreationTimestamp="2026-03-12 20:57:20 +0000 UTC" firstStartedPulling="2026-03-12 20:57:21.765272848 +0000 UTC m=+454.250541650" lastFinishedPulling="2026-03-12 20:57:23.140912071 +0000 UTC m=+455.626180883" observedRunningTime="2026-03-12 20:57:23.788648994 +0000 UTC m=+456.273917796" watchObservedRunningTime="2026-03-12 20:57:23.788743947 +0000 UTC m=+456.274012749" Mar 12 20:57:24.415933 master-0 kubenswrapper[7484]: I0312 20:57:24.415829 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:57:24.415933 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:57:24.415933 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:57:24.415933 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:57:24.417919 master-0 kubenswrapper[7484]: I0312 20:57:24.415998 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:57:24.757615 master-0 kubenswrapper[7484]: I0312 20:57:24.757433 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-bdmlf" event={"ID":"ed1c4da2-564b-4354-a4ec-27b801098aa5","Type":"ContainerStarted","Data":"e839329619c32cdd742ab952a0cc71a51095d4e53a488a1c01634e0456d695a7"} Mar 12 20:57:24.799260 master-0 kubenswrapper[7484]: I0312 20:57:24.796428 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-bdmlf" podStartSLOduration=3.433202938 
podStartE2EDuration="4.796387964s" podCreationTimestamp="2026-03-12 20:57:20 +0000 UTC" firstStartedPulling="2026-03-12 20:57:22.643191717 +0000 UTC m=+455.128460509" lastFinishedPulling="2026-03-12 20:57:24.006376723 +0000 UTC m=+456.491645535" observedRunningTime="2026-03-12 20:57:24.794091888 +0000 UTC m=+457.279360730" watchObservedRunningTime="2026-03-12 20:57:24.796387964 +0000 UTC m=+457.281656836" Mar 12 20:57:25.417795 master-0 kubenswrapper[7484]: I0312 20:57:25.415709 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:57:25.417795 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:57:25.417795 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:57:25.417795 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:57:25.417795 master-0 kubenswrapper[7484]: I0312 20:57:25.415774 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:57:26.414309 master-0 kubenswrapper[7484]: I0312 20:57:26.414244 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:57:26.414309 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:57:26.414309 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:57:26.414309 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:57:26.415226 master-0 kubenswrapper[7484]: I0312 20:57:26.415137 7484 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:57:27.414449 master-0 kubenswrapper[7484]: I0312 20:57:27.414341 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:57:27.414449 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:57:27.414449 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:57:27.414449 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:57:27.414449 master-0 kubenswrapper[7484]: I0312 20:57:27.414430 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:57:28.414778 master-0 kubenswrapper[7484]: I0312 20:57:28.414680 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:57:28.414778 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:57:28.414778 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:57:28.414778 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:57:28.415941 master-0 kubenswrapper[7484]: I0312 20:57:28.414784 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:57:29.037590 
master-0 kubenswrapper[7484]: I0312 20:57:29.037512 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-5bbfd655db-2tsb8"] Mar 12 20:57:29.039275 master-0 kubenswrapper[7484]: I0312 20:57:29.039222 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 20:57:29.045111 master-0 kubenswrapper[7484]: I0312 20:57:29.045060 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 12 20:57:29.045111 master-0 kubenswrapper[7484]: I0312 20:57:29.045088 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-p5qt4" Mar 12 20:57:29.046143 master-0 kubenswrapper[7484]: I0312 20:57:29.045959 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 12 20:57:29.046143 master-0 kubenswrapper[7484]: I0312 20:57:29.045981 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 12 20:57:29.046143 master-0 kubenswrapper[7484]: I0312 20:57:29.046078 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 12 20:57:29.046754 master-0 kubenswrapper[7484]: I0312 20:57:29.046707 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-4jamj9cd05on6" Mar 12 20:57:29.087715 master-0 kubenswrapper[7484]: I0312 20:57:29.087659 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-5bbfd655db-2tsb8"] Mar 12 20:57:29.091346 master-0 kubenswrapper[7484]: I0312 20:57:29.091294 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/33beea0b-f77b-4388-a9c8-5710f084f961-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 20:57:29.091490 master-0 kubenswrapper[7484]: I0312 20:57:29.091360 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/33beea0b-f77b-4388-a9c8-5710f084f961-metrics-server-audit-profiles\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 20:57:29.091490 master-0 kubenswrapper[7484]: I0312 20:57:29.091383 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-client-ca-bundle\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 20:57:29.091490 master-0 kubenswrapper[7484]: I0312 20:57:29.091404 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-secret-metrics-server-tls\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 20:57:29.091490 master-0 kubenswrapper[7484]: I0312 20:57:29.091424 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clmjl\" (UniqueName: \"kubernetes.io/projected/33beea0b-f77b-4388-a9c8-5710f084f961-kube-api-access-clmjl\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: 
\"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 20:57:29.091490 master-0 kubenswrapper[7484]: I0312 20:57:29.091457 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-secret-metrics-client-certs\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 20:57:29.091490 master-0 kubenswrapper[7484]: I0312 20:57:29.091488 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/33beea0b-f77b-4388-a9c8-5710f084f961-audit-log\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 20:57:29.193122 master-0 kubenswrapper[7484]: I0312 20:57:29.193020 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/33beea0b-f77b-4388-a9c8-5710f084f961-audit-log\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 20:57:29.193356 master-0 kubenswrapper[7484]: I0312 20:57:29.193300 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33beea0b-f77b-4388-a9c8-5710f084f961-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 20:57:29.193656 master-0 kubenswrapper[7484]: I0312 20:57:29.193620 7484 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/33beea0b-f77b-4388-a9c8-5710f084f961-metrics-server-audit-profiles\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 20:57:29.193736 master-0 kubenswrapper[7484]: I0312 20:57:29.193658 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/33beea0b-f77b-4388-a9c8-5710f084f961-audit-log\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 20:57:29.193736 master-0 kubenswrapper[7484]: I0312 20:57:29.193683 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-client-ca-bundle\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 20:57:29.193820 master-0 kubenswrapper[7484]: I0312 20:57:29.193788 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-secret-metrics-server-tls\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 20:57:29.193870 master-0 kubenswrapper[7484]: I0312 20:57:29.193853 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clmjl\" (UniqueName: \"kubernetes.io/projected/33beea0b-f77b-4388-a9c8-5710f084f961-kube-api-access-clmjl\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" 
Mar 12 20:57:29.193964 master-0 kubenswrapper[7484]: I0312 20:57:29.193947 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-secret-metrics-client-certs\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 20:57:29.194303 master-0 kubenswrapper[7484]: I0312 20:57:29.194219 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33beea0b-f77b-4388-a9c8-5710f084f961-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 20:57:29.195022 master-0 kubenswrapper[7484]: I0312 20:57:29.194975 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/33beea0b-f77b-4388-a9c8-5710f084f961-metrics-server-audit-profiles\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 20:57:29.197063 master-0 kubenswrapper[7484]: I0312 20:57:29.197023 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-secret-metrics-server-tls\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 20:57:29.200348 master-0 kubenswrapper[7484]: I0312 20:57:29.198784 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: 
\"kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-secret-metrics-client-certs\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 20:57:29.206154 master-0 kubenswrapper[7484]: I0312 20:57:29.206115 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-client-ca-bundle\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 20:57:29.212424 master-0 kubenswrapper[7484]: I0312 20:57:29.212381 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clmjl\" (UniqueName: \"kubernetes.io/projected/33beea0b-f77b-4388-a9c8-5710f084f961-kube-api-access-clmjl\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 20:57:29.360167 master-0 kubenswrapper[7484]: I0312 20:57:29.360047 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 20:57:29.415323 master-0 kubenswrapper[7484]: I0312 20:57:29.415246 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:57:29.415323 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:57:29.415323 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:57:29.415323 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:57:29.416098 master-0 kubenswrapper[7484]: I0312 20:57:29.415330 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:57:29.868854 master-0 kubenswrapper[7484]: I0312 20:57:29.868752 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-5bbfd655db-2tsb8"] Mar 12 20:57:29.876001 master-0 kubenswrapper[7484]: W0312 20:57:29.875939 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33beea0b_f77b_4388_a9c8_5710f084f961.slice/crio-c3b62ea86d8f9e58d8904eae05a729e79a10c095aa97e46111824c4941e548aa WatchSource:0}: Error finding container c3b62ea86d8f9e58d8904eae05a729e79a10c095aa97e46111824c4941e548aa: Status 404 returned error can't find the container with id c3b62ea86d8f9e58d8904eae05a729e79a10c095aa97e46111824c4941e548aa Mar 12 20:57:30.414784 master-0 kubenswrapper[7484]: I0312 20:57:30.414686 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 12 20:57:30.414784 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:57:30.414784 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:57:30.414784 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:57:30.415277 master-0 kubenswrapper[7484]: I0312 20:57:30.414798 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:57:30.800102 master-0 kubenswrapper[7484]: I0312 20:57:30.799913 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" event={"ID":"33beea0b-f77b-4388-a9c8-5710f084f961","Type":"ContainerStarted","Data":"c3b62ea86d8f9e58d8904eae05a729e79a10c095aa97e46111824c4941e548aa"} Mar 12 20:57:31.414057 master-0 kubenswrapper[7484]: I0312 20:57:31.413632 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:57:31.414057 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:57:31.414057 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:57:31.414057 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:57:31.414057 master-0 kubenswrapper[7484]: I0312 20:57:31.413729 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:57:32.414901 master-0 kubenswrapper[7484]: I0312 20:57:32.414784 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:57:32.414901 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:57:32.414901 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:57:32.414901 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:57:32.415913 master-0 kubenswrapper[7484]: I0312 20:57:32.414927 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:57:32.819288 master-0 kubenswrapper[7484]: I0312 20:57:32.819206 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" event={"ID":"33beea0b-f77b-4388-a9c8-5710f084f961","Type":"ContainerStarted","Data":"41a3e30c6d901d9b64d6fa8e2b3f70dcb07dc618b579112d28d71b51408b9a9a"} Mar 12 20:57:32.841879 master-0 kubenswrapper[7484]: I0312 20:57:32.841692 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" podStartSLOduration=1.889124243 podStartE2EDuration="3.841656804s" podCreationTimestamp="2026-03-12 20:57:29 +0000 UTC" firstStartedPulling="2026-03-12 20:57:29.879230937 +0000 UTC m=+462.364499739" lastFinishedPulling="2026-03-12 20:57:31.831763498 +0000 UTC m=+464.317032300" observedRunningTime="2026-03-12 20:57:32.838090748 +0000 UTC m=+465.323359600" watchObservedRunningTime="2026-03-12 20:57:32.841656804 +0000 UTC m=+465.326925656" Mar 12 20:57:33.413932 master-0 kubenswrapper[7484]: I0312 20:57:33.413857 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 
500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:57:33.413932 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:57:33.413932 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:57:33.413932 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:57:33.414732 master-0 kubenswrapper[7484]: I0312 20:57:33.413969 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:57:34.415872 master-0 kubenswrapper[7484]: I0312 20:57:34.415711 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:57:34.415872 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:57:34.415872 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:57:34.415872 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:57:34.416647 master-0 kubenswrapper[7484]: I0312 20:57:34.416615 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:57:35.416430 master-0 kubenswrapper[7484]: I0312 20:57:35.416345 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:57:35.416430 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:57:35.416430 master-0 kubenswrapper[7484]: 
[+]process-running ok Mar 12 20:57:35.416430 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:57:35.417689 master-0 kubenswrapper[7484]: I0312 20:57:35.417638 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:57:36.414166 master-0 kubenswrapper[7484]: I0312 20:57:36.414111 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:57:36.414166 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:57:36.414166 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:57:36.414166 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:57:36.414586 master-0 kubenswrapper[7484]: I0312 20:57:36.414173 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:57:37.412863 master-0 kubenswrapper[7484]: I0312 20:57:37.412794 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:57:37.412863 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:57:37.412863 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:57:37.412863 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:57:37.412863 master-0 kubenswrapper[7484]: I0312 20:57:37.412866 7484 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:57:38.415284 master-0 kubenswrapper[7484]: I0312 20:57:38.415173 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:57:38.415284 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:57:38.415284 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:57:38.415284 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:57:38.417872 master-0 kubenswrapper[7484]: I0312 20:57:38.415294 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:57:39.413972 master-0 kubenswrapper[7484]: I0312 20:57:39.413875 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:57:39.413972 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:57:39.413972 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:57:39.413972 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:57:39.414399 master-0 kubenswrapper[7484]: I0312 20:57:39.413977 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:57:40.415061 master-0 kubenswrapper[7484]: I0312 20:57:40.415017 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:57:40.415061 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:57:40.415061 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:57:40.415061 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:57:40.416597 master-0 kubenswrapper[7484]: I0312 20:57:40.416536 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:57:41.413917 master-0 kubenswrapper[7484]: I0312 20:57:41.413836 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:57:41.413917 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:57:41.413917 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:57:41.413917 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:57:41.413917 master-0 kubenswrapper[7484]: I0312 20:57:41.413904 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:57:42.417015 master-0 kubenswrapper[7484]: I0312 20:57:42.416915 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:57:42.417015 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:57:42.417015 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:57:42.417015 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:57:42.418068 master-0 kubenswrapper[7484]: I0312 20:57:42.417025 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:57:43.414998 master-0 kubenswrapper[7484]: I0312 20:57:43.414872 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:57:43.414998 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:57:43.414998 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:57:43.414998 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:57:43.415523 master-0 kubenswrapper[7484]: I0312 20:57:43.415029 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:57:44.416388 master-0 kubenswrapper[7484]: I0312 20:57:44.416320 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:57:44.416388 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:57:44.416388 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:57:44.416388 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:57:44.417044 master-0 kubenswrapper[7484]: I0312 20:57:44.416423 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:57:45.415206 master-0 kubenswrapper[7484]: I0312 20:57:45.415102 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:57:45.415206 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:57:45.415206 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:57:45.415206 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:57:45.415536 master-0 kubenswrapper[7484]: I0312 20:57:45.415256 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:57:46.414795 master-0 kubenswrapper[7484]: I0312 20:57:46.414686 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:57:46.414795 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:57:46.414795 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:57:46.414795 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:57:46.418465 master-0 kubenswrapper[7484]: I0312 20:57:46.414838 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:57:47.414257 master-0 kubenswrapper[7484]: I0312 20:57:47.414194 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:57:47.414257 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:57:47.414257 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:57:47.414257 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:57:47.414599 master-0 kubenswrapper[7484]: I0312 20:57:47.414291 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:57:48.415648 master-0 kubenswrapper[7484]: I0312 20:57:48.415549 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:57:48.415648 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:57:48.415648 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:57:48.415648 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:57:48.416752 master-0 kubenswrapper[7484]: I0312 20:57:48.415645 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:57:49.361250 master-0 kubenswrapper[7484]: I0312 20:57:49.361143 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8"
Mar 12 20:57:49.361250 master-0 kubenswrapper[7484]: I0312 20:57:49.361248 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8"
Mar 12 20:57:49.414192 master-0 kubenswrapper[7484]: I0312 20:57:49.414120 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:57:49.414192 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:57:49.414192 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:57:49.414192 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:57:49.414750 master-0 kubenswrapper[7484]: I0312 20:57:49.414205 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:57:50.413876 master-0 kubenswrapper[7484]: I0312 20:57:50.413792 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:57:50.413876 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:57:50.413876 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:57:50.413876 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:57:50.413876 master-0 kubenswrapper[7484]: I0312 20:57:50.413881 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:57:51.415670 master-0 kubenswrapper[7484]: I0312 20:57:51.415554 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:57:51.415670 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:57:51.415670 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:57:51.415670 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:57:51.415670 master-0 kubenswrapper[7484]: I0312 20:57:51.415644 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:57:52.414897 master-0 kubenswrapper[7484]: I0312 20:57:52.414231 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:57:52.414897 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:57:52.414897 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:57:52.414897 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:57:52.414897 master-0 kubenswrapper[7484]: I0312 20:57:52.414387 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:57:53.416026 master-0 kubenswrapper[7484]: I0312 20:57:53.415965 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:57:53.416026 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:57:53.416026 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:57:53.416026 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:57:53.416026 master-0 kubenswrapper[7484]: I0312 20:57:53.416043 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:57:54.414416 master-0 kubenswrapper[7484]: I0312 20:57:54.414314 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:57:54.414416 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:57:54.414416 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:57:54.414416 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:57:54.415005 master-0 kubenswrapper[7484]: I0312 20:57:54.414443 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:57:55.415932 master-0 kubenswrapper[7484]: I0312 20:57:55.415362 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:57:55.415932 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:57:55.415932 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:57:55.415932 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:57:55.415932 master-0 kubenswrapper[7484]: I0312 20:57:55.415487 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:57:56.414295 master-0 kubenswrapper[7484]: I0312 20:57:56.414201 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:57:56.414295 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:57:56.414295 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:57:56.414295 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:57:56.414800 master-0 kubenswrapper[7484]: I0312 20:57:56.414318 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:57:57.415511 master-0 kubenswrapper[7484]: I0312 20:57:57.415422 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:57:57.415511 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:57:57.415511 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:57:57.415511 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:57:57.416596 master-0 kubenswrapper[7484]: I0312 20:57:57.415525 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:57:58.414558 master-0 kubenswrapper[7484]: I0312 20:57:58.414417 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:57:58.414558 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:57:58.414558 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:57:58.414558 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:57:58.415307 master-0 kubenswrapper[7484]: I0312 20:57:58.414582 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:57:59.414971 master-0 kubenswrapper[7484]: I0312 20:57:59.414889 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:57:59.414971 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:57:59.414971 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:57:59.414971 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:57:59.416016 master-0 kubenswrapper[7484]: I0312 20:57:59.414977 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:58:00.048049 master-0 kubenswrapper[7484]: I0312 20:58:00.047976 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-qpf68_2b71f537-1cc2-4645-8e50-23941635457c/ingress-operator/1.log"
Mar 12 20:58:00.049255 master-0 kubenswrapper[7484]: I0312 20:58:00.049204 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-qpf68_2b71f537-1cc2-4645-8e50-23941635457c/ingress-operator/0.log"
Mar 12 20:58:00.049391 master-0 kubenswrapper[7484]: I0312 20:58:00.049264 7484 generic.go:334] "Generic (PLEG): container finished" podID="2b71f537-1cc2-4645-8e50-23941635457c" containerID="72247b0dd06b6af33787ec8f35afadef48c9b0d4221e98fe5435e01a0186d2bf" exitCode=1
Mar 12 20:58:00.049391 master-0 kubenswrapper[7484]: I0312 20:58:00.049338 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" event={"ID":"2b71f537-1cc2-4645-8e50-23941635457c","Type":"ContainerDied","Data":"72247b0dd06b6af33787ec8f35afadef48c9b0d4221e98fe5435e01a0186d2bf"}
Mar 12 20:58:00.049590 master-0 kubenswrapper[7484]: I0312 20:58:00.049464 7484 scope.go:117] "RemoveContainer" containerID="ae373579849ec0d4a33d66c2a3f6f43fccdff39968b29197dcdc4792d7cd63f3"
Mar 12 20:58:00.050478 master-0 kubenswrapper[7484]: I0312 20:58:00.050384 7484 scope.go:117] "RemoveContainer" containerID="72247b0dd06b6af33787ec8f35afadef48c9b0d4221e98fe5435e01a0186d2bf"
Mar 12 20:58:00.051033 master-0 kubenswrapper[7484]: E0312 20:58:00.050937 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-qpf68_openshift-ingress-operator(2b71f537-1cc2-4645-8e50-23941635457c)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" podUID="2b71f537-1cc2-4645-8e50-23941635457c"
Mar 12 20:58:00.414794 master-0 kubenswrapper[7484]: I0312 20:58:00.414698 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:58:00.414794 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:58:00.414794 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:58:00.414794 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:58:00.414794 master-0 kubenswrapper[7484]: I0312 20:58:00.414785 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:58:01.059506 master-0 kubenswrapper[7484]: I0312 20:58:01.059431 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-qpf68_2b71f537-1cc2-4645-8e50-23941635457c/ingress-operator/1.log"
Mar 12 20:58:01.415205 master-0 kubenswrapper[7484]: I0312 20:58:01.415139 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:58:01.415205 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:58:01.415205 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:58:01.415205 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:58:01.416221 master-0 kubenswrapper[7484]: I0312 20:58:01.416115 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:58:02.414347 master-0 kubenswrapper[7484]: I0312 20:58:02.414242 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:58:02.414347 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:58:02.414347 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:58:02.414347 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:58:02.414347 master-0 kubenswrapper[7484]: I0312 20:58:02.414350 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:58:03.414140 master-0 kubenswrapper[7484]: I0312 20:58:03.414080 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:58:03.414140 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:58:03.414140 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:58:03.414140 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:58:03.415093 master-0 kubenswrapper[7484]: I0312 20:58:03.414167 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:58:04.415826 master-0 kubenswrapper[7484]: I0312 20:58:04.415672 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:58:04.415826 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:58:04.415826 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:58:04.415826 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:58:04.416491 master-0 kubenswrapper[7484]: I0312 20:58:04.415899 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:58:05.415645 master-0 kubenswrapper[7484]: I0312 20:58:05.415557 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:58:05.415645 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:58:05.415645 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:58:05.415645 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:58:05.416632 master-0 kubenswrapper[7484]: I0312 20:58:05.415663 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:58:06.413784 master-0 kubenswrapper[7484]: I0312 20:58:06.413711 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:58:06.413784 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:58:06.413784 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:58:06.413784 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:58:06.413784 master-0 kubenswrapper[7484]: I0312 20:58:06.413772 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:58:07.414224 master-0 kubenswrapper[7484]: I0312 20:58:07.414143 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:58:07.414224 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:58:07.414224 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:58:07.414224 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:58:07.414980 master-0 kubenswrapper[7484]: I0312 20:58:07.414274 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:58:08.414790 master-0 kubenswrapper[7484]: I0312 20:58:08.414685 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:58:08.414790 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:58:08.414790 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:58:08.414790 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:58:08.416069 master-0 kubenswrapper[7484]: I0312 20:58:08.414798 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:58:09.369821 master-0 kubenswrapper[7484]: I0312 20:58:09.369737 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8"
Mar 12 20:58:09.378660 master-0 kubenswrapper[7484]: I0312 20:58:09.378606 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8"
Mar 12 20:58:09.414113 master-0 kubenswrapper[7484]: I0312 20:58:09.414060 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:58:09.414113 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:58:09.414113 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:58:09.414113 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:58:09.414366 master-0 kubenswrapper[7484]: I0312 20:58:09.414120 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:58:10.414424 master-0 kubenswrapper[7484]: I0312 20:58:10.414271 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:58:10.414424 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:58:10.414424 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:58:10.414424 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:58:10.415652 master-0 kubenswrapper[7484]: I0312 20:58:10.414458 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:58:11.413399 master-0 kubenswrapper[7484]: I0312 20:58:11.413344 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:58:11.413399 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:58:11.413399 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:58:11.413399 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:58:11.413751 master-0 kubenswrapper[7484]: I0312 20:58:11.413419 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:58:12.413685 master-0 kubenswrapper[7484]: I0312 20:58:12.413609 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:58:12.413685 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:58:12.413685 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:58:12.413685 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:58:12.414442 master-0 kubenswrapper[7484]: I0312 20:58:12.413683 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:58:13.415305 master-0 kubenswrapper[7484]: I0312 20:58:13.415214 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:58:13.415305 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:58:13.415305 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:58:13.415305 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:58:13.415305 master-0 kubenswrapper[7484]: I0312 20:58:13.415303 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:58:13.734881 master-0 kubenswrapper[7484]: I0312 20:58:13.734626 7484 scope.go:117] "RemoveContainer" containerID="72247b0dd06b6af33787ec8f35afadef48c9b0d4221e98fe5435e01a0186d2bf"
Mar 12 20:58:14.162757 master-0 kubenswrapper[7484]: I0312 20:58:14.162707 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-qpf68_2b71f537-1cc2-4645-8e50-23941635457c/ingress-operator/1.log"
Mar 12 20:58:14.163237 master-0 kubenswrapper[7484]: I0312 20:58:14.163191 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" event={"ID":"2b71f537-1cc2-4645-8e50-23941635457c","Type":"ContainerStarted","Data":"2d9fbcbbc403da2c9b3c1deb75c0442531b4adcea162653fcf9df2ae550aae8d"}
Mar 12 20:58:14.413296 master-0 kubenswrapper[7484]: I0312 20:58:14.413179 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:58:14.413296 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:58:14.413296 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:58:14.413296 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:58:14.413296 master-0 kubenswrapper[7484]: I0312 20:58:14.413260 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:58:15.413669 master-0 kubenswrapper[7484]: I0312 20:58:15.413580 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:58:15.413669 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:58:15.413669 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:58:15.413669 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:58:15.414582 master-0 kubenswrapper[7484]: I0312 20:58:15.413692 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:58:16.413291 master-0 kubenswrapper[7484]: I0312 20:58:16.413232 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:58:16.413291 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:58:16.413291 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:58:16.413291 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:58:16.413574 master-0 kubenswrapper[7484]: I0312 20:58:16.413313 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:58:17.414909 master-0 kubenswrapper[7484]: I0312 20:58:17.414825 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:58:17.414909 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:58:17.414909 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:58:17.414909 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:58:17.415893 master-0 kubenswrapper[7484]: I0312 20:58:17.415244 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:58:18.414611 master-0 kubenswrapper[7484]: I0312 20:58:18.414526 7484 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:58:18.414611 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:58:18.414611 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:58:18.414611 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:58:18.415684 master-0 kubenswrapper[7484]: I0312 20:58:18.414620 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:58:19.415261 master-0 kubenswrapper[7484]: I0312 20:58:19.415179 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:58:19.415261 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:58:19.415261 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:58:19.415261 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:58:19.416201 master-0 kubenswrapper[7484]: I0312 20:58:19.415274 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:58:20.414829 master-0 kubenswrapper[7484]: I0312 20:58:20.414694 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 12 20:58:20.414829 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:58:20.414829 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:58:20.414829 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:58:20.415363 master-0 kubenswrapper[7484]: I0312 20:58:20.414843 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:58:21.413525 master-0 kubenswrapper[7484]: I0312 20:58:21.413460 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:58:21.413525 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:58:21.413525 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:58:21.413525 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:58:21.414045 master-0 kubenswrapper[7484]: I0312 20:58:21.413554 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:58:22.415072 master-0 kubenswrapper[7484]: I0312 20:58:22.414995 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:58:22.415072 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:58:22.415072 master-0 kubenswrapper[7484]: [+]process-running ok 
Mar 12 20:58:22.415072 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:58:22.416104 master-0 kubenswrapper[7484]: I0312 20:58:22.415089 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:58:23.413207 master-0 kubenswrapper[7484]: I0312 20:58:23.413156 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:58:23.413207 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:58:23.413207 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:58:23.413207 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:58:23.413670 master-0 kubenswrapper[7484]: I0312 20:58:23.413632 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:58:24.414060 master-0 kubenswrapper[7484]: I0312 20:58:24.413993 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:58:24.414060 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:58:24.414060 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:58:24.414060 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:58:24.414582 master-0 kubenswrapper[7484]: I0312 20:58:24.414082 7484 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:58:25.415000 master-0 kubenswrapper[7484]: I0312 20:58:25.414933 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:58:25.415000 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:58:25.415000 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:58:25.415000 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:58:25.416062 master-0 kubenswrapper[7484]: I0312 20:58:25.415019 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:58:26.414258 master-0 kubenswrapper[7484]: I0312 20:58:26.414161 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:58:26.414258 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:58:26.414258 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:58:26.414258 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:58:26.414774 master-0 kubenswrapper[7484]: I0312 20:58:26.414265 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:58:27.415367 
master-0 kubenswrapper[7484]: I0312 20:58:27.415254 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:58:27.415367 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:58:27.415367 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:58:27.415367 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:58:27.416564 master-0 kubenswrapper[7484]: I0312 20:58:27.415399 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:58:28.414997 master-0 kubenswrapper[7484]: I0312 20:58:28.414878 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:58:28.414997 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:58:28.414997 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:58:28.414997 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:58:28.416289 master-0 kubenswrapper[7484]: I0312 20:58:28.415004 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:58:29.414140 master-0 kubenswrapper[7484]: I0312 20:58:29.414050 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:58:29.414140 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:58:29.414140 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:58:29.414140 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:58:29.414691 master-0 kubenswrapper[7484]: I0312 20:58:29.414152 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:58:30.418491 master-0 kubenswrapper[7484]: I0312 20:58:30.418383 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:58:30.418491 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:58:30.418491 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:58:30.418491 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:58:30.419545 master-0 kubenswrapper[7484]: I0312 20:58:30.418520 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:58:31.414209 master-0 kubenswrapper[7484]: I0312 20:58:31.414127 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:58:31.414209 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:58:31.414209 master-0 
kubenswrapper[7484]: [+]process-running ok Mar 12 20:58:31.414209 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:58:31.414209 master-0 kubenswrapper[7484]: I0312 20:58:31.414206 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:58:32.414697 master-0 kubenswrapper[7484]: I0312 20:58:32.414558 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:58:32.414697 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:58:32.414697 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:58:32.414697 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:58:32.416114 master-0 kubenswrapper[7484]: I0312 20:58:32.414754 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:58:33.414647 master-0 kubenswrapper[7484]: I0312 20:58:33.414525 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:58:33.414647 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:58:33.414647 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:58:33.414647 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:58:33.416035 master-0 kubenswrapper[7484]: I0312 20:58:33.414717 7484 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:58:34.415099 master-0 kubenswrapper[7484]: I0312 20:58:34.414993 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:58:34.415099 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:58:34.415099 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:58:34.415099 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:58:34.416111 master-0 kubenswrapper[7484]: I0312 20:58:34.415123 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:58:35.412781 master-0 kubenswrapper[7484]: I0312 20:58:35.412725 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:58:35.412781 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:58:35.412781 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:58:35.412781 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:58:35.413103 master-0 kubenswrapper[7484]: I0312 20:58:35.412792 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 12 20:58:36.414506 master-0 kubenswrapper[7484]: I0312 20:58:36.414405 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:58:36.414506 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:58:36.414506 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:58:36.414506 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:58:36.414506 master-0 kubenswrapper[7484]: I0312 20:58:36.414500 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:58:37.414780 master-0 kubenswrapper[7484]: I0312 20:58:37.414667 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:58:37.414780 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:58:37.414780 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:58:37.414780 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:58:37.414780 master-0 kubenswrapper[7484]: I0312 20:58:37.414766 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:58:38.416200 master-0 kubenswrapper[7484]: I0312 20:58:38.415954 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:58:38.416200 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:58:38.416200 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:58:38.416200 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:58:38.417687 master-0 kubenswrapper[7484]: I0312 20:58:38.417600 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:58:39.414371 master-0 kubenswrapper[7484]: I0312 20:58:39.414273 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:58:39.414371 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:58:39.414371 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:58:39.414371 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:58:39.414371 master-0 kubenswrapper[7484]: I0312 20:58:39.414358 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:58:40.414796 master-0 kubenswrapper[7484]: I0312 20:58:40.414687 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:58:40.414796 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 
20:58:40.414796 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:58:40.414796 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:58:40.414796 master-0 kubenswrapper[7484]: I0312 20:58:40.414762 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:58:41.416664 master-0 kubenswrapper[7484]: I0312 20:58:41.416496 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:58:41.416664 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:58:41.416664 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:58:41.416664 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:58:41.418016 master-0 kubenswrapper[7484]: I0312 20:58:41.416668 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:58:42.415446 master-0 kubenswrapper[7484]: I0312 20:58:42.415369 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:58:42.415446 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:58:42.415446 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:58:42.415446 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:58:42.415446 master-0 kubenswrapper[7484]: I0312 20:58:42.415450 
7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:58:43.414644 master-0 kubenswrapper[7484]: I0312 20:58:43.414560 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:58:43.414644 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:58:43.414644 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:58:43.414644 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:58:43.414644 master-0 kubenswrapper[7484]: I0312 20:58:43.414639 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:58:44.415168 master-0 kubenswrapper[7484]: I0312 20:58:44.415080 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:58:44.415168 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:58:44.415168 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:58:44.415168 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:58:44.416201 master-0 kubenswrapper[7484]: I0312 20:58:44.415173 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 12 20:58:45.414936 master-0 kubenswrapper[7484]: I0312 20:58:45.414795 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:58:45.414936 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:58:45.414936 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:58:45.414936 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:58:45.416157 master-0 kubenswrapper[7484]: I0312 20:58:45.414926 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:58:46.414462 master-0 kubenswrapper[7484]: I0312 20:58:46.414343 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:58:46.414462 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:58:46.414462 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:58:46.414462 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:58:46.416563 master-0 kubenswrapper[7484]: I0312 20:58:46.414488 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:58:47.416772 master-0 kubenswrapper[7484]: I0312 20:58:47.415737 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:58:47.416772 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:58:47.416772 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:58:47.416772 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:58:47.416772 master-0 kubenswrapper[7484]: I0312 20:58:47.415915 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:58:48.416427 master-0 kubenswrapper[7484]: I0312 20:58:48.415980 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:58:48.416427 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 20:58:48.416427 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 20:58:48.416427 master-0 kubenswrapper[7484]: healthz check failed Mar 12 20:58:48.416427 master-0 kubenswrapper[7484]: I0312 20:58:48.416120 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 20:58:49.415299 master-0 kubenswrapper[7484]: I0312 20:58:49.415192 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 20:58:49.415299 master-0 kubenswrapper[7484]: 
[-]has-synced failed: reason withheld
Mar 12 20:58:49.415299 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:58:49.415299 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:58:49.415299 master-0 kubenswrapper[7484]: I0312 20:58:49.415303 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:58:50.416345 master-0 kubenswrapper[7484]: I0312 20:58:50.416240 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:58:50.416345 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:58:50.416345 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:58:50.416345 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:58:50.416345 master-0 kubenswrapper[7484]: I0312 20:58:50.416342 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:58:51.414604 master-0 kubenswrapper[7484]: I0312 20:58:51.414517 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:58:51.414604 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:58:51.414604 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:58:51.414604 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:58:51.415617 master-0 kubenswrapper[7484]: I0312 20:58:51.414605 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:58:52.415635 master-0 kubenswrapper[7484]: I0312 20:58:52.415359 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:58:52.415635 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:58:52.415635 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:58:52.415635 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:58:52.415635 master-0 kubenswrapper[7484]: I0312 20:58:52.415631 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:58:53.414666 master-0 kubenswrapper[7484]: I0312 20:58:53.414596 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:58:53.414666 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:58:53.414666 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:58:53.414666 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:58:53.415062 master-0 kubenswrapper[7484]: I0312 20:58:53.414695 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:58:54.414531 master-0 kubenswrapper[7484]: I0312 20:58:54.414411 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:58:54.414531 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:58:54.414531 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:58:54.414531 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:58:54.414531 master-0 kubenswrapper[7484]: I0312 20:58:54.414519 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:58:55.413074 master-0 kubenswrapper[7484]: I0312 20:58:55.413008 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:58:55.413074 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:58:55.413074 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:58:55.413074 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:58:55.413443 master-0 kubenswrapper[7484]: I0312 20:58:55.413079 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:58:56.414541 master-0 kubenswrapper[7484]: I0312 20:58:56.414464 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:58:56.414541 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:58:56.414541 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:58:56.414541 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:58:56.414541 master-0 kubenswrapper[7484]: I0312 20:58:56.414529 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:58:57.414071 master-0 kubenswrapper[7484]: I0312 20:58:57.413977 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:58:57.414071 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:58:57.414071 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:58:57.414071 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:58:57.415395 master-0 kubenswrapper[7484]: I0312 20:58:57.414120 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:58:58.415430 master-0 kubenswrapper[7484]: I0312 20:58:58.415333 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:58:58.415430 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:58:58.415430 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:58:58.415430 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:58:58.416951 master-0 kubenswrapper[7484]: I0312 20:58:58.415429 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:58:59.414351 master-0 kubenswrapper[7484]: I0312 20:58:59.414262 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:58:59.414351 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:58:59.414351 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:58:59.414351 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:58:59.414779 master-0 kubenswrapper[7484]: I0312 20:58:59.414353 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:59:00.415740 master-0 kubenswrapper[7484]: I0312 20:59:00.415587 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:59:00.415740 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:59:00.415740 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:59:00.415740 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:59:00.416779 master-0 kubenswrapper[7484]: I0312 20:59:00.415770 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:59:01.414045 master-0 kubenswrapper[7484]: I0312 20:59:01.413953 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:59:01.414045 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:59:01.414045 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:59:01.414045 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:59:01.414471 master-0 kubenswrapper[7484]: I0312 20:59:01.414044 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:59:02.413721 master-0 kubenswrapper[7484]: I0312 20:59:02.413647 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:59:02.413721 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:59:02.413721 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:59:02.413721 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:59:02.413721 master-0 kubenswrapper[7484]: I0312 20:59:02.413711 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:59:03.414625 master-0 kubenswrapper[7484]: I0312 20:59:03.414525 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:59:03.414625 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:59:03.414625 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:59:03.414625 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:59:03.416836 master-0 kubenswrapper[7484]: I0312 20:59:03.414638 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:59:04.415374 master-0 kubenswrapper[7484]: I0312 20:59:04.415264 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:59:04.415374 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:59:04.415374 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:59:04.415374 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:59:04.415374 master-0 kubenswrapper[7484]: I0312 20:59:04.415363 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:59:05.415155 master-0 kubenswrapper[7484]: I0312 20:59:05.415072 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:59:05.415155 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:59:05.415155 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:59:05.415155 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:59:05.415902 master-0 kubenswrapper[7484]: I0312 20:59:05.415168 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:59:06.415126 master-0 kubenswrapper[7484]: I0312 20:59:06.415057 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:59:06.415126 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:59:06.415126 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:59:06.415126 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:59:06.415438 master-0 kubenswrapper[7484]: I0312 20:59:06.415146 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:59:07.414101 master-0 kubenswrapper[7484]: I0312 20:59:07.414042 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:59:07.414101 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:59:07.414101 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:59:07.414101 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:59:07.415911 master-0 kubenswrapper[7484]: I0312 20:59:07.414638 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:59:08.419203 master-0 kubenswrapper[7484]: I0312 20:59:08.419109 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:59:08.419203 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:59:08.419203 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:59:08.419203 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:59:08.420161 master-0 kubenswrapper[7484]: I0312 20:59:08.419196 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:59:09.414539 master-0 kubenswrapper[7484]: I0312 20:59:09.414477 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:59:09.414539 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:59:09.414539 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:59:09.414539 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:59:09.417076 master-0 kubenswrapper[7484]: I0312 20:59:09.416571 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:59:10.415327 master-0 kubenswrapper[7484]: I0312 20:59:10.415170 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:59:10.415327 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:59:10.415327 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:59:10.415327 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:59:10.416880 master-0 kubenswrapper[7484]: I0312 20:59:10.415341 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:59:11.416704 master-0 kubenswrapper[7484]: I0312 20:59:11.416549 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:59:11.416704 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:59:11.416704 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:59:11.416704 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:59:11.419133 master-0 kubenswrapper[7484]: I0312 20:59:11.416767 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:59:12.417409 master-0 kubenswrapper[7484]: I0312 20:59:12.417321 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:59:12.417409 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:59:12.417409 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:59:12.417409 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:59:12.417962 master-0 kubenswrapper[7484]: I0312 20:59:12.417429 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:59:13.414475 master-0 kubenswrapper[7484]: I0312 20:59:13.414367 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 20:59:13.414475 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 20:59:13.414475 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 20:59:13.414475 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 20:59:13.414475 master-0 kubenswrapper[7484]: I0312 20:59:13.414462 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 20:59:13.415068 master-0 kubenswrapper[7484]: I0312 20:59:13.414534 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57"
Mar 12 20:59:13.415510 master-0 kubenswrapper[7484]: I0312 20:59:13.415455 7484 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"41145e0fa78e157774eb7d7a70c1dca5f300d506a37a6e9227272112a6ab2153"} pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" containerMessage="Container router failed startup probe, will be restarted"
Mar 12 20:59:13.415604 master-0 kubenswrapper[7484]: I0312 20:59:13.415528 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" containerID="cri-o://41145e0fa78e157774eb7d7a70c1dca5f300d506a37a6e9227272112a6ab2153" gracePeriod=3600
Mar 12 21:00:00.258077 master-0 kubenswrapper[7484]: I0312 21:00:00.258003 7484 generic.go:334] "Generic (PLEG): container finished" podID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerID="41145e0fa78e157774eb7d7a70c1dca5f300d506a37a6e9227272112a6ab2153" exitCode=0
Mar 12 21:00:00.258077 master-0 kubenswrapper[7484]: I0312 21:00:00.258071 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" event={"ID":"a3828a1d-8180-4c7b-b423-4488f7fc0b76","Type":"ContainerDied","Data":"41145e0fa78e157774eb7d7a70c1dca5f300d506a37a6e9227272112a6ab2153"}
Mar 12 21:00:00.259284 master-0 kubenswrapper[7484]: I0312 21:00:00.258113 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" event={"ID":"a3828a1d-8180-4c7b-b423-4488f7fc0b76","Type":"ContainerStarted","Data":"1acfa9d2750b23b6fbd73dc65a33ac93a90684811b79c1a559d68754a4e63f2b"}
Mar 12 21:00:00.411607 master-0 kubenswrapper[7484]: I0312 21:00:00.411489 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57"
Mar 12 21:00:00.411607 master-0 kubenswrapper[7484]: I0312 21:00:00.411559 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57"
Mar 12 21:00:00.415871 master-0 kubenswrapper[7484]: I0312 21:00:00.415765 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:00.415871 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:00.415871 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:00.415871 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:00.416265 master-0 kubenswrapper[7484]: I0312 21:00:00.415897 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:01.414175 master-0 kubenswrapper[7484]: I0312 21:00:01.414106 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:01.414175 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:01.414175 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:01.414175 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:01.415251 master-0 kubenswrapper[7484]: I0312 21:00:01.414188 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:02.414999 master-0 kubenswrapper[7484]: I0312 21:00:02.414907 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:02.414999 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:02.414999 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:02.414999 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:02.414999 master-0 kubenswrapper[7484]: I0312 21:00:02.414989 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:03.415466 master-0 kubenswrapper[7484]: I0312 21:00:03.415390 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:03.415466 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:03.415466 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:03.415466 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:03.416458 master-0 kubenswrapper[7484]: I0312 21:00:03.415485 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:04.415317 master-0 kubenswrapper[7484]: I0312 21:00:04.415223 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:04.415317 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:04.415317 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:04.415317 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:04.415317 master-0 kubenswrapper[7484]: I0312 21:00:04.415314 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:05.414482 master-0 kubenswrapper[7484]: I0312 21:00:05.414003 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:05.414482 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:05.414482 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:05.414482 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:05.414482 master-0 kubenswrapper[7484]: I0312 21:00:05.414080 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:06.414922 master-0 kubenswrapper[7484]: I0312 21:00:06.414734 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:06.414922 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:06.414922 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:06.414922 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:06.414922 master-0 kubenswrapper[7484]: I0312 21:00:06.414921 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:07.414481 master-0 kubenswrapper[7484]: I0312 21:00:07.414424 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:07.414481 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:07.414481 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:07.414481 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:07.414840 master-0 kubenswrapper[7484]: I0312 21:00:07.414506 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:08.415696 master-0 kubenswrapper[7484]: I0312 21:00:08.415630 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:08.415696 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:08.415696 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:08.415696 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:08.416801 master-0 kubenswrapper[7484]: I0312 21:00:08.415721 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:09.415612 master-0 kubenswrapper[7484]: I0312 21:00:09.415524 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:09.415612 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:09.415612 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:09.415612 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:09.416905 master-0 kubenswrapper[7484]: I0312 21:00:09.415634 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:10.414609 master-0 kubenswrapper[7484]: I0312 21:00:10.414498 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:10.414609 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:10.414609 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:10.414609 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:10.415361 master-0 kubenswrapper[7484]: I0312 21:00:10.414624 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:11.415559 master-0 kubenswrapper[7484]: I0312 21:00:11.415413 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:11.415559 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:11.415559 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:11.415559 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:11.415559 master-0 kubenswrapper[7484]: I0312 21:00:11.415510 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:12.415926 master-0 kubenswrapper[7484]: I0312 21:00:12.415852 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:12.415926 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:12.415926 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:12.415926 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:12.416914 master-0 kubenswrapper[7484]: I0312 21:00:12.415966 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:13.415010 master-0 kubenswrapper[7484]: I0312 21:00:13.414931 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:13.415010 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:13.415010 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:13.415010 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:13.415010 master-0 kubenswrapper[7484]: I0312 21:00:13.415007 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:14.415153 master-0 kubenswrapper[7484]: I0312 21:00:14.415045 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:14.415153 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:14.415153 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:14.415153 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:14.415153 master-0 kubenswrapper[7484]: I0312 21:00:14.415145 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:15.415946 master-0 kubenswrapper[7484]: I0312 21:00:15.415863 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:00:15.415946 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:00:15.415946 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:00:15.415946 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:00:15.417122 master-0 kubenswrapper[7484]: I0312 21:00:15.415961 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:00:15.713389 master-0 kubenswrapper[7484]: I0312 21:00:15.713239 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-67vs7"] Mar 12 21:00:15.728364 master-0 kubenswrapper[7484]: I0312 21:00:15.728310 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-67vs7" Mar 12 21:00:15.730706 master-0 kubenswrapper[7484]: I0312 21:00:15.730656 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-zfxcx" Mar 12 21:00:15.730928 master-0 kubenswrapper[7484]: I0312 21:00:15.730901 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 12 21:00:15.731166 master-0 kubenswrapper[7484]: I0312 21:00:15.731003 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-67vs7"] Mar 12 21:00:15.731407 master-0 kubenswrapper[7484]: I0312 21:00:15.731317 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 12 21:00:15.732312 master-0 kubenswrapper[7484]: I0312 21:00:15.732283 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 12 
21:00:15.818510 master-0 kubenswrapper[7484]: I0312 21:00:15.818421 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a539e1c7-3799-4d43-8f2f-d5e5c0ffd918-cert\") pod \"ingress-canary-67vs7\" (UID: \"a539e1c7-3799-4d43-8f2f-d5e5c0ffd918\") " pod="openshift-ingress-canary/ingress-canary-67vs7" Mar 12 21:00:15.818510 master-0 kubenswrapper[7484]: I0312 21:00:15.818519 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xth7s\" (UniqueName: \"kubernetes.io/projected/a539e1c7-3799-4d43-8f2f-d5e5c0ffd918-kube-api-access-xth7s\") pod \"ingress-canary-67vs7\" (UID: \"a539e1c7-3799-4d43-8f2f-d5e5c0ffd918\") " pod="openshift-ingress-canary/ingress-canary-67vs7" Mar 12 21:00:15.920344 master-0 kubenswrapper[7484]: I0312 21:00:15.920252 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a539e1c7-3799-4d43-8f2f-d5e5c0ffd918-cert\") pod \"ingress-canary-67vs7\" (UID: \"a539e1c7-3799-4d43-8f2f-d5e5c0ffd918\") " pod="openshift-ingress-canary/ingress-canary-67vs7" Mar 12 21:00:15.920641 master-0 kubenswrapper[7484]: I0312 21:00:15.920412 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xth7s\" (UniqueName: \"kubernetes.io/projected/a539e1c7-3799-4d43-8f2f-d5e5c0ffd918-kube-api-access-xth7s\") pod \"ingress-canary-67vs7\" (UID: \"a539e1c7-3799-4d43-8f2f-d5e5c0ffd918\") " pod="openshift-ingress-canary/ingress-canary-67vs7" Mar 12 21:00:15.927974 master-0 kubenswrapper[7484]: I0312 21:00:15.926394 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a539e1c7-3799-4d43-8f2f-d5e5c0ffd918-cert\") pod \"ingress-canary-67vs7\" (UID: \"a539e1c7-3799-4d43-8f2f-d5e5c0ffd918\") " pod="openshift-ingress-canary/ingress-canary-67vs7" Mar 12 
21:00:15.959233 master-0 kubenswrapper[7484]: I0312 21:00:15.959165 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xth7s\" (UniqueName: \"kubernetes.io/projected/a539e1c7-3799-4d43-8f2f-d5e5c0ffd918-kube-api-access-xth7s\") pod \"ingress-canary-67vs7\" (UID: \"a539e1c7-3799-4d43-8f2f-d5e5c0ffd918\") " pod="openshift-ingress-canary/ingress-canary-67vs7" Mar 12 21:00:16.069630 master-0 kubenswrapper[7484]: I0312 21:00:16.069539 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-67vs7" Mar 12 21:00:16.405918 master-0 kubenswrapper[7484]: I0312 21:00:16.405715 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-qpf68_2b71f537-1cc2-4645-8e50-23941635457c/ingress-operator/2.log" Mar 12 21:00:16.406799 master-0 kubenswrapper[7484]: I0312 21:00:16.406679 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-qpf68_2b71f537-1cc2-4645-8e50-23941635457c/ingress-operator/1.log" Mar 12 21:00:16.407588 master-0 kubenswrapper[7484]: I0312 21:00:16.407525 7484 generic.go:334] "Generic (PLEG): container finished" podID="2b71f537-1cc2-4645-8e50-23941635457c" containerID="2d9fbcbbc403da2c9b3c1deb75c0442531b4adcea162653fcf9df2ae550aae8d" exitCode=1 Mar 12 21:00:16.407710 master-0 kubenswrapper[7484]: I0312 21:00:16.407602 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" event={"ID":"2b71f537-1cc2-4645-8e50-23941635457c","Type":"ContainerDied","Data":"2d9fbcbbc403da2c9b3c1deb75c0442531b4adcea162653fcf9df2ae550aae8d"} Mar 12 21:00:16.407710 master-0 kubenswrapper[7484]: I0312 21:00:16.407684 7484 scope.go:117] "RemoveContainer" containerID="72247b0dd06b6af33787ec8f35afadef48c9b0d4221e98fe5435e01a0186d2bf" Mar 12 21:00:16.408536 master-0 kubenswrapper[7484]: I0312 
21:00:16.408486 7484 scope.go:117] "RemoveContainer" containerID="2d9fbcbbc403da2c9b3c1deb75c0442531b4adcea162653fcf9df2ae550aae8d" Mar 12 21:00:16.409749 master-0 kubenswrapper[7484]: E0312 21:00:16.409702 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-qpf68_openshift-ingress-operator(2b71f537-1cc2-4645-8e50-23941635457c)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" podUID="2b71f537-1cc2-4645-8e50-23941635457c" Mar 12 21:00:16.413910 master-0 kubenswrapper[7484]: I0312 21:00:16.413740 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:00:16.413910 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:00:16.413910 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:00:16.413910 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:00:16.416624 master-0 kubenswrapper[7484]: I0312 21:00:16.413939 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:00:16.584804 master-0 kubenswrapper[7484]: I0312 21:00:16.584728 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-67vs7"] Mar 12 21:00:16.590533 master-0 kubenswrapper[7484]: W0312 21:00:16.590474 7484 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda539e1c7_3799_4d43_8f2f_d5e5c0ffd918.slice/crio-dceda9f22432bfb30ffe8ed6d05ecae6347a12a0c13f74fa12350cf55152eae6 WatchSource:0}: Error finding container dceda9f22432bfb30ffe8ed6d05ecae6347a12a0c13f74fa12350cf55152eae6: Status 404 returned error can't find the container with id dceda9f22432bfb30ffe8ed6d05ecae6347a12a0c13f74fa12350cf55152eae6 Mar 12 21:00:17.414967 master-0 kubenswrapper[7484]: I0312 21:00:17.414884 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:00:17.414967 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:00:17.414967 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:00:17.414967 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:00:17.415298 master-0 kubenswrapper[7484]: I0312 21:00:17.414966 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:00:17.418692 master-0 kubenswrapper[7484]: I0312 21:00:17.418649 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-qpf68_2b71f537-1cc2-4645-8e50-23941635457c/ingress-operator/2.log" Mar 12 21:00:17.421883 master-0 kubenswrapper[7484]: I0312 21:00:17.421800 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-67vs7" event={"ID":"a539e1c7-3799-4d43-8f2f-d5e5c0ffd918","Type":"ContainerStarted","Data":"ac38845fc712f5b63ffcdd5782ee5b0c9000ccfdb4721e1ed162e432e5dc59d8"} Mar 12 21:00:17.421961 master-0 kubenswrapper[7484]: I0312 21:00:17.421904 7484 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-67vs7" event={"ID":"a539e1c7-3799-4d43-8f2f-d5e5c0ffd918","Type":"ContainerStarted","Data":"dceda9f22432bfb30ffe8ed6d05ecae6347a12a0c13f74fa12350cf55152eae6"} Mar 12 21:00:17.445278 master-0 kubenswrapper[7484]: I0312 21:00:17.445144 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-67vs7" podStartSLOduration=2.445114849 podStartE2EDuration="2.445114849s" podCreationTimestamp="2026-03-12 21:00:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:00:17.440996794 +0000 UTC m=+629.926265626" watchObservedRunningTime="2026-03-12 21:00:17.445114849 +0000 UTC m=+629.930383691" Mar 12 21:00:18.414712 master-0 kubenswrapper[7484]: I0312 21:00:18.414621 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:00:18.414712 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:00:18.414712 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:00:18.414712 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:00:18.415087 master-0 kubenswrapper[7484]: I0312 21:00:18.414737 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:00:19.414296 master-0 kubenswrapper[7484]: I0312 21:00:19.414206 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 12 21:00:19.414296 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:00:19.414296 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:00:19.414296 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:00:19.415461 master-0 kubenswrapper[7484]: I0312 21:00:19.414317 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:00:20.416385 master-0 kubenswrapper[7484]: I0312 21:00:20.416264 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:00:20.416385 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:00:20.416385 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:00:20.416385 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:00:20.417373 master-0 kubenswrapper[7484]: I0312 21:00:20.416396 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:00:21.414131 master-0 kubenswrapper[7484]: I0312 21:00:21.414029 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:00:21.414131 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:00:21.414131 master-0 kubenswrapper[7484]: [+]process-running ok 
Mar 12 21:00:21.414131 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:00:21.414891 master-0 kubenswrapper[7484]: I0312 21:00:21.414142 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:00:22.414945 master-0 kubenswrapper[7484]: I0312 21:00:22.414860 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:00:22.414945 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:00:22.414945 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:00:22.414945 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:00:22.415905 master-0 kubenswrapper[7484]: I0312 21:00:22.414957 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:00:23.416164 master-0 kubenswrapper[7484]: I0312 21:00:23.414387 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:00:23.416164 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:00:23.416164 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:00:23.416164 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:00:23.416164 master-0 kubenswrapper[7484]: I0312 21:00:23.414468 7484 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:00:24.414427 master-0 kubenswrapper[7484]: I0312 21:00:24.414324 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:00:24.414427 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:00:24.414427 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:00:24.414427 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:00:24.414427 master-0 kubenswrapper[7484]: I0312 21:00:24.414403 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:00:25.414551 master-0 kubenswrapper[7484]: I0312 21:00:25.414466 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:00:25.414551 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:00:25.414551 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:00:25.414551 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:00:25.415579 master-0 kubenswrapper[7484]: I0312 21:00:25.414574 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:00:26.416319 
master-0 kubenswrapper[7484]: I0312 21:00:26.415334 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:00:26.416319 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:00:26.416319 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:00:26.416319 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:00:26.416319 master-0 kubenswrapper[7484]: I0312 21:00:26.415438 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:00:26.734254 master-0 kubenswrapper[7484]: I0312 21:00:26.734059 7484 scope.go:117] "RemoveContainer" containerID="2d9fbcbbc403da2c9b3c1deb75c0442531b4adcea162653fcf9df2ae550aae8d" Mar 12 21:00:26.734529 master-0 kubenswrapper[7484]: E0312 21:00:26.734475 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-qpf68_openshift-ingress-operator(2b71f537-1cc2-4645-8e50-23941635457c)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" podUID="2b71f537-1cc2-4645-8e50-23941635457c" Mar 12 21:00:27.414402 master-0 kubenswrapper[7484]: I0312 21:00:27.414290 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:00:27.414402 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:00:27.414402 master-0 
kubenswrapper[7484]: [+]process-running ok Mar 12 21:00:27.414402 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:00:27.414991 master-0 kubenswrapper[7484]: I0312 21:00:27.414405 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:00:28.414622 master-0 kubenswrapper[7484]: I0312 21:00:28.414543 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:00:28.414622 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:00:28.414622 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:00:28.414622 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:00:28.415260 master-0 kubenswrapper[7484]: I0312 21:00:28.414639 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:00:29.415329 master-0 kubenswrapper[7484]: I0312 21:00:29.415230 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:00:29.415329 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:00:29.415329 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:00:29.415329 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:00:29.415329 master-0 kubenswrapper[7484]: I0312 21:00:29.415328 7484 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:00:30.417705 master-0 kubenswrapper[7484]: I0312 21:00:30.417606 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:00:30.417705 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:00:30.417705 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:00:30.417705 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:00:30.418711 master-0 kubenswrapper[7484]: I0312 21:00:30.417714 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:00:31.415903 master-0 kubenswrapper[7484]: I0312 21:00:31.415784 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:00:31.415903 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:00:31.415903 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:00:31.415903 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:00:31.416394 master-0 kubenswrapper[7484]: I0312 21:00:31.415933 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 12 21:00:32.417965 master-0 kubenswrapper[7484]: I0312 21:00:32.417892 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:00:32.417965 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:00:32.417965 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:00:32.417965 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:00:32.419157 master-0 kubenswrapper[7484]: I0312 21:00:32.417993 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:00:33.419865 master-0 kubenswrapper[7484]: I0312 21:00:33.419699 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:00:33.419865 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:00:33.419865 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:00:33.419865 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:00:33.419865 master-0 kubenswrapper[7484]: I0312 21:00:33.419853 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:00:34.417085 master-0 kubenswrapper[7484]: I0312 21:00:34.417001 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:00:34.417085 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:00:34.417085 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:00:34.417085 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:00:34.417614 master-0 kubenswrapper[7484]: I0312 21:00:34.417092 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:00:35.413848 master-0 kubenswrapper[7484]: I0312 21:00:35.413762 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:00:35.413848 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:00:35.413848 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:00:35.413848 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:00:35.414592 master-0 kubenswrapper[7484]: I0312 21:00:35.413850 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:00:36.415362 master-0 kubenswrapper[7484]: I0312 21:00:36.415269 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:00:36.415362 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 
21:00:36.415362 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:36.415362 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:36.416636 master-0 kubenswrapper[7484]: I0312 21:00:36.415375 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:37.415188 master-0 kubenswrapper[7484]: I0312 21:00:37.414552 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:37.415188 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:37.415188 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:37.415188 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:37.415188 master-0 kubenswrapper[7484]: I0312 21:00:37.414656 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:37.737890 master-0 kubenswrapper[7484]: I0312 21:00:37.737702 7484 scope.go:117] "RemoveContainer" containerID="2d9fbcbbc403da2c9b3c1deb75c0442531b4adcea162653fcf9df2ae550aae8d"
Mar 12 21:00:38.415449 master-0 kubenswrapper[7484]: I0312 21:00:38.415380 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:38.415449 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:38.415449 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:38.415449 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:38.416342 master-0 kubenswrapper[7484]: I0312 21:00:38.415477 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:38.592546 master-0 kubenswrapper[7484]: I0312 21:00:38.592492 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-qpf68_2b71f537-1cc2-4645-8e50-23941635457c/ingress-operator/2.log"
Mar 12 21:00:38.592863 master-0 kubenswrapper[7484]: I0312 21:00:38.592800 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" event={"ID":"2b71f537-1cc2-4645-8e50-23941635457c","Type":"ContainerStarted","Data":"7eccf2e11fa509546de8eac1a0922463527e45037d75300978eef8469f91ea9d"}
Mar 12 21:00:39.415488 master-0 kubenswrapper[7484]: I0312 21:00:39.415391 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:39.415488 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:39.415488 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:39.415488 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:39.416704 master-0 kubenswrapper[7484]: I0312 21:00:39.415520 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:40.415723 master-0 kubenswrapper[7484]: I0312 21:00:40.415594 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:40.415723 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:40.415723 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:40.415723 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:40.417071 master-0 kubenswrapper[7484]: I0312 21:00:40.415781 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:41.414304 master-0 kubenswrapper[7484]: I0312 21:00:41.414202 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:41.414304 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:41.414304 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:41.414304 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:41.414304 master-0 kubenswrapper[7484]: I0312 21:00:41.414283 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:42.415528 master-0 kubenswrapper[7484]: I0312 21:00:42.415421 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:42.415528 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:42.415528 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:42.415528 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:42.415528 master-0 kubenswrapper[7484]: I0312 21:00:42.415534 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:43.414441 master-0 kubenswrapper[7484]: I0312 21:00:43.414344 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:43.414441 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:43.414441 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:43.414441 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:43.415329 master-0 kubenswrapper[7484]: I0312 21:00:43.414443 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:44.414325 master-0 kubenswrapper[7484]: I0312 21:00:44.414251 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:44.414325 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:44.414325 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:44.414325 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:44.415122 master-0 kubenswrapper[7484]: I0312 21:00:44.414350 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:45.414763 master-0 kubenswrapper[7484]: I0312 21:00:45.414700 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:45.414763 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:45.414763 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:45.414763 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:45.415504 master-0 kubenswrapper[7484]: I0312 21:00:45.414783 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:46.415075 master-0 kubenswrapper[7484]: I0312 21:00:46.414975 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:46.415075 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:46.415075 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:46.415075 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:46.416245 master-0 kubenswrapper[7484]: I0312 21:00:46.415091 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:47.413932 master-0 kubenswrapper[7484]: I0312 21:00:47.413829 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:47.413932 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:47.413932 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:47.413932 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:47.414236 master-0 kubenswrapper[7484]: I0312 21:00:47.413957 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:48.414002 master-0 kubenswrapper[7484]: I0312 21:00:48.413937 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:48.414002 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:48.414002 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:48.414002 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:48.414444 master-0 kubenswrapper[7484]: I0312 21:00:48.414027 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:49.415241 master-0 kubenswrapper[7484]: I0312 21:00:49.415155 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:49.415241 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:49.415241 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:49.415241 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:49.416414 master-0 kubenswrapper[7484]: I0312 21:00:49.415293 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:50.414637 master-0 kubenswrapper[7484]: I0312 21:00:50.414531 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:50.414637 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:50.414637 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:50.414637 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:50.414637 master-0 kubenswrapper[7484]: I0312 21:00:50.414629 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:51.417271 master-0 kubenswrapper[7484]: I0312 21:00:51.417197 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:51.417271 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:51.417271 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:51.417271 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:51.418321 master-0 kubenswrapper[7484]: I0312 21:00:51.417299 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:52.415567 master-0 kubenswrapper[7484]: I0312 21:00:52.415489 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:52.415567 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:52.415567 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:52.415567 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:52.416138 master-0 kubenswrapper[7484]: I0312 21:00:52.415584 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:53.414512 master-0 kubenswrapper[7484]: I0312 21:00:53.414411 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:53.414512 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:53.414512 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:53.414512 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:53.415679 master-0 kubenswrapper[7484]: I0312 21:00:53.414559 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:53.774680 master-0 kubenswrapper[7484]: I0312 21:00:53.774285 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-kbdkh"]
Mar 12 21:00:53.776306 master-0 kubenswrapper[7484]: I0312 21:00:53.776282 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-kbdkh"
Mar 12 21:00:53.782056 master-0 kubenswrapper[7484]: I0312 21:00:53.780931 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-jfnzs"
Mar 12 21:00:53.782056 master-0 kubenswrapper[7484]: I0312 21:00:53.781193 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist"
Mar 12 21:00:53.933528 master-0 kubenswrapper[7484]: I0312 21:00:53.933474 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cfba7834-c034-42c6-a0c2-cfba4a1b1baa-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-kbdkh\" (UID: \"cfba7834-c034-42c6-a0c2-cfba4a1b1baa\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kbdkh"
Mar 12 21:00:53.933724 master-0 kubenswrapper[7484]: I0312 21:00:53.933572 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cfba7834-c034-42c6-a0c2-cfba4a1b1baa-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-kbdkh\" (UID: \"cfba7834-c034-42c6-a0c2-cfba4a1b1baa\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kbdkh"
Mar 12 21:00:53.933724 master-0 kubenswrapper[7484]: I0312 21:00:53.933646 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/cfba7834-c034-42c6-a0c2-cfba4a1b1baa-ready\") pod \"cni-sysctl-allowlist-ds-kbdkh\" (UID: \"cfba7834-c034-42c6-a0c2-cfba4a1b1baa\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kbdkh"
Mar 12 21:00:53.933791 master-0 kubenswrapper[7484]: I0312 21:00:53.933735 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxkfb\" (UniqueName: \"kubernetes.io/projected/cfba7834-c034-42c6-a0c2-cfba4a1b1baa-kube-api-access-xxkfb\") pod \"cni-sysctl-allowlist-ds-kbdkh\" (UID: \"cfba7834-c034-42c6-a0c2-cfba4a1b1baa\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kbdkh"
Mar 12 21:00:54.035078 master-0 kubenswrapper[7484]: I0312 21:00:54.034977 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxkfb\" (UniqueName: \"kubernetes.io/projected/cfba7834-c034-42c6-a0c2-cfba4a1b1baa-kube-api-access-xxkfb\") pod \"cni-sysctl-allowlist-ds-kbdkh\" (UID: \"cfba7834-c034-42c6-a0c2-cfba4a1b1baa\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kbdkh"
Mar 12 21:00:54.035281 master-0 kubenswrapper[7484]: I0312 21:00:54.035266 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cfba7834-c034-42c6-a0c2-cfba4a1b1baa-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-kbdkh\" (UID: \"cfba7834-c034-42c6-a0c2-cfba4a1b1baa\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kbdkh"
Mar 12 21:00:54.035386 master-0 kubenswrapper[7484]: I0312 21:00:54.035374 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cfba7834-c034-42c6-a0c2-cfba4a1b1baa-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-kbdkh\" (UID: \"cfba7834-c034-42c6-a0c2-cfba4a1b1baa\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kbdkh"
Mar 12 21:00:54.035483 master-0 kubenswrapper[7484]: I0312 21:00:54.035471 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/cfba7834-c034-42c6-a0c2-cfba4a1b1baa-ready\") pod \"cni-sysctl-allowlist-ds-kbdkh\" (UID: \"cfba7834-c034-42c6-a0c2-cfba4a1b1baa\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kbdkh"
Mar 12 21:00:54.035650 master-0 kubenswrapper[7484]: I0312 21:00:54.035609 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cfba7834-c034-42c6-a0c2-cfba4a1b1baa-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-kbdkh\" (UID: \"cfba7834-c034-42c6-a0c2-cfba4a1b1baa\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kbdkh"
Mar 12 21:00:54.035996 master-0 kubenswrapper[7484]: I0312 21:00:54.035980 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/cfba7834-c034-42c6-a0c2-cfba4a1b1baa-ready\") pod \"cni-sysctl-allowlist-ds-kbdkh\" (UID: \"cfba7834-c034-42c6-a0c2-cfba4a1b1baa\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kbdkh"
Mar 12 21:00:54.036479 master-0 kubenswrapper[7484]: I0312 21:00:54.036443 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cfba7834-c034-42c6-a0c2-cfba4a1b1baa-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-kbdkh\" (UID: \"cfba7834-c034-42c6-a0c2-cfba4a1b1baa\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kbdkh"
Mar 12 21:00:54.056112 master-0 kubenswrapper[7484]: I0312 21:00:54.056077 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxkfb\" (UniqueName: \"kubernetes.io/projected/cfba7834-c034-42c6-a0c2-cfba4a1b1baa-kube-api-access-xxkfb\") pod \"cni-sysctl-allowlist-ds-kbdkh\" (UID: \"cfba7834-c034-42c6-a0c2-cfba4a1b1baa\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kbdkh"
Mar 12 21:00:54.106843 master-0 kubenswrapper[7484]: I0312 21:00:54.106663 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-kbdkh"
Mar 12 21:00:54.122958 master-0 kubenswrapper[7484]: W0312 21:00:54.122892 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfba7834_c034_42c6_a0c2_cfba4a1b1baa.slice/crio-8eba3a05c5df91e4a5afa89f2996ec27fde1995b86b2affbd16eb620fb03627f WatchSource:0}: Error finding container 8eba3a05c5df91e4a5afa89f2996ec27fde1995b86b2affbd16eb620fb03627f: Status 404 returned error can't find the container with id 8eba3a05c5df91e4a5afa89f2996ec27fde1995b86b2affbd16eb620fb03627f
Mar 12 21:00:54.416453 master-0 kubenswrapper[7484]: I0312 21:00:54.416393 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:54.416453 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:54.416453 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:54.416453 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:54.419051 master-0 kubenswrapper[7484]: I0312 21:00:54.416460 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:54.725146 master-0 kubenswrapper[7484]: I0312 21:00:54.725067 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-kbdkh" event={"ID":"cfba7834-c034-42c6-a0c2-cfba4a1b1baa","Type":"ContainerStarted","Data":"7621f6c0c70b2026fefb33c1c14ed2808f2599f8d2724d54a6033b3fb8757b2b"}
Mar 12 21:00:54.725358 master-0 kubenswrapper[7484]: I0312 21:00:54.725166 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-kbdkh" event={"ID":"cfba7834-c034-42c6-a0c2-cfba4a1b1baa","Type":"ContainerStarted","Data":"8eba3a05c5df91e4a5afa89f2996ec27fde1995b86b2affbd16eb620fb03627f"}
Mar 12 21:00:54.725532 master-0 kubenswrapper[7484]: I0312 21:00:54.725463 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-kbdkh"
Mar 12 21:00:55.414540 master-0 kubenswrapper[7484]: I0312 21:00:55.414466 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:55.414540 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:55.414540 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:55.414540 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:55.415249 master-0 kubenswrapper[7484]: I0312 21:00:55.414554 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:55.774367 master-0 kubenswrapper[7484]: I0312 21:00:55.774203 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-kbdkh"
Mar 12 21:00:55.803391 master-0 kubenswrapper[7484]: I0312 21:00:55.803239 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-kbdkh" podStartSLOduration=2.803207559 podStartE2EDuration="2.803207559s" podCreationTimestamp="2026-03-12 21:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:00:54.746031426 +0000 UTC m=+667.231300268" watchObservedRunningTime="2026-03-12 21:00:55.803207559 +0000 UTC m=+668.288476431"
Mar 12 21:00:56.414793 master-0 kubenswrapper[7484]: I0312 21:00:56.414670 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:56.414793 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:56.414793 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:56.414793 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:56.414793 master-0 kubenswrapper[7484]: I0312 21:00:56.414766 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:56.768630 master-0 kubenswrapper[7484]: I0312 21:00:56.767851 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-kbdkh"]
Mar 12 21:00:57.415239 master-0 kubenswrapper[7484]: I0312 21:00:57.415118 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:57.415239 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:57.415239 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:57.415239 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:57.416372 master-0 kubenswrapper[7484]: I0312 21:00:57.415255 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:57.754264 master-0 kubenswrapper[7484]: I0312 21:00:57.753908 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-kbdkh" podUID="cfba7834-c034-42c6-a0c2-cfba4a1b1baa" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://7621f6c0c70b2026fefb33c1c14ed2808f2599f8d2724d54a6033b3fb8757b2b" gracePeriod=30
Mar 12 21:00:58.413938 master-0 kubenswrapper[7484]: I0312 21:00:58.413730 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:58.413938 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:58.413938 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:58.413938 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:58.414122 master-0 kubenswrapper[7484]: I0312 21:00:58.414006 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:59.414462 master-0 kubenswrapper[7484]: I0312 21:00:59.414354 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:00:59.414462 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:00:59.414462 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:00:59.414462 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:00:59.415613 master-0 kubenswrapper[7484]: I0312 21:00:59.414465 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:00:59.787791 master-0 kubenswrapper[7484]: I0312 21:00:59.787597 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"]
Mar 12 21:00:59.788871 master-0 kubenswrapper[7484]: I0312 21:00:59.788785 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 12 21:00:59.794532 master-0 kubenswrapper[7484]: I0312 21:00:59.794455 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"]
Mar 12 21:00:59.796599 master-0 kubenswrapper[7484]: I0312 21:00:59.796517 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-xq8cf"
Mar 12 21:00:59.797040 master-0 kubenswrapper[7484]: I0312 21:00:59.796971 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Mar 12 21:00:59.832572 master-0 kubenswrapper[7484]: I0312 21:00:59.832470 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c6afe7e-de9d-41d3-8e34-9523a46da697-kube-api-access\") pod \"installer-3-master-0\" (UID: \"0c6afe7e-de9d-41d3-8e34-9523a46da697\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 12 21:00:59.832572 master-0 kubenswrapper[7484]: I0312 21:00:59.832572 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0c6afe7e-de9d-41d3-8e34-9523a46da697-var-lock\") pod \"installer-3-master-0\" (UID: \"0c6afe7e-de9d-41d3-8e34-9523a46da697\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 12 21:00:59.833009 master-0 kubenswrapper[7484]: I0312 21:00:59.832706 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c6afe7e-de9d-41d3-8e34-9523a46da697-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"0c6afe7e-de9d-41d3-8e34-9523a46da697\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 12 21:00:59.933920 master-0 kubenswrapper[7484]: I0312 21:00:59.933830 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c6afe7e-de9d-41d3-8e34-9523a46da697-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"0c6afe7e-de9d-41d3-8e34-9523a46da697\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 12 21:00:59.934176 master-0 kubenswrapper[7484]: I0312 21:00:59.933957 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c6afe7e-de9d-41d3-8e34-9523a46da697-kube-api-access\") pod \"installer-3-master-0\" (UID: \"0c6afe7e-de9d-41d3-8e34-9523a46da697\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 12 21:00:59.934176 master-0 kubenswrapper[7484]: I0312 21:00:59.933957 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c6afe7e-de9d-41d3-8e34-9523a46da697-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"0c6afe7e-de9d-41d3-8e34-9523a46da697\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 12 21:00:59.934176 master-0 kubenswrapper[7484]: I0312 21:00:59.933983 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0c6afe7e-de9d-41d3-8e34-9523a46da697-var-lock\") pod \"installer-3-master-0\" (UID: \"0c6afe7e-de9d-41d3-8e34-9523a46da697\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 12 21:00:59.934176 master-0 kubenswrapper[7484]: I0312 21:00:59.934037 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0c6afe7e-de9d-41d3-8e34-9523a46da697-var-lock\") pod \"installer-3-master-0\" (UID: \"0c6afe7e-de9d-41d3-8e34-9523a46da697\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 12 21:00:59.953134 master-0 kubenswrapper[7484]: I0312 21:00:59.953057 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c6afe7e-de9d-41d3-8e34-9523a46da697-kube-api-access\") pod \"installer-3-master-0\" (UID: \"0c6afe7e-de9d-41d3-8e34-9523a46da697\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 12 21:01:00.123391 master-0 kubenswrapper[7484]: I0312 21:01:00.123285 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 12 21:01:00.179756 master-0 kubenswrapper[7484]: I0312 21:01:00.179611 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"]
Mar 12 21:01:00.197847 master-0 kubenswrapper[7484]: I0312 21:01:00.195447 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"]
Mar 12 21:01:00.197847 master-0 kubenswrapper[7484]: I0312 21:01:00.195740 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Mar 12 21:01:00.202601 master-0 kubenswrapper[7484]: I0312 21:01:00.201677 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Mar 12 21:01:00.202601 master-0 kubenswrapper[7484]: I0312 21:01:00.201887 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-dhrfh"
Mar 12 21:01:00.239501 master-0 kubenswrapper[7484]: I0312 21:01:00.239454 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5d919d0a-f152-43da-aec3-080812c0d2d6-var-lock\") pod \"installer-5-master-0\" (UID: \"5d919d0a-f152-43da-aec3-080812c0d2d6\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 12 21:01:00.239795 master-0 kubenswrapper[7484]: I0312 21:01:00.239765 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5d919d0a-f152-43da-aec3-080812c0d2d6-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"5d919d0a-f152-43da-aec3-080812c0d2d6\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 12 21:01:00.240054 master-0 kubenswrapper[7484]: I0312 21:01:00.240026 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d919d0a-f152-43da-aec3-080812c0d2d6-kube-api-access\") pod \"installer-5-master-0\" (UID: \"5d919d0a-f152-43da-aec3-080812c0d2d6\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 12 21:01:00.341136 master-0 kubenswrapper[7484]: I0312 21:01:00.341064 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5d919d0a-f152-43da-aec3-080812c0d2d6-var-lock\") pod \"installer-5-master-0\" (UID: \"5d919d0a-f152-43da-aec3-080812c0d2d6\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 12 21:01:00.341136 master-0 kubenswrapper[7484]: I0312 21:01:00.341142 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5d919d0a-f152-43da-aec3-080812c0d2d6-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"5d919d0a-f152-43da-aec3-080812c0d2d6\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 12 21:01:00.341543 master-0 kubenswrapper[7484]: I0312 21:01:00.341178 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d919d0a-f152-43da-aec3-080812c0d2d6-kube-api-access\") pod \"installer-5-master-0\" (UID: \"5d919d0a-f152-43da-aec3-080812c0d2d6\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 12 21:01:00.341543 master-0 kubenswrapper[7484]: I0312 21:01:00.341497 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5d919d0a-f152-43da-aec3-080812c0d2d6-var-lock\") pod \"installer-5-master-0\" (UID: \"5d919d0a-f152-43da-aec3-080812c0d2d6\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 12 21:01:00.341769 master-0 kubenswrapper[7484]: I0312 21:01:00.341584 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5d919d0a-f152-43da-aec3-080812c0d2d6-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"5d919d0a-f152-43da-aec3-080812c0d2d6\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 12 21:01:00.361878 master-0 kubenswrapper[7484]: I0312 21:01:00.360100 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d919d0a-f152-43da-aec3-080812c0d2d6-kube-api-access\") pod \"installer-5-master-0\" (UID: \"5d919d0a-f152-43da-aec3-080812c0d2d6\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 12 21:01:00.425857 master-0 kubenswrapper[7484]: I0312 21:01:00.425646 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:01:00.425857 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:01:00.425857 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:01:00.425857 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:01:00.425857 master-0 kubenswrapper[7484]: I0312 21:01:00.425748 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:01:00.547912 master-0 kubenswrapper[7484]: I0312 21:01:00.547785 7484 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 12 21:01:00.630216 master-0 kubenswrapper[7484]: I0312 21:01:00.630110 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 12 21:01:00.784388 master-0 kubenswrapper[7484]: I0312 21:01:00.784334 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"0c6afe7e-de9d-41d3-8e34-9523a46da697","Type":"ContainerStarted","Data":"28c9b7d298a5e9f87b7b79f9bc1b7d09be186a38e9c6487e815fa087b10965ba"} Mar 12 21:01:01.080284 master-0 kubenswrapper[7484]: I0312 21:01:01.080216 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Mar 12 21:01:01.090158 master-0 kubenswrapper[7484]: W0312 21:01:01.090098 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5d919d0a_f152_43da_aec3_080812c0d2d6.slice/crio-ae91d361ecd061c9426dd23452fb232725e7fad18fb34be8d38d0dd0d590d9fe WatchSource:0}: Error finding container ae91d361ecd061c9426dd23452fb232725e7fad18fb34be8d38d0dd0d590d9fe: Status 404 returned error can't find the container with id ae91d361ecd061c9426dd23452fb232725e7fad18fb34be8d38d0dd0d590d9fe Mar 12 21:01:01.415296 master-0 kubenswrapper[7484]: I0312 21:01:01.415205 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:01.415296 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:01.415296 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:01.415296 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:01.415296 master-0 kubenswrapper[7484]: I0312 21:01:01.415294 7484 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:01.791757 master-0 kubenswrapper[7484]: I0312 21:01:01.791705 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"5d919d0a-f152-43da-aec3-080812c0d2d6","Type":"ContainerStarted","Data":"607e25a8dd52c1bd5d656d7e56ad63215f5d6ac7b9578ad98c15a18a5607da53"} Mar 12 21:01:01.791757 master-0 kubenswrapper[7484]: I0312 21:01:01.791760 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"5d919d0a-f152-43da-aec3-080812c0d2d6","Type":"ContainerStarted","Data":"ae91d361ecd061c9426dd23452fb232725e7fad18fb34be8d38d0dd0d590d9fe"} Mar 12 21:01:01.794025 master-0 kubenswrapper[7484]: I0312 21:01:01.793961 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"0c6afe7e-de9d-41d3-8e34-9523a46da697","Type":"ContainerStarted","Data":"99189d1662670a8accfafb7d98b62dd2bd3324bd586c75f160c786893e14a45b"} Mar 12 21:01:01.812564 master-0 kubenswrapper[7484]: I0312 21:01:01.812452 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-5-master-0" podStartSLOduration=1.812433231 podStartE2EDuration="1.812433231s" podCreationTimestamp="2026-03-12 21:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:01:01.811291413 +0000 UTC m=+674.296560225" watchObservedRunningTime="2026-03-12 21:01:01.812433231 +0000 UTC m=+674.297702043" Mar 12 21:01:01.837427 master-0 kubenswrapper[7484]: I0312 21:01:01.837353 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-controller-manager/installer-3-master-0" podStartSLOduration=2.837332061 podStartE2EDuration="2.837332061s" podCreationTimestamp="2026-03-12 21:00:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:01:01.833447567 +0000 UTC m=+674.318716389" watchObservedRunningTime="2026-03-12 21:01:01.837332061 +0000 UTC m=+674.322600873" Mar 12 21:01:02.414135 master-0 kubenswrapper[7484]: I0312 21:01:02.414024 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:02.414135 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:02.414135 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:02.414135 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:02.415120 master-0 kubenswrapper[7484]: I0312 21:01:02.414210 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:02.945313 master-0 kubenswrapper[7484]: I0312 21:01:02.945242 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-7769569c45-tgbjx"] Mar 12 21:01:02.947090 master-0 kubenswrapper[7484]: I0312 21:01:02.947052 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-7769569c45-tgbjx" Mar 12 21:01:02.953185 master-0 kubenswrapper[7484]: I0312 21:01:02.953144 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-kj7kz" Mar 12 21:01:02.957176 master-0 kubenswrapper[7484]: I0312 21:01:02.957143 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-7769569c45-tgbjx"] Mar 12 21:01:03.000771 master-0 kubenswrapper[7484]: I0312 21:01:02.999655 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b8aa8296-ed9b-4b37-8ab4-791b1342140f-webhook-certs\") pod \"multus-admission-controller-7769569c45-tgbjx\" (UID: \"b8aa8296-ed9b-4b37-8ab4-791b1342140f\") " pod="openshift-multus/multus-admission-controller-7769569c45-tgbjx" Mar 12 21:01:03.000771 master-0 kubenswrapper[7484]: I0312 21:01:02.999797 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbcts\" (UniqueName: \"kubernetes.io/projected/b8aa8296-ed9b-4b37-8ab4-791b1342140f-kube-api-access-nbcts\") pod \"multus-admission-controller-7769569c45-tgbjx\" (UID: \"b8aa8296-ed9b-4b37-8ab4-791b1342140f\") " pod="openshift-multus/multus-admission-controller-7769569c45-tgbjx" Mar 12 21:01:03.101416 master-0 kubenswrapper[7484]: I0312 21:01:03.101359 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b8aa8296-ed9b-4b37-8ab4-791b1342140f-webhook-certs\") pod \"multus-admission-controller-7769569c45-tgbjx\" (UID: \"b8aa8296-ed9b-4b37-8ab4-791b1342140f\") " pod="openshift-multus/multus-admission-controller-7769569c45-tgbjx" Mar 12 21:01:03.101624 master-0 kubenswrapper[7484]: I0312 21:01:03.101438 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-nbcts\" (UniqueName: \"kubernetes.io/projected/b8aa8296-ed9b-4b37-8ab4-791b1342140f-kube-api-access-nbcts\") pod \"multus-admission-controller-7769569c45-tgbjx\" (UID: \"b8aa8296-ed9b-4b37-8ab4-791b1342140f\") " pod="openshift-multus/multus-admission-controller-7769569c45-tgbjx" Mar 12 21:01:03.104535 master-0 kubenswrapper[7484]: I0312 21:01:03.103994 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b8aa8296-ed9b-4b37-8ab4-791b1342140f-webhook-certs\") pod \"multus-admission-controller-7769569c45-tgbjx\" (UID: \"b8aa8296-ed9b-4b37-8ab4-791b1342140f\") " pod="openshift-multus/multus-admission-controller-7769569c45-tgbjx" Mar 12 21:01:03.118507 master-0 kubenswrapper[7484]: I0312 21:01:03.118475 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbcts\" (UniqueName: \"kubernetes.io/projected/b8aa8296-ed9b-4b37-8ab4-791b1342140f-kube-api-access-nbcts\") pod \"multus-admission-controller-7769569c45-tgbjx\" (UID: \"b8aa8296-ed9b-4b37-8ab4-791b1342140f\") " pod="openshift-multus/multus-admission-controller-7769569c45-tgbjx" Mar 12 21:01:03.288982 master-0 kubenswrapper[7484]: I0312 21:01:03.288845 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-7769569c45-tgbjx" Mar 12 21:01:03.415656 master-0 kubenswrapper[7484]: I0312 21:01:03.413639 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:03.415656 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:03.415656 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:03.415656 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:03.415656 master-0 kubenswrapper[7484]: I0312 21:01:03.413695 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:03.791617 master-0 kubenswrapper[7484]: I0312 21:01:03.791558 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-7769569c45-tgbjx"] Mar 12 21:01:03.806885 master-0 kubenswrapper[7484]: W0312 21:01:03.806843 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8aa8296_ed9b_4b37_8ab4_791b1342140f.slice/crio-4c950507e89f9d50ecc81fde55a0e288bca97183fc18e65a4bf636fb9e195662 WatchSource:0}: Error finding container 4c950507e89f9d50ecc81fde55a0e288bca97183fc18e65a4bf636fb9e195662: Status 404 returned error can't find the container with id 4c950507e89f9d50ecc81fde55a0e288bca97183fc18e65a4bf636fb9e195662 Mar 12 21:01:04.109338 master-0 kubenswrapper[7484]: E0312 21:01:04.109269 7484 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit 
code -1" containerID="7621f6c0c70b2026fefb33c1c14ed2808f2599f8d2724d54a6033b3fb8757b2b" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 21:01:04.113363 master-0 kubenswrapper[7484]: E0312 21:01:04.113290 7484 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7621f6c0c70b2026fefb33c1c14ed2808f2599f8d2724d54a6033b3fb8757b2b" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 21:01:04.115774 master-0 kubenswrapper[7484]: E0312 21:01:04.115738 7484 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7621f6c0c70b2026fefb33c1c14ed2808f2599f8d2724d54a6033b3fb8757b2b" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 21:01:04.115931 master-0 kubenswrapper[7484]: E0312 21:01:04.115905 7484 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-kbdkh" podUID="cfba7834-c034-42c6-a0c2-cfba4a1b1baa" containerName="kube-multus-additional-cni-plugins" Mar 12 21:01:04.413331 master-0 kubenswrapper[7484]: I0312 21:01:04.413279 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:04.413331 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:04.413331 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:04.413331 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:04.413656 master-0 kubenswrapper[7484]: I0312 21:01:04.413348 7484 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:04.817361 master-0 kubenswrapper[7484]: I0312 21:01:04.817286 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7769569c45-tgbjx" event={"ID":"b8aa8296-ed9b-4b37-8ab4-791b1342140f","Type":"ContainerStarted","Data":"0217824df4e2de4a6e66903135737bb67e2b0fdba4f510dd20fc536aefc8d881"} Mar 12 21:01:04.817361 master-0 kubenswrapper[7484]: I0312 21:01:04.817357 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7769569c45-tgbjx" event={"ID":"b8aa8296-ed9b-4b37-8ab4-791b1342140f","Type":"ContainerStarted","Data":"0801412eec909b7451c3ea16fc183a3c0aa018264741173074d4a6d25bbb8e1c"} Mar 12 21:01:04.817701 master-0 kubenswrapper[7484]: I0312 21:01:04.817377 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7769569c45-tgbjx" event={"ID":"b8aa8296-ed9b-4b37-8ab4-791b1342140f","Type":"ContainerStarted","Data":"4c950507e89f9d50ecc81fde55a0e288bca97183fc18e65a4bf636fb9e195662"} Mar 12 21:01:04.845187 master-0 kubenswrapper[7484]: I0312 21:01:04.845090 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-7769569c45-tgbjx" podStartSLOduration=2.845064876 podStartE2EDuration="2.845064876s" podCreationTimestamp="2026-03-12 21:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:01:04.843685283 +0000 UTC m=+677.328954115" watchObservedRunningTime="2026-03-12 21:01:04.845064876 +0000 UTC m=+677.330333718" Mar 12 21:01:04.888353 master-0 kubenswrapper[7484]: I0312 21:01:04.888274 7484 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-98j9w"] Mar 12 21:01:04.888865 master-0 kubenswrapper[7484]: I0312 21:01:04.888786 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-8d675b596-98j9w" podUID="f8f4400c-474c-480f-b46c-cf7c80555004" containerName="multus-admission-controller" containerID="cri-o://f354e2ce5026487f56a9c2480c5f171a3fa137d3fef2ad82947d875089621462" gracePeriod=30 Mar 12 21:01:04.901894 master-0 kubenswrapper[7484]: I0312 21:01:04.890429 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-8d675b596-98j9w" podUID="f8f4400c-474c-480f-b46c-cf7c80555004" containerName="kube-rbac-proxy" containerID="cri-o://5d43c250b5491225f8ee7e26898d34d724cb99521d528bed5880450148f60c8b" gracePeriod=30 Mar 12 21:01:05.414867 master-0 kubenswrapper[7484]: I0312 21:01:05.414775 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:05.414867 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:05.414867 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:05.414867 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:05.415427 master-0 kubenswrapper[7484]: I0312 21:01:05.414885 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:05.760600 master-0 kubenswrapper[7484]: I0312 21:01:05.760456 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 12 21:01:05.762143 master-0 
kubenswrapper[7484]: I0312 21:01:05.762088 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 12 21:01:05.764631 master-0 kubenswrapper[7484]: I0312 21:01:05.764573 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-bzr7w" Mar 12 21:01:05.765528 master-0 kubenswrapper[7484]: I0312 21:01:05.765489 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Mar 12 21:01:05.783719 master-0 kubenswrapper[7484]: I0312 21:01:05.783664 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 12 21:01:05.826721 master-0 kubenswrapper[7484]: I0312 21:01:05.826654 7484 generic.go:334] "Generic (PLEG): container finished" podID="f8f4400c-474c-480f-b46c-cf7c80555004" containerID="5d43c250b5491225f8ee7e26898d34d724cb99521d528bed5880450148f60c8b" exitCode=0 Mar 12 21:01:05.826946 master-0 kubenswrapper[7484]: I0312 21:01:05.826744 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-98j9w" event={"ID":"f8f4400c-474c-480f-b46c-cf7c80555004","Type":"ContainerDied","Data":"5d43c250b5491225f8ee7e26898d34d724cb99521d528bed5880450148f60c8b"} Mar 12 21:01:05.844756 master-0 kubenswrapper[7484]: I0312 21:01:05.843933 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/237e5a97-fb81-4609-8538-c55a8e2db411-kube-api-access\") pod \"installer-2-master-0\" (UID: \"237e5a97-fb81-4609-8538-c55a8e2db411\") " pod="openshift-etcd/installer-2-master-0" Mar 12 21:01:05.844756 master-0 kubenswrapper[7484]: I0312 21:01:05.844372 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/237e5a97-fb81-4609-8538-c55a8e2db411-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"237e5a97-fb81-4609-8538-c55a8e2db411\") " pod="openshift-etcd/installer-2-master-0" Mar 12 21:01:05.844756 master-0 kubenswrapper[7484]: I0312 21:01:05.844646 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/237e5a97-fb81-4609-8538-c55a8e2db411-var-lock\") pod \"installer-2-master-0\" (UID: \"237e5a97-fb81-4609-8538-c55a8e2db411\") " pod="openshift-etcd/installer-2-master-0" Mar 12 21:01:05.946073 master-0 kubenswrapper[7484]: I0312 21:01:05.946001 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/237e5a97-fb81-4609-8538-c55a8e2db411-kube-api-access\") pod \"installer-2-master-0\" (UID: \"237e5a97-fb81-4609-8538-c55a8e2db411\") " pod="openshift-etcd/installer-2-master-0" Mar 12 21:01:05.946262 master-0 kubenswrapper[7484]: I0312 21:01:05.946106 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/237e5a97-fb81-4609-8538-c55a8e2db411-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"237e5a97-fb81-4609-8538-c55a8e2db411\") " pod="openshift-etcd/installer-2-master-0" Mar 12 21:01:05.946262 master-0 kubenswrapper[7484]: I0312 21:01:05.946171 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/237e5a97-fb81-4609-8538-c55a8e2db411-var-lock\") pod \"installer-2-master-0\" (UID: \"237e5a97-fb81-4609-8538-c55a8e2db411\") " pod="openshift-etcd/installer-2-master-0" Mar 12 21:01:05.946327 master-0 kubenswrapper[7484]: I0312 21:01:05.946303 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/237e5a97-fb81-4609-8538-c55a8e2db411-var-lock\") 
pod \"installer-2-master-0\" (UID: \"237e5a97-fb81-4609-8538-c55a8e2db411\") " pod="openshift-etcd/installer-2-master-0" Mar 12 21:01:05.948704 master-0 kubenswrapper[7484]: I0312 21:01:05.948663 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/237e5a97-fb81-4609-8538-c55a8e2db411-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"237e5a97-fb81-4609-8538-c55a8e2db411\") " pod="openshift-etcd/installer-2-master-0" Mar 12 21:01:05.979480 master-0 kubenswrapper[7484]: I0312 21:01:05.979418 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/237e5a97-fb81-4609-8538-c55a8e2db411-kube-api-access\") pod \"installer-2-master-0\" (UID: \"237e5a97-fb81-4609-8538-c55a8e2db411\") " pod="openshift-etcd/installer-2-master-0" Mar 12 21:01:06.087113 master-0 kubenswrapper[7484]: I0312 21:01:06.087048 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 12 21:01:06.415595 master-0 kubenswrapper[7484]: I0312 21:01:06.415360 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:06.415595 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:06.415595 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:06.415595 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:06.416537 master-0 kubenswrapper[7484]: I0312 21:01:06.415586 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:06.560625 master-0 kubenswrapper[7484]: I0312 21:01:06.560555 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 12 21:01:06.574914 master-0 kubenswrapper[7484]: W0312 21:01:06.574833 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod237e5a97_fb81_4609_8538_c55a8e2db411.slice/crio-3eb5ded3b742edb3299ed1f6753980b1fd1f4f50b6f5c825c2828acef79cb23f WatchSource:0}: Error finding container 3eb5ded3b742edb3299ed1f6753980b1fd1f4f50b6f5c825c2828acef79cb23f: Status 404 returned error can't find the container with id 3eb5ded3b742edb3299ed1f6753980b1fd1f4f50b6f5c825c2828acef79cb23f Mar 12 21:01:06.835454 master-0 kubenswrapper[7484]: I0312 21:01:06.834764 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"237e5a97-fb81-4609-8538-c55a8e2db411","Type":"ContainerStarted","Data":"3eb5ded3b742edb3299ed1f6753980b1fd1f4f50b6f5c825c2828acef79cb23f"} Mar 12 21:01:07.414448 
master-0 kubenswrapper[7484]: I0312 21:01:07.414280 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:07.414448 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:07.414448 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:07.414448 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:07.414448 master-0 kubenswrapper[7484]: I0312 21:01:07.414373 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:07.846096 master-0 kubenswrapper[7484]: I0312 21:01:07.845958 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"237e5a97-fb81-4609-8538-c55a8e2db411","Type":"ContainerStarted","Data":"9635b8a1063656701a872bccc0f8a9cd07d562ac36399e3e09153a9c74ff44b7"} Mar 12 21:01:07.871460 master-0 kubenswrapper[7484]: I0312 21:01:07.871325 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-2-master-0" podStartSLOduration=2.871291427 podStartE2EDuration="2.871291427s" podCreationTimestamp="2026-03-12 21:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:01:07.869565715 +0000 UTC m=+680.354834597" watchObservedRunningTime="2026-03-12 21:01:07.871291427 +0000 UTC m=+680.356560299" Mar 12 21:01:08.415478 master-0 kubenswrapper[7484]: I0312 21:01:08.414713 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:08.415478 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:08.415478 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:08.415478 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:08.415478 master-0 kubenswrapper[7484]: I0312 21:01:08.414859 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:09.414921 master-0 kubenswrapper[7484]: I0312 21:01:09.414798 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:09.414921 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:09.414921 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:09.414921 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:09.416171 master-0 kubenswrapper[7484]: I0312 21:01:09.414928 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:10.414989 master-0 kubenswrapper[7484]: I0312 21:01:10.414901 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:10.414989 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:10.414989 
master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:10.414989 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:10.416286 master-0 kubenswrapper[7484]: I0312 21:01:10.415027 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:11.415047 master-0 kubenswrapper[7484]: I0312 21:01:11.414956 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:11.415047 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:11.415047 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:11.415047 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:11.416596 master-0 kubenswrapper[7484]: I0312 21:01:11.415053 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:12.413491 master-0 kubenswrapper[7484]: I0312 21:01:12.413431 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:12.413491 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:12.413491 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:12.413491 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:12.413866 master-0 kubenswrapper[7484]: I0312 21:01:12.413511 7484 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:13.415042 master-0 kubenswrapper[7484]: I0312 21:01:13.414979 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:13.415042 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:13.415042 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:13.415042 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:13.416051 master-0 kubenswrapper[7484]: I0312 21:01:13.415056 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:14.109928 master-0 kubenswrapper[7484]: E0312 21:01:14.109804 7484 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7621f6c0c70b2026fefb33c1c14ed2808f2599f8d2724d54a6033b3fb8757b2b" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 21:01:14.112627 master-0 kubenswrapper[7484]: E0312 21:01:14.112543 7484 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7621f6c0c70b2026fefb33c1c14ed2808f2599f8d2724d54a6033b3fb8757b2b" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 21:01:14.114935 master-0 kubenswrapper[7484]: E0312 
21:01:14.114791 7484 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7621f6c0c70b2026fefb33c1c14ed2808f2599f8d2724d54a6033b3fb8757b2b" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 21:01:14.115092 master-0 kubenswrapper[7484]: E0312 21:01:14.114939 7484 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-kbdkh" podUID="cfba7834-c034-42c6-a0c2-cfba4a1b1baa" containerName="kube-multus-additional-cni-plugins" Mar 12 21:01:14.414688 master-0 kubenswrapper[7484]: I0312 21:01:14.414513 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:14.414688 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:14.414688 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:14.414688 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:14.414688 master-0 kubenswrapper[7484]: I0312 21:01:14.414638 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:15.414383 master-0 kubenswrapper[7484]: I0312 21:01:15.414280 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:15.414383 
master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:15.414383 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:15.414383 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:15.414383 master-0 kubenswrapper[7484]: I0312 21:01:15.414357 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:16.416261 master-0 kubenswrapper[7484]: I0312 21:01:16.416147 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:16.416261 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:16.416261 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:16.416261 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:16.416261 master-0 kubenswrapper[7484]: I0312 21:01:16.416248 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:17.414312 master-0 kubenswrapper[7484]: I0312 21:01:17.414183 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:17.414312 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:17.414312 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:17.414312 master-0 kubenswrapper[7484]: healthz check failed 
Mar 12 21:01:17.415033 master-0 kubenswrapper[7484]: I0312 21:01:17.414356 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:18.415237 master-0 kubenswrapper[7484]: I0312 21:01:18.415167 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:18.415237 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:18.415237 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:18.415237 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:18.416410 master-0 kubenswrapper[7484]: I0312 21:01:18.416356 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:19.415615 master-0 kubenswrapper[7484]: I0312 21:01:19.415188 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:19.415615 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:19.415615 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:19.415615 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:19.415615 master-0 kubenswrapper[7484]: I0312 21:01:19.415312 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" 
podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:20.416230 master-0 kubenswrapper[7484]: I0312 21:01:20.416123 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:20.416230 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:20.416230 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:20.416230 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:20.417841 master-0 kubenswrapper[7484]: I0312 21:01:20.417384 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:21.418102 master-0 kubenswrapper[7484]: I0312 21:01:21.417995 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:21.418102 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:21.418102 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:21.418102 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:21.419215 master-0 kubenswrapper[7484]: I0312 21:01:21.418126 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:22.414652 master-0 kubenswrapper[7484]: I0312 21:01:22.414540 7484 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:22.414652 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:22.414652 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:22.414652 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:22.415297 master-0 kubenswrapper[7484]: I0312 21:01:22.414649 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:23.415214 master-0 kubenswrapper[7484]: I0312 21:01:23.415120 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:23.415214 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:23.415214 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:23.415214 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:23.416537 master-0 kubenswrapper[7484]: I0312 21:01:23.415230 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:24.110947 master-0 kubenswrapper[7484]: E0312 21:01:24.110796 7484 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="7621f6c0c70b2026fefb33c1c14ed2808f2599f8d2724d54a6033b3fb8757b2b" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 21:01:24.113153 master-0 kubenswrapper[7484]: E0312 21:01:24.113002 7484 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7621f6c0c70b2026fefb33c1c14ed2808f2599f8d2724d54a6033b3fb8757b2b" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 21:01:24.114763 master-0 kubenswrapper[7484]: E0312 21:01:24.114702 7484 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7621f6c0c70b2026fefb33c1c14ed2808f2599f8d2724d54a6033b3fb8757b2b" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 21:01:24.114938 master-0 kubenswrapper[7484]: E0312 21:01:24.114766 7484 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-kbdkh" podUID="cfba7834-c034-42c6-a0c2-cfba4a1b1baa" containerName="kube-multus-additional-cni-plugins" Mar 12 21:01:24.414747 master-0 kubenswrapper[7484]: I0312 21:01:24.414580 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:24.414747 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:24.414747 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:24.414747 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:24.414747 master-0 kubenswrapper[7484]: I0312 21:01:24.414671 7484 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:24.551836 master-0 kubenswrapper[7484]: I0312 21:01:24.546160 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86"] Mar 12 21:01:24.551836 master-0 kubenswrapper[7484]: I0312 21:01:24.546586 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" podUID="6d28f095-032b-47d4-b808-1502deeffee5" containerName="controller-manager" containerID="cri-o://59b4ecaa3eedf20f90ff4f437a227a7eff0e617269f5faf6807fb533207b0134" gracePeriod=30 Mar 12 21:01:24.583833 master-0 kubenswrapper[7484]: I0312 21:01:24.581162 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-657bd6d846-tffzs"] Mar 12 21:01:24.583833 master-0 kubenswrapper[7484]: I0312 21:01:24.581682 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-657bd6d846-tffzs" podUID="b6ab546f-a3fa-44dc-9c83-30a376880f14" containerName="route-controller-manager" containerID="cri-o://000152bdbaa6a39e3cd6f5ab2bc3ec2c13b858332e25d8ee0b163cf10cb5a429" gracePeriod=30 Mar 12 21:01:24.988350 master-0 kubenswrapper[7484]: I0312 21:01:24.988236 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" event={"ID":"6d28f095-032b-47d4-b808-1502deeffee5","Type":"ContainerDied","Data":"59b4ecaa3eedf20f90ff4f437a227a7eff0e617269f5faf6807fb533207b0134"} Mar 12 21:01:24.988350 master-0 kubenswrapper[7484]: I0312 21:01:24.988330 7484 scope.go:117] "RemoveContainer" 
containerID="90f6df2cd5378a3ebab865fb719c69e38e48496ca3cd635c80da9e8ec49ce434" Mar 12 21:01:24.988551 master-0 kubenswrapper[7484]: I0312 21:01:24.988216 7484 generic.go:334] "Generic (PLEG): container finished" podID="6d28f095-032b-47d4-b808-1502deeffee5" containerID="59b4ecaa3eedf20f90ff4f437a227a7eff0e617269f5faf6807fb533207b0134" exitCode=0 Mar 12 21:01:24.990526 master-0 kubenswrapper[7484]: I0312 21:01:24.990488 7484 generic.go:334] "Generic (PLEG): container finished" podID="b6ab546f-a3fa-44dc-9c83-30a376880f14" containerID="000152bdbaa6a39e3cd6f5ab2bc3ec2c13b858332e25d8ee0b163cf10cb5a429" exitCode=0 Mar 12 21:01:24.990655 master-0 kubenswrapper[7484]: I0312 21:01:24.990591 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-657bd6d846-tffzs" event={"ID":"b6ab546f-a3fa-44dc-9c83-30a376880f14","Type":"ContainerDied","Data":"000152bdbaa6a39e3cd6f5ab2bc3ec2c13b858332e25d8ee0b163cf10cb5a429"} Mar 12 21:01:25.026692 master-0 kubenswrapper[7484]: I0312 21:01:25.026656 7484 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-657bd6d846-tffzs" Mar 12 21:01:25.082259 master-0 kubenswrapper[7484]: I0312 21:01:25.082184 7484 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" Mar 12 21:01:25.176352 master-0 kubenswrapper[7484]: I0312 21:01:25.176272 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6ab546f-a3fa-44dc-9c83-30a376880f14-serving-cert\") pod \"b6ab546f-a3fa-44dc-9c83-30a376880f14\" (UID: \"b6ab546f-a3fa-44dc-9c83-30a376880f14\") " Mar 12 21:01:25.176551 master-0 kubenswrapper[7484]: I0312 21:01:25.176367 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwrjr\" (UniqueName: \"kubernetes.io/projected/b6ab546f-a3fa-44dc-9c83-30a376880f14-kube-api-access-gwrjr\") pod \"b6ab546f-a3fa-44dc-9c83-30a376880f14\" (UID: \"b6ab546f-a3fa-44dc-9c83-30a376880f14\") " Mar 12 21:01:25.176551 master-0 kubenswrapper[7484]: I0312 21:01:25.176501 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6ab546f-a3fa-44dc-9c83-30a376880f14-config\") pod \"b6ab546f-a3fa-44dc-9c83-30a376880f14\" (UID: \"b6ab546f-a3fa-44dc-9c83-30a376880f14\") " Mar 12 21:01:25.176551 master-0 kubenswrapper[7484]: I0312 21:01:25.176534 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfkv8\" (UniqueName: \"kubernetes.io/projected/6d28f095-032b-47d4-b808-1502deeffee5-kube-api-access-bfkv8\") pod \"6d28f095-032b-47d4-b808-1502deeffee5\" (UID: \"6d28f095-032b-47d4-b808-1502deeffee5\") " Mar 12 21:01:25.176680 master-0 kubenswrapper[7484]: I0312 21:01:25.176590 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6d28f095-032b-47d4-b808-1502deeffee5-client-ca\") pod \"6d28f095-032b-47d4-b808-1502deeffee5\" (UID: \"6d28f095-032b-47d4-b808-1502deeffee5\") " Mar 12 21:01:25.177137 master-0 kubenswrapper[7484]: I0312 21:01:25.177101 
7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6ab546f-a3fa-44dc-9c83-30a376880f14-client-ca\") pod \"b6ab546f-a3fa-44dc-9c83-30a376880f14\" (UID: \"b6ab546f-a3fa-44dc-9c83-30a376880f14\") " Mar 12 21:01:25.177199 master-0 kubenswrapper[7484]: I0312 21:01:25.177162 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d28f095-032b-47d4-b808-1502deeffee5-client-ca" (OuterVolumeSpecName: "client-ca") pod "6d28f095-032b-47d4-b808-1502deeffee5" (UID: "6d28f095-032b-47d4-b808-1502deeffee5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:01:25.177747 master-0 kubenswrapper[7484]: I0312 21:01:25.177712 7484 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6d28f095-032b-47d4-b808-1502deeffee5-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 21:01:25.177830 master-0 kubenswrapper[7484]: I0312 21:01:25.177745 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6ab546f-a3fa-44dc-9c83-30a376880f14-client-ca" (OuterVolumeSpecName: "client-ca") pod "b6ab546f-a3fa-44dc-9c83-30a376880f14" (UID: "b6ab546f-a3fa-44dc-9c83-30a376880f14"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:01:25.177885 master-0 kubenswrapper[7484]: I0312 21:01:25.177800 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6ab546f-a3fa-44dc-9c83-30a376880f14-config" (OuterVolumeSpecName: "config") pod "b6ab546f-a3fa-44dc-9c83-30a376880f14" (UID: "b6ab546f-a3fa-44dc-9c83-30a376880f14"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:01:25.179093 master-0 kubenswrapper[7484]: I0312 21:01:25.179045 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6ab546f-a3fa-44dc-9c83-30a376880f14-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b6ab546f-a3fa-44dc-9c83-30a376880f14" (UID: "b6ab546f-a3fa-44dc-9c83-30a376880f14"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:01:25.179777 master-0 kubenswrapper[7484]: I0312 21:01:25.179713 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6ab546f-a3fa-44dc-9c83-30a376880f14-kube-api-access-gwrjr" (OuterVolumeSpecName: "kube-api-access-gwrjr") pod "b6ab546f-a3fa-44dc-9c83-30a376880f14" (UID: "b6ab546f-a3fa-44dc-9c83-30a376880f14"). InnerVolumeSpecName "kube-api-access-gwrjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:01:25.179940 master-0 kubenswrapper[7484]: I0312 21:01:25.179902 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d28f095-032b-47d4-b808-1502deeffee5-kube-api-access-bfkv8" (OuterVolumeSpecName: "kube-api-access-bfkv8") pod "6d28f095-032b-47d4-b808-1502deeffee5" (UID: "6d28f095-032b-47d4-b808-1502deeffee5"). InnerVolumeSpecName "kube-api-access-bfkv8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:01:25.279021 master-0 kubenswrapper[7484]: I0312 21:01:25.278835 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6d28f095-032b-47d4-b808-1502deeffee5-proxy-ca-bundles\") pod \"6d28f095-032b-47d4-b808-1502deeffee5\" (UID: \"6d28f095-032b-47d4-b808-1502deeffee5\") " Mar 12 21:01:25.279249 master-0 kubenswrapper[7484]: I0312 21:01:25.279029 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d28f095-032b-47d4-b808-1502deeffee5-config\") pod \"6d28f095-032b-47d4-b808-1502deeffee5\" (UID: \"6d28f095-032b-47d4-b808-1502deeffee5\") " Mar 12 21:01:25.279343 master-0 kubenswrapper[7484]: I0312 21:01:25.279262 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d28f095-032b-47d4-b808-1502deeffee5-serving-cert\") pod \"6d28f095-032b-47d4-b808-1502deeffee5\" (UID: \"6d28f095-032b-47d4-b808-1502deeffee5\") " Mar 12 21:01:25.279714 master-0 kubenswrapper[7484]: I0312 21:01:25.279634 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d28f095-032b-47d4-b808-1502deeffee5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6d28f095-032b-47d4-b808-1502deeffee5" (UID: "6d28f095-032b-47d4-b808-1502deeffee5"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:01:25.279714 master-0 kubenswrapper[7484]: I0312 21:01:25.279681 7484 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6ab546f-a3fa-44dc-9c83-30a376880f14-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 21:01:25.279918 master-0 kubenswrapper[7484]: I0312 21:01:25.279752 7484 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwrjr\" (UniqueName: \"kubernetes.io/projected/b6ab546f-a3fa-44dc-9c83-30a376880f14-kube-api-access-gwrjr\") on node \"master-0\" DevicePath \"\"" Mar 12 21:01:25.279918 master-0 kubenswrapper[7484]: I0312 21:01:25.279779 7484 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6ab546f-a3fa-44dc-9c83-30a376880f14-config\") on node \"master-0\" DevicePath \"\"" Mar 12 21:01:25.279918 master-0 kubenswrapper[7484]: I0312 21:01:25.279801 7484 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfkv8\" (UniqueName: \"kubernetes.io/projected/6d28f095-032b-47d4-b808-1502deeffee5-kube-api-access-bfkv8\") on node \"master-0\" DevicePath \"\"" Mar 12 21:01:25.279918 master-0 kubenswrapper[7484]: I0312 21:01:25.279858 7484 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6ab546f-a3fa-44dc-9c83-30a376880f14-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 21:01:25.280324 master-0 kubenswrapper[7484]: I0312 21:01:25.280223 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d28f095-032b-47d4-b808-1502deeffee5-config" (OuterVolumeSpecName: "config") pod "6d28f095-032b-47d4-b808-1502deeffee5" (UID: "6d28f095-032b-47d4-b808-1502deeffee5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:01:25.284624 master-0 kubenswrapper[7484]: I0312 21:01:25.284541 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d28f095-032b-47d4-b808-1502deeffee5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6d28f095-032b-47d4-b808-1502deeffee5" (UID: "6d28f095-032b-47d4-b808-1502deeffee5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:01:25.381402 master-0 kubenswrapper[7484]: I0312 21:01:25.381334 7484 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d28f095-032b-47d4-b808-1502deeffee5-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 21:01:25.381402 master-0 kubenswrapper[7484]: I0312 21:01:25.381394 7484 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6d28f095-032b-47d4-b808-1502deeffee5-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 12 21:01:25.381402 master-0 kubenswrapper[7484]: I0312 21:01:25.381418 7484 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d28f095-032b-47d4-b808-1502deeffee5-config\") on node \"master-0\" DevicePath \"\"" Mar 12 21:01:25.417876 master-0 kubenswrapper[7484]: I0312 21:01:25.417759 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:25.417876 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:25.417876 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:25.417876 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:25.418270 master-0 kubenswrapper[7484]: I0312 21:01:25.417918 7484 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:26.002992 master-0 kubenswrapper[7484]: I0312 21:01:26.002760 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-657bd6d846-tffzs" event={"ID":"b6ab546f-a3fa-44dc-9c83-30a376880f14","Type":"ContainerDied","Data":"7829a5473bca9b592f3720bc91d73e59b3fdfa6a34f4ddae3d51a8c7d8ecc8ba"} Mar 12 21:01:26.003517 master-0 kubenswrapper[7484]: I0312 21:01:26.003359 7484 scope.go:117] "RemoveContainer" containerID="000152bdbaa6a39e3cd6f5ab2bc3ec2c13b858332e25d8ee0b163cf10cb5a429" Mar 12 21:01:26.003517 master-0 kubenswrapper[7484]: I0312 21:01:26.002975 7484 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-657bd6d846-tffzs" Mar 12 21:01:26.011893 master-0 kubenswrapper[7484]: I0312 21:01:26.008886 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" event={"ID":"6d28f095-032b-47d4-b808-1502deeffee5","Type":"ContainerDied","Data":"34eb9f39a103adc95e9d813da70dc873fef8ba0c9c9b46fb5eb1ecd38c9046cb"} Mar 12 21:01:26.011893 master-0 kubenswrapper[7484]: I0312 21:01:26.009049 7484 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86" Mar 12 21:01:26.051774 master-0 kubenswrapper[7484]: I0312 21:01:26.051719 7484 scope.go:117] "RemoveContainer" containerID="59b4ecaa3eedf20f90ff4f437a227a7eff0e617269f5faf6807fb533207b0134" Mar 12 21:01:26.058512 master-0 kubenswrapper[7484]: I0312 21:01:26.058474 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-657bd6d846-tffzs"] Mar 12 21:01:26.066990 master-0 kubenswrapper[7484]: I0312 21:01:26.066930 7484 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-657bd6d846-tffzs"] Mar 12 21:01:26.087349 master-0 kubenswrapper[7484]: I0312 21:01:26.087306 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86"] Mar 12 21:01:26.090537 master-0 kubenswrapper[7484]: I0312 21:01:26.090487 7484 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6dfdd9fb89-wjn86"] Mar 12 21:01:26.304954 master-0 kubenswrapper[7484]: I0312 21:01:26.304755 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-759579d7c9-wjl25"] Mar 12 21:01:26.305197 master-0 kubenswrapper[7484]: E0312 21:01:26.305046 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d28f095-032b-47d4-b808-1502deeffee5" containerName="controller-manager" Mar 12 21:01:26.305197 master-0 kubenswrapper[7484]: I0312 21:01:26.305064 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d28f095-032b-47d4-b808-1502deeffee5" containerName="controller-manager" Mar 12 21:01:26.305197 master-0 kubenswrapper[7484]: E0312 21:01:26.305076 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6ab546f-a3fa-44dc-9c83-30a376880f14" containerName="route-controller-manager" Mar 12 21:01:26.305197 
master-0 kubenswrapper[7484]: I0312 21:01:26.305083 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6ab546f-a3fa-44dc-9c83-30a376880f14" containerName="route-controller-manager" Mar 12 21:01:26.305197 master-0 kubenswrapper[7484]: I0312 21:01:26.305196 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6ab546f-a3fa-44dc-9c83-30a376880f14" containerName="route-controller-manager" Mar 12 21:01:26.305499 master-0 kubenswrapper[7484]: I0312 21:01:26.305209 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d28f095-032b-47d4-b808-1502deeffee5" containerName="controller-manager" Mar 12 21:01:26.305499 master-0 kubenswrapper[7484]: I0312 21:01:26.305220 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d28f095-032b-47d4-b808-1502deeffee5" containerName="controller-manager" Mar 12 21:01:26.305731 master-0 kubenswrapper[7484]: I0312 21:01:26.305699 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" Mar 12 21:01:26.308912 master-0 kubenswrapper[7484]: I0312 21:01:26.308839 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 21:01:26.309052 master-0 kubenswrapper[7484]: I0312 21:01:26.308968 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 21:01:26.310764 master-0 kubenswrapper[7484]: I0312 21:01:26.310706 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 21:01:26.310764 master-0 kubenswrapper[7484]: I0312 21:01:26.310740 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 21:01:26.310993 master-0 kubenswrapper[7484]: I0312 21:01:26.310756 7484 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"client-ca" Mar 12 21:01:26.312301 master-0 kubenswrapper[7484]: I0312 21:01:26.312267 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg"] Mar 12 21:01:26.312617 master-0 kubenswrapper[7484]: E0312 21:01:26.312586 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d28f095-032b-47d4-b808-1502deeffee5" containerName="controller-manager" Mar 12 21:01:26.312617 master-0 kubenswrapper[7484]: I0312 21:01:26.312605 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d28f095-032b-47d4-b808-1502deeffee5" containerName="controller-manager" Mar 12 21:01:26.313275 master-0 kubenswrapper[7484]: I0312 21:01:26.313239 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg" Mar 12 21:01:26.314600 master-0 kubenswrapper[7484]: I0312 21:01:26.314560 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 21:01:26.317352 master-0 kubenswrapper[7484]: I0312 21:01:26.317301 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-f29rj" Mar 12 21:01:26.317480 master-0 kubenswrapper[7484]: I0312 21:01:26.317442 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-7gthf" Mar 12 21:01:26.317544 master-0 kubenswrapper[7484]: I0312 21:01:26.317492 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 21:01:26.317604 master-0 kubenswrapper[7484]: I0312 21:01:26.317581 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 21:01:26.317604 master-0 
kubenswrapper[7484]: I0312 21:01:26.317589 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 21:01:26.318034 master-0 kubenswrapper[7484]: I0312 21:01:26.317992 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 21:01:26.333774 master-0 kubenswrapper[7484]: I0312 21:01:26.333701 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg"] Mar 12 21:01:26.336927 master-0 kubenswrapper[7484]: I0312 21:01:26.336871 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 21:01:26.343209 master-0 kubenswrapper[7484]: I0312 21:01:26.343150 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-759579d7c9-wjl25"] Mar 12 21:01:26.413929 master-0 kubenswrapper[7484]: I0312 21:01:26.413679 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:26.413929 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:26.413929 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:26.413929 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:26.413929 master-0 kubenswrapper[7484]: I0312 21:01:26.413886 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:26.498548 master-0 kubenswrapper[7484]: I0312 21:01:26.498464 7484 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-config\") pod \"controller-manager-759579d7c9-wjl25\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" Mar 12 21:01:26.498731 master-0 kubenswrapper[7484]: I0312 21:01:26.498559 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d850d441-7505-4e81-b4cf-6e7a9911ae35-serving-cert\") pod \"route-controller-manager-8467b998d8-l9fvg\" (UID: \"d850d441-7505-4e81-b4cf-6e7a9911ae35\") " pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg" Mar 12 21:01:26.498772 master-0 kubenswrapper[7484]: I0312 21:01:26.498713 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d850d441-7505-4e81-b4cf-6e7a9911ae35-client-ca\") pod \"route-controller-manager-8467b998d8-l9fvg\" (UID: \"d850d441-7505-4e81-b4cf-6e7a9911ae35\") " pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg" Mar 12 21:01:26.498851 master-0 kubenswrapper[7484]: I0312 21:01:26.498781 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-proxy-ca-bundles\") pod \"controller-manager-759579d7c9-wjl25\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" Mar 12 21:01:26.498883 master-0 kubenswrapper[7484]: I0312 21:01:26.498866 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b50a6106-1112-4a4b-b4ae-933879e12110-serving-cert\") pod 
\"controller-manager-759579d7c9-wjl25\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" Mar 12 21:01:26.499044 master-0 kubenswrapper[7484]: I0312 21:01:26.498998 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-client-ca\") pod \"controller-manager-759579d7c9-wjl25\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" Mar 12 21:01:26.499148 master-0 kubenswrapper[7484]: I0312 21:01:26.499117 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcjsq\" (UniqueName: \"kubernetes.io/projected/b50a6106-1112-4a4b-b4ae-933879e12110-kube-api-access-bcjsq\") pod \"controller-manager-759579d7c9-wjl25\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" Mar 12 21:01:26.499300 master-0 kubenswrapper[7484]: I0312 21:01:26.499275 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d850d441-7505-4e81-b4cf-6e7a9911ae35-config\") pod \"route-controller-manager-8467b998d8-l9fvg\" (UID: \"d850d441-7505-4e81-b4cf-6e7a9911ae35\") " pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg" Mar 12 21:01:26.499396 master-0 kubenswrapper[7484]: I0312 21:01:26.499355 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2mk7\" (UniqueName: \"kubernetes.io/projected/d850d441-7505-4e81-b4cf-6e7a9911ae35-kube-api-access-f2mk7\") pod \"route-controller-manager-8467b998d8-l9fvg\" (UID: \"d850d441-7505-4e81-b4cf-6e7a9911ae35\") " 
pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg" Mar 12 21:01:26.601002 master-0 kubenswrapper[7484]: I0312 21:01:26.600889 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcjsq\" (UniqueName: \"kubernetes.io/projected/b50a6106-1112-4a4b-b4ae-933879e12110-kube-api-access-bcjsq\") pod \"controller-manager-759579d7c9-wjl25\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" Mar 12 21:01:26.601002 master-0 kubenswrapper[7484]: I0312 21:01:26.600977 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d850d441-7505-4e81-b4cf-6e7a9911ae35-config\") pod \"route-controller-manager-8467b998d8-l9fvg\" (UID: \"d850d441-7505-4e81-b4cf-6e7a9911ae35\") " pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg" Mar 12 21:01:26.601510 master-0 kubenswrapper[7484]: I0312 21:01:26.601116 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2mk7\" (UniqueName: \"kubernetes.io/projected/d850d441-7505-4e81-b4cf-6e7a9911ae35-kube-api-access-f2mk7\") pod \"route-controller-manager-8467b998d8-l9fvg\" (UID: \"d850d441-7505-4e81-b4cf-6e7a9911ae35\") " pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg" Mar 12 21:01:26.601510 master-0 kubenswrapper[7484]: I0312 21:01:26.601168 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-config\") pod \"controller-manager-759579d7c9-wjl25\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" Mar 12 21:01:26.601510 master-0 kubenswrapper[7484]: I0312 21:01:26.601198 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d850d441-7505-4e81-b4cf-6e7a9911ae35-serving-cert\") pod \"route-controller-manager-8467b998d8-l9fvg\" (UID: \"d850d441-7505-4e81-b4cf-6e7a9911ae35\") " pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg" Mar 12 21:01:26.601510 master-0 kubenswrapper[7484]: I0312 21:01:26.601258 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d850d441-7505-4e81-b4cf-6e7a9911ae35-client-ca\") pod \"route-controller-manager-8467b998d8-l9fvg\" (UID: \"d850d441-7505-4e81-b4cf-6e7a9911ae35\") " pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg" Mar 12 21:01:26.601510 master-0 kubenswrapper[7484]: I0312 21:01:26.601283 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-proxy-ca-bundles\") pod \"controller-manager-759579d7c9-wjl25\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" Mar 12 21:01:26.601510 master-0 kubenswrapper[7484]: I0312 21:01:26.601317 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b50a6106-1112-4a4b-b4ae-933879e12110-serving-cert\") pod \"controller-manager-759579d7c9-wjl25\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" Mar 12 21:01:26.601510 master-0 kubenswrapper[7484]: I0312 21:01:26.601341 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-client-ca\") pod \"controller-manager-759579d7c9-wjl25\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " 
pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" Mar 12 21:01:26.602711 master-0 kubenswrapper[7484]: I0312 21:01:26.602649 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-client-ca\") pod \"controller-manager-759579d7c9-wjl25\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" Mar 12 21:01:26.603032 master-0 kubenswrapper[7484]: I0312 21:01:26.602962 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d850d441-7505-4e81-b4cf-6e7a9911ae35-client-ca\") pod \"route-controller-manager-8467b998d8-l9fvg\" (UID: \"d850d441-7505-4e81-b4cf-6e7a9911ae35\") " pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg" Mar 12 21:01:26.603134 master-0 kubenswrapper[7484]: I0312 21:01:26.602998 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d850d441-7505-4e81-b4cf-6e7a9911ae35-config\") pod \"route-controller-manager-8467b998d8-l9fvg\" (UID: \"d850d441-7505-4e81-b4cf-6e7a9911ae35\") " pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg" Mar 12 21:01:26.603715 master-0 kubenswrapper[7484]: I0312 21:01:26.603658 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-proxy-ca-bundles\") pod \"controller-manager-759579d7c9-wjl25\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" Mar 12 21:01:26.604321 master-0 kubenswrapper[7484]: I0312 21:01:26.604263 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-config\") pod \"controller-manager-759579d7c9-wjl25\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" Mar 12 21:01:26.605790 master-0 kubenswrapper[7484]: I0312 21:01:26.605715 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b50a6106-1112-4a4b-b4ae-933879e12110-serving-cert\") pod \"controller-manager-759579d7c9-wjl25\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" Mar 12 21:01:26.611451 master-0 kubenswrapper[7484]: I0312 21:01:26.611274 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d850d441-7505-4e81-b4cf-6e7a9911ae35-serving-cert\") pod \"route-controller-manager-8467b998d8-l9fvg\" (UID: \"d850d441-7505-4e81-b4cf-6e7a9911ae35\") " pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg" Mar 12 21:01:26.628567 master-0 kubenswrapper[7484]: I0312 21:01:26.628469 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2mk7\" (UniqueName: \"kubernetes.io/projected/d850d441-7505-4e81-b4cf-6e7a9911ae35-kube-api-access-f2mk7\") pod \"route-controller-manager-8467b998d8-l9fvg\" (UID: \"d850d441-7505-4e81-b4cf-6e7a9911ae35\") " pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg" Mar 12 21:01:26.637452 master-0 kubenswrapper[7484]: I0312 21:01:26.637345 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcjsq\" (UniqueName: \"kubernetes.io/projected/b50a6106-1112-4a4b-b4ae-933879e12110-kube-api-access-bcjsq\") pod \"controller-manager-759579d7c9-wjl25\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" Mar 12 
21:01:26.643833 master-0 kubenswrapper[7484]: I0312 21:01:26.643740 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" Mar 12 21:01:26.672851 master-0 kubenswrapper[7484]: I0312 21:01:26.672774 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg" Mar 12 21:01:27.121891 master-0 kubenswrapper[7484]: I0312 21:01:27.121769 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-759579d7c9-wjl25"] Mar 12 21:01:27.134323 master-0 kubenswrapper[7484]: W0312 21:01:27.134045 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb50a6106_1112_4a4b_b4ae_933879e12110.slice/crio-41cf73b537e290a684ef705b807efabb2227fb4edc604539b559ade7d235fcf5 WatchSource:0}: Error finding container 41cf73b537e290a684ef705b807efabb2227fb4edc604539b559ade7d235fcf5: Status 404 returned error can't find the container with id 41cf73b537e290a684ef705b807efabb2227fb4edc604539b559ade7d235fcf5 Mar 12 21:01:27.183517 master-0 kubenswrapper[7484]: I0312 21:01:27.183464 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg"] Mar 12 21:01:27.220270 master-0 kubenswrapper[7484]: W0312 21:01:27.214945 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd850d441_7505_4e81_b4cf_6e7a9911ae35.slice/crio-b9e3c21b0a8fb441272236b28d851d401b15830eadb4fa9c4634ebc7e46a4354 WatchSource:0}: Error finding container b9e3c21b0a8fb441272236b28d851d401b15830eadb4fa9c4634ebc7e46a4354: Status 404 returned error can't find the container with id b9e3c21b0a8fb441272236b28d851d401b15830eadb4fa9c4634ebc7e46a4354 Mar 12 21:01:27.414987 master-0 
kubenswrapper[7484]: I0312 21:01:27.414588 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:27.414987 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:27.414987 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:27.414987 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:27.414987 master-0 kubenswrapper[7484]: I0312 21:01:27.414648 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:27.743775 master-0 kubenswrapper[7484]: I0312 21:01:27.743662 7484 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d28f095-032b-47d4-b808-1502deeffee5" path="/var/lib/kubelet/pods/6d28f095-032b-47d4-b808-1502deeffee5/volumes" Mar 12 21:01:27.744881 master-0 kubenswrapper[7484]: I0312 21:01:27.744499 7484 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6ab546f-a3fa-44dc-9c83-30a376880f14" path="/var/lib/kubelet/pods/b6ab546f-a3fa-44dc-9c83-30a376880f14/volumes" Mar 12 21:01:27.880971 master-0 kubenswrapper[7484]: I0312 21:01:27.880892 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-kbdkh_cfba7834-c034-42c6-a0c2-cfba4a1b1baa/kube-multus-additional-cni-plugins/0.log" Mar 12 21:01:27.880971 master-0 kubenswrapper[7484]: I0312 21:01:27.880970 7484 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-kbdkh" Mar 12 21:01:28.023666 master-0 kubenswrapper[7484]: I0312 21:01:28.023521 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cfba7834-c034-42c6-a0c2-cfba4a1b1baa-cni-sysctl-allowlist\") pod \"cfba7834-c034-42c6-a0c2-cfba4a1b1baa\" (UID: \"cfba7834-c034-42c6-a0c2-cfba4a1b1baa\") " Mar 12 21:01:28.023666 master-0 kubenswrapper[7484]: I0312 21:01:28.023607 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/cfba7834-c034-42c6-a0c2-cfba4a1b1baa-ready\") pod \"cfba7834-c034-42c6-a0c2-cfba4a1b1baa\" (UID: \"cfba7834-c034-42c6-a0c2-cfba4a1b1baa\") " Mar 12 21:01:28.023666 master-0 kubenswrapper[7484]: I0312 21:01:28.023646 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cfba7834-c034-42c6-a0c2-cfba4a1b1baa-tuning-conf-dir\") pod \"cfba7834-c034-42c6-a0c2-cfba4a1b1baa\" (UID: \"cfba7834-c034-42c6-a0c2-cfba4a1b1baa\") " Mar 12 21:01:28.024063 master-0 kubenswrapper[7484]: I0312 21:01:28.023765 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxkfb\" (UniqueName: \"kubernetes.io/projected/cfba7834-c034-42c6-a0c2-cfba4a1b1baa-kube-api-access-xxkfb\") pod \"cfba7834-c034-42c6-a0c2-cfba4a1b1baa\" (UID: \"cfba7834-c034-42c6-a0c2-cfba4a1b1baa\") " Mar 12 21:01:28.024261 master-0 kubenswrapper[7484]: I0312 21:01:28.024176 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfba7834-c034-42c6-a0c2-cfba4a1b1baa-ready" (OuterVolumeSpecName: "ready") pod "cfba7834-c034-42c6-a0c2-cfba4a1b1baa" (UID: "cfba7834-c034-42c6-a0c2-cfba4a1b1baa"). InnerVolumeSpecName "ready". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 21:01:28.024366 master-0 kubenswrapper[7484]: I0312 21:01:28.024250 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfba7834-c034-42c6-a0c2-cfba4a1b1baa-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "cfba7834-c034-42c6-a0c2-cfba4a1b1baa" (UID: "cfba7834-c034-42c6-a0c2-cfba4a1b1baa"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:01:28.024437 master-0 kubenswrapper[7484]: I0312 21:01:28.024379 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfba7834-c034-42c6-a0c2-cfba4a1b1baa-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "cfba7834-c034-42c6-a0c2-cfba4a1b1baa" (UID: "cfba7834-c034-42c6-a0c2-cfba4a1b1baa"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:01:28.028168 master-0 kubenswrapper[7484]: I0312 21:01:28.028117 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfba7834-c034-42c6-a0c2-cfba4a1b1baa-kube-api-access-xxkfb" (OuterVolumeSpecName: "kube-api-access-xxkfb") pod "cfba7834-c034-42c6-a0c2-cfba4a1b1baa" (UID: "cfba7834-c034-42c6-a0c2-cfba4a1b1baa"). InnerVolumeSpecName "kube-api-access-xxkfb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:01:28.031998 master-0 kubenswrapper[7484]: I0312 21:01:28.031941 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" event={"ID":"b50a6106-1112-4a4b-b4ae-933879e12110","Type":"ContainerStarted","Data":"8dc00850a2298439a85382d76a3ffd123f490ec7c79324ad9a9c72fd9448c30b"} Mar 12 21:01:28.031998 master-0 kubenswrapper[7484]: I0312 21:01:28.031980 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" event={"ID":"b50a6106-1112-4a4b-b4ae-933879e12110","Type":"ContainerStarted","Data":"41cf73b537e290a684ef705b807efabb2227fb4edc604539b559ade7d235fcf5"} Mar 12 21:01:28.033693 master-0 kubenswrapper[7484]: I0312 21:01:28.033640 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" Mar 12 21:01:28.037894 master-0 kubenswrapper[7484]: I0312 21:01:28.037833 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-kbdkh_cfba7834-c034-42c6-a0c2-cfba4a1b1baa/kube-multus-additional-cni-plugins/0.log" Mar 12 21:01:28.038010 master-0 kubenswrapper[7484]: I0312 21:01:28.037916 7484 generic.go:334] "Generic (PLEG): container finished" podID="cfba7834-c034-42c6-a0c2-cfba4a1b1baa" containerID="7621f6c0c70b2026fefb33c1c14ed2808f2599f8d2724d54a6033b3fb8757b2b" exitCode=137 Mar 12 21:01:28.038095 master-0 kubenswrapper[7484]: I0312 21:01:28.038017 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-kbdkh" event={"ID":"cfba7834-c034-42c6-a0c2-cfba4a1b1baa","Type":"ContainerDied","Data":"7621f6c0c70b2026fefb33c1c14ed2808f2599f8d2724d54a6033b3fb8757b2b"} Mar 12 21:01:28.038095 master-0 kubenswrapper[7484]: I0312 21:01:28.038030 7484 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-kbdkh" Mar 12 21:01:28.038095 master-0 kubenswrapper[7484]: I0312 21:01:28.038055 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-kbdkh" event={"ID":"cfba7834-c034-42c6-a0c2-cfba4a1b1baa","Type":"ContainerDied","Data":"8eba3a05c5df91e4a5afa89f2996ec27fde1995b86b2affbd16eb620fb03627f"} Mar 12 21:01:28.038326 master-0 kubenswrapper[7484]: I0312 21:01:28.038099 7484 scope.go:117] "RemoveContainer" containerID="7621f6c0c70b2026fefb33c1c14ed2808f2599f8d2724d54a6033b3fb8757b2b" Mar 12 21:01:28.040338 master-0 kubenswrapper[7484]: I0312 21:01:28.040279 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" Mar 12 21:01:28.040676 master-0 kubenswrapper[7484]: I0312 21:01:28.040628 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg" event={"ID":"d850d441-7505-4e81-b4cf-6e7a9911ae35","Type":"ContainerStarted","Data":"2c63b31786f77f93d95548b76a3537893d50bf158aa9c3612aab7c5b5e4a29b8"} Mar 12 21:01:28.040676 master-0 kubenswrapper[7484]: I0312 21:01:28.040658 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg" event={"ID":"d850d441-7505-4e81-b4cf-6e7a9911ae35","Type":"ContainerStarted","Data":"b9e3c21b0a8fb441272236b28d851d401b15830eadb4fa9c4634ebc7e46a4354"} Mar 12 21:01:28.041392 master-0 kubenswrapper[7484]: I0312 21:01:28.041358 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg" Mar 12 21:01:28.050245 master-0 kubenswrapper[7484]: I0312 21:01:28.050120 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg" Mar 12 21:01:28.063358 master-0 kubenswrapper[7484]: I0312 21:01:28.063272 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" podStartSLOduration=4.063253156 podStartE2EDuration="4.063253156s" podCreationTimestamp="2026-03-12 21:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:01:28.059948566 +0000 UTC m=+700.545217398" watchObservedRunningTime="2026-03-12 21:01:28.063253156 +0000 UTC m=+700.548521958" Mar 12 21:01:28.086268 master-0 kubenswrapper[7484]: I0312 21:01:28.086095 7484 scope.go:117] "RemoveContainer" containerID="7621f6c0c70b2026fefb33c1c14ed2808f2599f8d2724d54a6033b3fb8757b2b" Mar 12 21:01:28.086825 master-0 kubenswrapper[7484]: E0312 21:01:28.086696 7484 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7621f6c0c70b2026fefb33c1c14ed2808f2599f8d2724d54a6033b3fb8757b2b\": container with ID starting with 7621f6c0c70b2026fefb33c1c14ed2808f2599f8d2724d54a6033b3fb8757b2b not found: ID does not exist" containerID="7621f6c0c70b2026fefb33c1c14ed2808f2599f8d2724d54a6033b3fb8757b2b" Mar 12 21:01:28.086936 master-0 kubenswrapper[7484]: I0312 21:01:28.086785 7484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7621f6c0c70b2026fefb33c1c14ed2808f2599f8d2724d54a6033b3fb8757b2b"} err="failed to get container status \"7621f6c0c70b2026fefb33c1c14ed2808f2599f8d2724d54a6033b3fb8757b2b\": rpc error: code = NotFound desc = could not find container \"7621f6c0c70b2026fefb33c1c14ed2808f2599f8d2724d54a6033b3fb8757b2b\": container with ID starting with 7621f6c0c70b2026fefb33c1c14ed2808f2599f8d2724d54a6033b3fb8757b2b not found: ID does not exist" Mar 12 21:01:28.125323 master-0 
kubenswrapper[7484]: I0312 21:01:28.125245 7484 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxkfb\" (UniqueName: \"kubernetes.io/projected/cfba7834-c034-42c6-a0c2-cfba4a1b1baa-kube-api-access-xxkfb\") on node \"master-0\" DevicePath \"\"" Mar 12 21:01:28.125323 master-0 kubenswrapper[7484]: I0312 21:01:28.125312 7484 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cfba7834-c034-42c6-a0c2-cfba4a1b1baa-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\"" Mar 12 21:01:28.125323 master-0 kubenswrapper[7484]: I0312 21:01:28.125333 7484 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/cfba7834-c034-42c6-a0c2-cfba4a1b1baa-ready\") on node \"master-0\" DevicePath \"\"" Mar 12 21:01:28.126376 master-0 kubenswrapper[7484]: I0312 21:01:28.125354 7484 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cfba7834-c034-42c6-a0c2-cfba4a1b1baa-tuning-conf-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 21:01:28.149266 master-0 kubenswrapper[7484]: I0312 21:01:28.149172 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg" podStartSLOduration=4.149146222 podStartE2EDuration="4.149146222s" podCreationTimestamp="2026-03-12 21:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:01:28.14112521 +0000 UTC m=+700.626394052" watchObservedRunningTime="2026-03-12 21:01:28.149146222 +0000 UTC m=+700.634415044" Mar 12 21:01:28.172600 master-0 kubenswrapper[7484]: I0312 21:01:28.171243 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-kbdkh"] Mar 12 21:01:28.174886 master-0 kubenswrapper[7484]: I0312 21:01:28.174783 7484 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-kbdkh"]
Mar 12 21:01:28.415235 master-0 kubenswrapper[7484]: I0312 21:01:28.415122 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:01:28.415235 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:01:28.415235 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:01:28.415235 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:01:28.415788 master-0 kubenswrapper[7484]: I0312 21:01:28.415282 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:01:29.414523 master-0 kubenswrapper[7484]: I0312 21:01:29.414416 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:01:29.414523 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:01:29.414523 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:01:29.414523 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:01:29.416022 master-0 kubenswrapper[7484]: I0312 21:01:29.414549 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:01:29.751130 master-0 kubenswrapper[7484]: I0312 21:01:29.750690 7484 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfba7834-c034-42c6-a0c2-cfba4a1b1baa" path="/var/lib/kubelet/pods/cfba7834-c034-42c6-a0c2-cfba4a1b1baa/volumes"
Mar 12 21:01:30.415967 master-0 kubenswrapper[7484]: I0312 21:01:30.415793 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:01:30.415967 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:01:30.415967 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:01:30.415967 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:01:30.416998 master-0 kubenswrapper[7484]: I0312 21:01:30.415988 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:01:31.414701 master-0 kubenswrapper[7484]: I0312 21:01:31.414593 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:01:31.414701 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:01:31.414701 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:01:31.414701 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:01:31.415171 master-0 kubenswrapper[7484]: I0312 21:01:31.414730 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:01:32.413856 master-0 kubenswrapper[7484]: I0312 21:01:32.413752 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:01:32.413856 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:01:32.413856 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:01:32.413856 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:01:32.415319 master-0 kubenswrapper[7484]: I0312 21:01:32.415218 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:01:32.951696 master-0 kubenswrapper[7484]: I0312 21:01:32.951574 7484 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Mar 12 21:01:32.952040 master-0 kubenswrapper[7484]: I0312 21:01:32.951856 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" containerID="cri-o://bb2ea5b36a5078a0f6bfe1f1daf8d78310cc27ab4b84afa4566e18c230d38fb8" gracePeriod=30
Mar 12 21:01:32.957256 master-0 kubenswrapper[7484]: I0312 21:01:32.955798 7484 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Mar 12 21:01:32.957256 master-0 kubenswrapper[7484]: E0312 21:01:32.956304 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler"
Mar 12 21:01:32.957256 master-0 kubenswrapper[7484]: I0312 21:01:32.956338 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler"
Mar 12 21:01:32.957256 master-0 kubenswrapper[7484]: E0312 21:01:32.956363 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfba7834-c034-42c6-a0c2-cfba4a1b1baa" containerName="kube-multus-additional-cni-plugins"
Mar 12 21:01:32.957256 master-0 kubenswrapper[7484]: I0312 21:01:32.957230 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfba7834-c034-42c6-a0c2-cfba4a1b1baa" containerName="kube-multus-additional-cni-plugins"
Mar 12 21:01:32.958957 master-0 kubenswrapper[7484]: E0312 21:01:32.957278 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler"
Mar 12 21:01:32.958957 master-0 kubenswrapper[7484]: I0312 21:01:32.957297 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler"
Mar 12 21:01:32.958957 master-0 kubenswrapper[7484]: I0312 21:01:32.957569 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler"
Mar 12 21:01:32.958957 master-0 kubenswrapper[7484]: I0312 21:01:32.957602 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfba7834-c034-42c6-a0c2-cfba4a1b1baa" containerName="kube-multus-additional-cni-plugins"
Mar 12 21:01:32.958957 master-0 kubenswrapper[7484]: I0312 21:01:32.958196 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler"
Mar 12 21:01:32.961360 master-0 kubenswrapper[7484]: I0312 21:01:32.960879 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 21:01:33.082865 master-0 kubenswrapper[7484]: I0312 21:01:33.082741 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Mar 12 21:01:33.091308 master-0 kubenswrapper[7484]: I0312 21:01:33.091241 7484 generic.go:334] "Generic (PLEG): container finished" podID="a1a56802af72ce1aac6b5077f1695ac0" containerID="bb2ea5b36a5078a0f6bfe1f1daf8d78310cc27ab4b84afa4566e18c230d38fb8" exitCode=0
Mar 12 21:01:33.091308 master-0 kubenswrapper[7484]: I0312 21:01:33.091304 7484 scope.go:117] "RemoveContainer" containerID="dc7d8b29ebb567785e771d22b9996a6a97141570cdafc6702bfef40b35ac45e8"
Mar 12 21:01:33.109731 master-0 kubenswrapper[7484]: I0312 21:01:33.109661 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 21:01:33.109992 master-0 kubenswrapper[7484]: I0312 21:01:33.109937 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 21:01:33.118292 master-0 kubenswrapper[7484]: I0312 21:01:33.118230 7484 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 12 21:01:33.170905 master-0 kubenswrapper[7484]: I0312 21:01:33.170773 7484 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="e079b0a4-274f-4e25-9dca-48e63d6c4aff"
Mar 12 21:01:33.211250 master-0 kubenswrapper[7484]: I0312 21:01:33.211085 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 21:01:33.211454 master-0 kubenswrapper[7484]: I0312 21:01:33.211239 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 21:01:33.211454 master-0 kubenswrapper[7484]: I0312 21:01:33.211313 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 21:01:33.211454 master-0 kubenswrapper[7484]: I0312 21:01:33.211267 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 21:01:33.312266 master-0 kubenswrapper[7484]: I0312 21:01:33.312167 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"a1a56802af72ce1aac6b5077f1695ac0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") "
Mar 12 21:01:33.312266 master-0 kubenswrapper[7484]: I0312 21:01:33.312222 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"a1a56802af72ce1aac6b5077f1695ac0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") "
Mar 12 21:01:33.312614 master-0 kubenswrapper[7484]: I0312 21:01:33.312306 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs" (OuterVolumeSpecName: "logs") pod "a1a56802af72ce1aac6b5077f1695ac0" (UID: "a1a56802af72ce1aac6b5077f1695ac0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 21:01:33.312614 master-0 kubenswrapper[7484]: I0312 21:01:33.312487 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets" (OuterVolumeSpecName: "secrets") pod "a1a56802af72ce1aac6b5077f1695ac0" (UID: "a1a56802af72ce1aac6b5077f1695ac0"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 21:01:33.313182 master-0 kubenswrapper[7484]: I0312 21:01:33.313141 7484 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") on node \"master-0\" DevicePath \"\""
Mar 12 21:01:33.313182 master-0 kubenswrapper[7484]: I0312 21:01:33.313164 7484 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") on node \"master-0\" DevicePath \"\""
Mar 12 21:01:33.376467 master-0 kubenswrapper[7484]: I0312 21:01:33.376379 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 21:01:33.414259 master-0 kubenswrapper[7484]: W0312 21:01:33.414201 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1453f6461bf5d599ad65a4656343ee91.slice/crio-6353db57cf3b1f293a822286253318b9d39e924d2e8facf90ba120b1780e8395 WatchSource:0}: Error finding container 6353db57cf3b1f293a822286253318b9d39e924d2e8facf90ba120b1780e8395: Status 404 returned error can't find the container with id 6353db57cf3b1f293a822286253318b9d39e924d2e8facf90ba120b1780e8395
Mar 12 21:01:33.415503 master-0 kubenswrapper[7484]: I0312 21:01:33.415453 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:01:33.415503 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:01:33.415503 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:01:33.415503 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:01:33.415503 master-0 kubenswrapper[7484]: I0312 21:01:33.415497 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:01:33.748679 master-0 kubenswrapper[7484]: I0312 21:01:33.748611 7484 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1a56802af72ce1aac6b5077f1695ac0" path="/var/lib/kubelet/pods/a1a56802af72ce1aac6b5077f1695ac0/volumes"
Mar 12 21:01:33.749014 master-0 kubenswrapper[7484]: I0312 21:01:33.748973 7484 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID=""
Mar 12 21:01:33.774326 master-0 kubenswrapper[7484]: I0312 21:01:33.774266 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Mar 12 21:01:33.774577 master-0 kubenswrapper[7484]: I0312 21:01:33.774316 7484 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="e079b0a4-274f-4e25-9dca-48e63d6c4aff"
Mar 12 21:01:33.779964 master-0 kubenswrapper[7484]: I0312 21:01:33.779927 7484 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Mar 12 21:01:33.780155 master-0 kubenswrapper[7484]: I0312 21:01:33.780128 7484 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="e079b0a4-274f-4e25-9dca-48e63d6c4aff"
Mar 12 21:01:33.962444 master-0 kubenswrapper[7484]: I0312 21:01:33.962264 7484 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 12 21:01:33.962765 master-0 kubenswrapper[7484]: I0312 21:01:33.962701 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7d54a9c5cfaefbffe1b215272d01bc0c" containerName="kube-controller-manager" containerID="cri-o://4f6de2cd5a1fff08ef55af61c8bc016882b96a14bcce20fcbe68fbc0199f304d" gracePeriod=30
Mar 12 21:01:33.962974 master-0 kubenswrapper[7484]: I0312 21:01:33.962844 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7d54a9c5cfaefbffe1b215272d01bc0c" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://3903035b9e73b841d666d6fc139bd62b961c60d2e83441c115f7bd868868c079" gracePeriod=30
Mar 12 21:01:33.963063 master-0 kubenswrapper[7484]: I0312 21:01:33.962958 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7d54a9c5cfaefbffe1b215272d01bc0c" containerName="cluster-policy-controller" containerID="cri-o://41b66431878d44ab858bd298f2664ca1044c24d2683709493ac4eda068452880" gracePeriod=30
Mar 12 21:01:33.963063 master-0 kubenswrapper[7484]: I0312 21:01:33.962900 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7d54a9c5cfaefbffe1b215272d01bc0c" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://7f2dec97dd1ce529f99f40df66e2e92b6d6da2e679bbce21a7eba2d896a0203a" gracePeriod=30
Mar 12 21:01:33.964323 master-0 kubenswrapper[7484]: I0312 21:01:33.964242 7484 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 12 21:01:33.964613 master-0 kubenswrapper[7484]: E0312 21:01:33.964589 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d54a9c5cfaefbffe1b215272d01bc0c" containerName="kube-controller-manager-recovery-controller"
Mar 12 21:01:33.964613 master-0 kubenswrapper[7484]: I0312 21:01:33.964611 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d54a9c5cfaefbffe1b215272d01bc0c" containerName="kube-controller-manager-recovery-controller"
Mar 12 21:01:33.964782 master-0 kubenswrapper[7484]: E0312 21:01:33.964650 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d54a9c5cfaefbffe1b215272d01bc0c" containerName="kube-controller-manager-cert-syncer"
Mar 12 21:01:33.964782 master-0 kubenswrapper[7484]: I0312 21:01:33.964664 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d54a9c5cfaefbffe1b215272d01bc0c" containerName="kube-controller-manager-cert-syncer"
Mar 12 21:01:33.964782 master-0 kubenswrapper[7484]: E0312 21:01:33.964684 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d54a9c5cfaefbffe1b215272d01bc0c" containerName="cluster-policy-controller"
Mar 12 21:01:33.964782 master-0 kubenswrapper[7484]: I0312 21:01:33.964696 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d54a9c5cfaefbffe1b215272d01bc0c" containerName="cluster-policy-controller"
Mar 12 21:01:33.964782 master-0 kubenswrapper[7484]: E0312 21:01:33.964716 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d54a9c5cfaefbffe1b215272d01bc0c" containerName="kube-controller-manager"
Mar 12 21:01:33.964782 master-0 kubenswrapper[7484]: I0312 21:01:33.964728 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d54a9c5cfaefbffe1b215272d01bc0c" containerName="kube-controller-manager"
Mar 12 21:01:33.965421 master-0 kubenswrapper[7484]: I0312 21:01:33.964949 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d54a9c5cfaefbffe1b215272d01bc0c" containerName="cluster-policy-controller"
Mar 12 21:01:33.965421 master-0 kubenswrapper[7484]: I0312 21:01:33.964970 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d54a9c5cfaefbffe1b215272d01bc0c" containerName="kube-controller-manager-recovery-controller"
Mar 12 21:01:33.965593 master-0 kubenswrapper[7484]: I0312 21:01:33.964993 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d54a9c5cfaefbffe1b215272d01bc0c" containerName="kube-controller-manager-cert-syncer"
Mar 12 21:01:33.965705 master-0 kubenswrapper[7484]: I0312 21:01:33.965579 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d54a9c5cfaefbffe1b215272d01bc0c" containerName="kube-controller-manager"
Mar 12 21:01:34.099776 master-0 kubenswrapper[7484]: I0312 21:01:34.099702 7484 generic.go:334] "Generic (PLEG): container finished" podID="5d919d0a-f152-43da-aec3-080812c0d2d6" containerID="607e25a8dd52c1bd5d656d7e56ad63215f5d6ac7b9578ad98c15a18a5607da53" exitCode=0
Mar 12 21:01:34.100024 master-0 kubenswrapper[7484]: I0312 21:01:34.099794 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"5d919d0a-f152-43da-aec3-080812c0d2d6","Type":"ContainerDied","Data":"607e25a8dd52c1bd5d656d7e56ad63215f5d6ac7b9578ad98c15a18a5607da53"}
Mar 12 21:01:34.101880 master-0 kubenswrapper[7484]: I0312 21:01:34.101831 7484 scope.go:117] "RemoveContainer" containerID="bb2ea5b36a5078a0f6bfe1f1daf8d78310cc27ab4b84afa4566e18c230d38fb8"
Mar 12 21:01:34.101977 master-0 kubenswrapper[7484]: I0312 21:01:34.101837 7484 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 12 21:01:34.104759 master-0 kubenswrapper[7484]: I0312 21:01:34.104657 7484 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="7d54a9c5cfaefbffe1b215272d01bc0c" podUID="7678a2e61b792fe3be55b1c6f67b2aa2"
Mar 12 21:01:34.105460 master-0 kubenswrapper[7484]: I0312 21:01:34.105401 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7d54a9c5cfaefbffe1b215272d01bc0c/kube-controller-manager-cert-syncer/0.log"
Mar 12 21:01:34.107675 master-0 kubenswrapper[7484]: I0312 21:01:34.107596 7484 generic.go:334] "Generic (PLEG): container finished" podID="7d54a9c5cfaefbffe1b215272d01bc0c" containerID="3903035b9e73b841d666d6fc139bd62b961c60d2e83441c115f7bd868868c079" exitCode=0
Mar 12 21:01:34.107675 master-0 kubenswrapper[7484]: I0312 21:01:34.107658 7484 generic.go:334] "Generic (PLEG): container finished" podID="7d54a9c5cfaefbffe1b215272d01bc0c" containerID="7f2dec97dd1ce529f99f40df66e2e92b6d6da2e679bbce21a7eba2d896a0203a" exitCode=2
Mar 12 21:01:34.108002 master-0 kubenswrapper[7484]: I0312 21:01:34.107687 7484 generic.go:334] "Generic (PLEG): container finished" podID="7d54a9c5cfaefbffe1b215272d01bc0c" containerID="41b66431878d44ab858bd298f2664ca1044c24d2683709493ac4eda068452880" exitCode=0
Mar 12 21:01:34.112160 master-0 kubenswrapper[7484]: I0312 21:01:34.111497 7484 generic.go:334] "Generic (PLEG): container finished" podID="1453f6461bf5d599ad65a4656343ee91" containerID="960bfa0d0eebfdde5dda543dfe04a76816e7b84b67e487e2787a47f72cbbf5a5" exitCode=0
Mar 12 21:01:34.112160 master-0 kubenswrapper[7484]: I0312 21:01:34.111576 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerDied","Data":"960bfa0d0eebfdde5dda543dfe04a76816e7b84b67e487e2787a47f72cbbf5a5"}
Mar 12 21:01:34.112160 master-0 kubenswrapper[7484]: I0312 21:01:34.111627 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"6353db57cf3b1f293a822286253318b9d39e924d2e8facf90ba120b1780e8395"}
Mar 12 21:01:34.124154 master-0 kubenswrapper[7484]: I0312 21:01:34.124081 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7678a2e61b792fe3be55b1c6f67b2aa2-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"7678a2e61b792fe3be55b1c6f67b2aa2\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 21:01:34.124478 master-0 kubenswrapper[7484]: I0312 21:01:34.124254 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7678a2e61b792fe3be55b1c6f67b2aa2-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"7678a2e61b792fe3be55b1c6f67b2aa2\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 21:01:34.225584 master-0 kubenswrapper[7484]: I0312 21:01:34.225459 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7678a2e61b792fe3be55b1c6f67b2aa2-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"7678a2e61b792fe3be55b1c6f67b2aa2\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 21:01:34.225752 master-0 kubenswrapper[7484]: I0312 21:01:34.225619 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7678a2e61b792fe3be55b1c6f67b2aa2-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"7678a2e61b792fe3be55b1c6f67b2aa2\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 21:01:34.225752 master-0 kubenswrapper[7484]: I0312 21:01:34.225679 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7678a2e61b792fe3be55b1c6f67b2aa2-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"7678a2e61b792fe3be55b1c6f67b2aa2\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 21:01:34.225872 master-0 kubenswrapper[7484]: I0312 21:01:34.225750 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7678a2e61b792fe3be55b1c6f67b2aa2-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"7678a2e61b792fe3be55b1c6f67b2aa2\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 21:01:34.253532 master-0 kubenswrapper[7484]: I0312 21:01:34.253388 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7d54a9c5cfaefbffe1b215272d01bc0c/kube-controller-manager-cert-syncer/0.log"
Mar 12 21:01:34.254297 master-0 kubenswrapper[7484]: I0312 21:01:34.254258 7484 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 21:01:34.259676 master-0 kubenswrapper[7484]: I0312 21:01:34.258457 7484 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="7d54a9c5cfaefbffe1b215272d01bc0c" podUID="7678a2e61b792fe3be55b1c6f67b2aa2"
Mar 12 21:01:34.414833 master-0 kubenswrapper[7484]: I0312 21:01:34.414721 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:01:34.414833 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:01:34.414833 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:01:34.414833 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:01:34.416133 master-0 kubenswrapper[7484]: I0312 21:01:34.414898 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:01:34.433063 master-0 kubenswrapper[7484]: I0312 21:01:34.432985 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7d54a9c5cfaefbffe1b215272d01bc0c-cert-dir\") pod \"7d54a9c5cfaefbffe1b215272d01bc0c\" (UID: \"7d54a9c5cfaefbffe1b215272d01bc0c\") "
Mar 12 21:01:34.433254 master-0 kubenswrapper[7484]: I0312 21:01:34.433138 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7d54a9c5cfaefbffe1b215272d01bc0c-resource-dir\") pod \"7d54a9c5cfaefbffe1b215272d01bc0c\" (UID: \"7d54a9c5cfaefbffe1b215272d01bc0c\") "
Mar 12 21:01:34.433254 master-0 kubenswrapper[7484]: I0312 21:01:34.433200 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d54a9c5cfaefbffe1b215272d01bc0c-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "7d54a9c5cfaefbffe1b215272d01bc0c" (UID: "7d54a9c5cfaefbffe1b215272d01bc0c"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 21:01:34.433580 master-0 kubenswrapper[7484]: I0312 21:01:34.433331 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d54a9c5cfaefbffe1b215272d01bc0c-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "7d54a9c5cfaefbffe1b215272d01bc0c" (UID: "7d54a9c5cfaefbffe1b215272d01bc0c"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 21:01:34.433800 master-0 kubenswrapper[7484]: I0312 21:01:34.433752 7484 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7d54a9c5cfaefbffe1b215272d01bc0c-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 21:01:34.433890 master-0 kubenswrapper[7484]: I0312 21:01:34.433794 7484 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7d54a9c5cfaefbffe1b215272d01bc0c-cert-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 21:01:35.123696 master-0 kubenswrapper[7484]: I0312 21:01:35.123562 7484 generic.go:334] "Generic (PLEG): container finished" podID="0c6afe7e-de9d-41d3-8e34-9523a46da697" containerID="99189d1662670a8accfafb7d98b62dd2bd3324bd586c75f160c786893e14a45b" exitCode=0
Mar 12 21:01:35.123696 master-0 kubenswrapper[7484]: I0312 21:01:35.123668 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"0c6afe7e-de9d-41d3-8e34-9523a46da697","Type":"ContainerDied","Data":"99189d1662670a8accfafb7d98b62dd2bd3324bd586c75f160c786893e14a45b"}
Mar 12 21:01:35.127253 master-0 kubenswrapper[7484]: I0312 21:01:35.127216 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"a96c0be5068b40870e476008e5515f8b602a69ab55e721b1f3a3f75a76b3a98f"}
Mar 12 21:01:35.127312 master-0 kubenswrapper[7484]: I0312 21:01:35.127261 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"fd67aa7de049fcfa1b2eebc98d90103ccc7e8a5a9b9e08168649d625c912f99e"}
Mar 12 21:01:35.127312 master-0 kubenswrapper[7484]: I0312 21:01:35.127281 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"30bd0d1ae984ab9c16e404ca61f305cdc008b61e24e3fa41bdfaeaa497182321"}
Mar 12 21:01:35.128274 master-0 kubenswrapper[7484]: I0312 21:01:35.128247 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 21:01:35.130544 master-0 kubenswrapper[7484]: I0312 21:01:35.130496 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-8d675b596-98j9w_f8f4400c-474c-480f-b46c-cf7c80555004/multus-admission-controller/0.log"
Mar 12 21:01:35.130705 master-0 kubenswrapper[7484]: I0312 21:01:35.130600 7484 generic.go:334] "Generic (PLEG): container finished" podID="f8f4400c-474c-480f-b46c-cf7c80555004" containerID="f354e2ce5026487f56a9c2480c5f171a3fa137d3fef2ad82947d875089621462" exitCode=137
Mar 12 21:01:35.130758 master-0 kubenswrapper[7484]: I0312 21:01:35.130709 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-98j9w" event={"ID":"f8f4400c-474c-480f-b46c-cf7c80555004","Type":"ContainerDied","Data":"f354e2ce5026487f56a9c2480c5f171a3fa137d3fef2ad82947d875089621462"}
Mar 12 21:01:35.137485 master-0 kubenswrapper[7484]: I0312 21:01:35.137446 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7d54a9c5cfaefbffe1b215272d01bc0c/kube-controller-manager-cert-syncer/0.log"
Mar 12 21:01:35.138999 master-0 kubenswrapper[7484]: I0312 21:01:35.138963 7484 generic.go:334] "Generic (PLEG): container finished" podID="7d54a9c5cfaefbffe1b215272d01bc0c" containerID="4f6de2cd5a1fff08ef55af61c8bc016882b96a14bcce20fcbe68fbc0199f304d" exitCode=0
Mar 12 21:01:35.139259 master-0 kubenswrapper[7484]: I0312 21:01:35.139231 7484 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 21:01:35.141383 master-0 kubenswrapper[7484]: I0312 21:01:35.141343 7484 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f365a407143b07d7ab3bf3145491c06b19450d422583608ac9a40200009f40fa"
Mar 12 21:01:35.164833 master-0 kubenswrapper[7484]: I0312 21:01:35.164375 7484 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="7d54a9c5cfaefbffe1b215272d01bc0c" podUID="7678a2e61b792fe3be55b1c6f67b2aa2"
Mar 12 21:01:35.193469 master-0 kubenswrapper[7484]: I0312 21:01:35.193362 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=2.193332832 podStartE2EDuration="2.193332832s" podCreationTimestamp="2026-03-12 21:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:01:35.186142209 +0000 UTC m=+707.671411071" watchObservedRunningTime="2026-03-12 21:01:35.193332832 +0000 UTC m=+707.678601664"
Mar 12 21:01:35.222065 master-0 kubenswrapper[7484]: I0312 21:01:35.221801 7484 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="7d54a9c5cfaefbffe1b215272d01bc0c" podUID="7678a2e61b792fe3be55b1c6f67b2aa2"
Mar 12 21:01:35.414203 master-0 kubenswrapper[7484]: I0312 21:01:35.414046 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:01:35.414203 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:01:35.414203 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:01:35.414203 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:01:35.414203 master-0 kubenswrapper[7484]: I0312 21:01:35.414104 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:01:35.503857 master-0 kubenswrapper[7484]: I0312 21:01:35.503786 7484 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Mar 12 21:01:35.664509 master-0 kubenswrapper[7484]: I0312 21:01:35.663913 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5d919d0a-f152-43da-aec3-080812c0d2d6-var-lock\") pod \"5d919d0a-f152-43da-aec3-080812c0d2d6\" (UID: \"5d919d0a-f152-43da-aec3-080812c0d2d6\") "
Mar 12 21:01:35.664509 master-0 kubenswrapper[7484]: I0312 21:01:35.664068 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d919d0a-f152-43da-aec3-080812c0d2d6-var-lock" (OuterVolumeSpecName: "var-lock") pod "5d919d0a-f152-43da-aec3-080812c0d2d6" (UID: "5d919d0a-f152-43da-aec3-080812c0d2d6"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 21:01:35.664509 master-0 kubenswrapper[7484]: I0312 21:01:35.664095 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5d919d0a-f152-43da-aec3-080812c0d2d6-kubelet-dir\") pod \"5d919d0a-f152-43da-aec3-080812c0d2d6\" (UID: \"5d919d0a-f152-43da-aec3-080812c0d2d6\") "
Mar 12 21:01:35.664509 master-0 kubenswrapper[7484]: I0312 21:01:35.664176 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d919d0a-f152-43da-aec3-080812c0d2d6-kube-api-access\") pod \"5d919d0a-f152-43da-aec3-080812c0d2d6\" (UID: \"5d919d0a-f152-43da-aec3-080812c0d2d6\") "
Mar 12 21:01:35.664509 master-0 kubenswrapper[7484]: I0312 21:01:35.664235 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d919d0a-f152-43da-aec3-080812c0d2d6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5d919d0a-f152-43da-aec3-080812c0d2d6" (UID: "5d919d0a-f152-43da-aec3-080812c0d2d6"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:01:35.664915 master-0 kubenswrapper[7484]: I0312 21:01:35.664571 7484 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5d919d0a-f152-43da-aec3-080812c0d2d6-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 21:01:35.664915 master-0 kubenswrapper[7484]: I0312 21:01:35.664597 7484 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5d919d0a-f152-43da-aec3-080812c0d2d6-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 21:01:35.669514 master-0 kubenswrapper[7484]: I0312 21:01:35.669476 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d919d0a-f152-43da-aec3-080812c0d2d6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5d919d0a-f152-43da-aec3-080812c0d2d6" (UID: "5d919d0a-f152-43da-aec3-080812c0d2d6"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:01:35.702367 master-0 kubenswrapper[7484]: I0312 21:01:35.702299 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-8d675b596-98j9w_f8f4400c-474c-480f-b46c-cf7c80555004/multus-admission-controller/0.log" Mar 12 21:01:35.702655 master-0 kubenswrapper[7484]: I0312 21:01:35.702406 7484 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-98j9w" Mar 12 21:01:35.746176 master-0 kubenswrapper[7484]: I0312 21:01:35.746088 7484 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d54a9c5cfaefbffe1b215272d01bc0c" path="/var/lib/kubelet/pods/7d54a9c5cfaefbffe1b215272d01bc0c/volumes" Mar 12 21:01:35.765575 master-0 kubenswrapper[7484]: I0312 21:01:35.765501 7484 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d919d0a-f152-43da-aec3-080812c0d2d6-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 21:01:35.867214 master-0 kubenswrapper[7484]: I0312 21:01:35.867129 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjh5f\" (UniqueName: \"kubernetes.io/projected/f8f4400c-474c-480f-b46c-cf7c80555004-kube-api-access-vjh5f\") pod \"f8f4400c-474c-480f-b46c-cf7c80555004\" (UID: \"f8f4400c-474c-480f-b46c-cf7c80555004\") " Mar 12 21:01:35.867530 master-0 kubenswrapper[7484]: I0312 21:01:35.867262 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs\") pod \"f8f4400c-474c-480f-b46c-cf7c80555004\" (UID: \"f8f4400c-474c-480f-b46c-cf7c80555004\") " Mar 12 21:01:35.872008 master-0 kubenswrapper[7484]: I0312 21:01:35.871925 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "f8f4400c-474c-480f-b46c-cf7c80555004" (UID: "f8f4400c-474c-480f-b46c-cf7c80555004"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:01:35.872724 master-0 kubenswrapper[7484]: I0312 21:01:35.872648 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8f4400c-474c-480f-b46c-cf7c80555004-kube-api-access-vjh5f" (OuterVolumeSpecName: "kube-api-access-vjh5f") pod "f8f4400c-474c-480f-b46c-cf7c80555004" (UID: "f8f4400c-474c-480f-b46c-cf7c80555004"). InnerVolumeSpecName "kube-api-access-vjh5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:01:35.969460 master-0 kubenswrapper[7484]: I0312 21:01:35.969303 7484 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f8f4400c-474c-480f-b46c-cf7c80555004-webhook-certs\") on node \"master-0\" DevicePath \"\"" Mar 12 21:01:35.969460 master-0 kubenswrapper[7484]: I0312 21:01:35.969358 7484 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjh5f\" (UniqueName: \"kubernetes.io/projected/f8f4400c-474c-480f-b46c-cf7c80555004-kube-api-access-vjh5f\") on node \"master-0\" DevicePath \"\"" Mar 12 21:01:36.150629 master-0 kubenswrapper[7484]: I0312 21:01:36.150537 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-8d675b596-98j9w_f8f4400c-474c-480f-b46c-cf7c80555004/multus-admission-controller/0.log" Mar 12 21:01:36.150978 master-0 kubenswrapper[7484]: I0312 21:01:36.150708 7484 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-98j9w" Mar 12 21:01:36.151191 master-0 kubenswrapper[7484]: I0312 21:01:36.151085 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-98j9w" event={"ID":"f8f4400c-474c-480f-b46c-cf7c80555004","Type":"ContainerDied","Data":"6f74a5945277c25b1d774a22e71b44578b23381c826557245d1753c0354bdea6"} Mar 12 21:01:36.151336 master-0 kubenswrapper[7484]: I0312 21:01:36.151202 7484 scope.go:117] "RemoveContainer" containerID="5d43c250b5491225f8ee7e26898d34d724cb99521d528bed5880450148f60c8b" Mar 12 21:01:36.156081 master-0 kubenswrapper[7484]: I0312 21:01:36.155999 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"5d919d0a-f152-43da-aec3-080812c0d2d6","Type":"ContainerDied","Data":"ae91d361ecd061c9426dd23452fb232725e7fad18fb34be8d38d0dd0d590d9fe"} Mar 12 21:01:36.156081 master-0 kubenswrapper[7484]: I0312 21:01:36.156064 7484 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae91d361ecd061c9426dd23452fb232725e7fad18fb34be8d38d0dd0d590d9fe" Mar 12 21:01:36.156523 master-0 kubenswrapper[7484]: I0312 21:01:36.156435 7484 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 12 21:01:36.180491 master-0 kubenswrapper[7484]: I0312 21:01:36.180398 7484 scope.go:117] "RemoveContainer" containerID="f354e2ce5026487f56a9c2480c5f171a3fa137d3fef2ad82947d875089621462" Mar 12 21:01:36.220956 master-0 kubenswrapper[7484]: I0312 21:01:36.220781 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-98j9w"] Mar 12 21:01:36.227263 master-0 kubenswrapper[7484]: I0312 21:01:36.227200 7484 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-98j9w"] Mar 12 21:01:36.416054 master-0 kubenswrapper[7484]: I0312 21:01:36.415975 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:36.416054 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:36.416054 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:36.416054 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:36.416474 master-0 kubenswrapper[7484]: I0312 21:01:36.416064 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:36.554568 master-0 kubenswrapper[7484]: I0312 21:01:36.554494 7484 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 12 21:01:36.579657 master-0 kubenswrapper[7484]: I0312 21:01:36.579569 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0c6afe7e-de9d-41d3-8e34-9523a46da697-var-lock\") pod \"0c6afe7e-de9d-41d3-8e34-9523a46da697\" (UID: \"0c6afe7e-de9d-41d3-8e34-9523a46da697\") " Mar 12 21:01:36.579951 master-0 kubenswrapper[7484]: I0312 21:01:36.579914 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c6afe7e-de9d-41d3-8e34-9523a46da697-var-lock" (OuterVolumeSpecName: "var-lock") pod "0c6afe7e-de9d-41d3-8e34-9523a46da697" (UID: "0c6afe7e-de9d-41d3-8e34-9523a46da697"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:01:36.680787 master-0 kubenswrapper[7484]: I0312 21:01:36.680695 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c6afe7e-de9d-41d3-8e34-9523a46da697-kubelet-dir\") pod \"0c6afe7e-de9d-41d3-8e34-9523a46da697\" (UID: \"0c6afe7e-de9d-41d3-8e34-9523a46da697\") " Mar 12 21:01:36.681127 master-0 kubenswrapper[7484]: I0312 21:01:36.680878 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c6afe7e-de9d-41d3-8e34-9523a46da697-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0c6afe7e-de9d-41d3-8e34-9523a46da697" (UID: "0c6afe7e-de9d-41d3-8e34-9523a46da697"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:01:36.681127 master-0 kubenswrapper[7484]: I0312 21:01:36.680898 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c6afe7e-de9d-41d3-8e34-9523a46da697-kube-api-access\") pod \"0c6afe7e-de9d-41d3-8e34-9523a46da697\" (UID: \"0c6afe7e-de9d-41d3-8e34-9523a46da697\") " Mar 12 21:01:36.681536 master-0 kubenswrapper[7484]: I0312 21:01:36.681478 7484 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c6afe7e-de9d-41d3-8e34-9523a46da697-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 21:01:36.681536 master-0 kubenswrapper[7484]: I0312 21:01:36.681508 7484 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0c6afe7e-de9d-41d3-8e34-9523a46da697-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 21:01:36.685484 master-0 kubenswrapper[7484]: I0312 21:01:36.685419 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c6afe7e-de9d-41d3-8e34-9523a46da697-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0c6afe7e-de9d-41d3-8e34-9523a46da697" (UID: "0c6afe7e-de9d-41d3-8e34-9523a46da697"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:01:36.786043 master-0 kubenswrapper[7484]: I0312 21:01:36.783094 7484 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c6afe7e-de9d-41d3-8e34-9523a46da697-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 21:01:37.180581 master-0 kubenswrapper[7484]: I0312 21:01:37.180497 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"0c6afe7e-de9d-41d3-8e34-9523a46da697","Type":"ContainerDied","Data":"28c9b7d298a5e9f87b7b79f9bc1b7d09be186a38e9c6487e815fa087b10965ba"} Mar 12 21:01:37.180581 master-0 kubenswrapper[7484]: I0312 21:01:37.180551 7484 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28c9b7d298a5e9f87b7b79f9bc1b7d09be186a38e9c6487e815fa087b10965ba" Mar 12 21:01:37.180892 master-0 kubenswrapper[7484]: I0312 21:01:37.180667 7484 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 12 21:01:37.414309 master-0 kubenswrapper[7484]: I0312 21:01:37.414131 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:37.414309 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:37.414309 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:37.414309 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:37.414797 master-0 kubenswrapper[7484]: I0312 21:01:37.414347 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:37.747379 master-0 kubenswrapper[7484]: I0312 21:01:37.747324 7484 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8f4400c-474c-480f-b46c-cf7c80555004" path="/var/lib/kubelet/pods/f8f4400c-474c-480f-b46c-cf7c80555004/volumes" Mar 12 21:01:38.241388 master-0 kubenswrapper[7484]: I0312 21:01:38.241297 7484 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 12 21:01:38.241942 master-0 kubenswrapper[7484]: I0312 21:01:38.241888 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" containerID="cri-o://908a8cc2f3bc351202dab9b410d70888335d0f357ad01e6cdd7f4cdf90adf703" gracePeriod=30 Mar 12 21:01:38.242055 master-0 kubenswrapper[7484]: I0312 21:01:38.241993 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" 
containerName="etcd-rev" containerID="cri-o://d69ef5a9682c286db49162800e6bbc8a372fbb8bc9c781af56f0f61a5109903e" gracePeriod=30 Mar 12 21:01:38.242129 master-0 kubenswrapper[7484]: I0312 21:01:38.242047 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" containerID="cri-o://7fd269d6a8eb44e1a4790cb72966b4a0534f7af1aa471591ccb71a946b3ca40d" gracePeriod=30 Mar 12 21:01:38.242129 master-0 kubenswrapper[7484]: I0312 21:01:38.242117 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" containerID="cri-o://f73db7800402cb358e0d79e90095c60120f55db64b8d66594c7d386be4916a3c" gracePeriod=30 Mar 12 21:01:38.242260 master-0 kubenswrapper[7484]: I0312 21:01:38.242170 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" containerID="cri-o://0b1ad30ea0b6c41c6f1eb7bd3de3eda3e9f404e7c25c08138d7b4b1893fec5eb" gracePeriod=30 Mar 12 21:01:38.246278 master-0 kubenswrapper[7484]: I0312 21:01:38.246226 7484 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 12 21:01:38.249778 master-0 kubenswrapper[7484]: E0312 21:01:38.246996 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" Mar 12 21:01:38.249778 master-0 kubenswrapper[7484]: I0312 21:01:38.247036 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" Mar 12 21:01:38.249778 master-0 kubenswrapper[7484]: E0312 21:01:38.247060 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="setup" Mar 12 21:01:38.249778 master-0 
kubenswrapper[7484]: I0312 21:01:38.247078 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="setup" Mar 12 21:01:38.265044 master-0 kubenswrapper[7484]: E0312 21:01:38.264978 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8f4400c-474c-480f-b46c-cf7c80555004" containerName="kube-rbac-proxy" Mar 12 21:01:38.265044 master-0 kubenswrapper[7484]: I0312 21:01:38.265032 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8f4400c-474c-480f-b46c-cf7c80555004" containerName="kube-rbac-proxy" Mar 12 21:01:38.265347 master-0 kubenswrapper[7484]: E0312 21:01:38.265062 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8f4400c-474c-480f-b46c-cf7c80555004" containerName="multus-admission-controller" Mar 12 21:01:38.265347 master-0 kubenswrapper[7484]: I0312 21:01:38.265075 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8f4400c-474c-480f-b46c-cf7c80555004" containerName="multus-admission-controller" Mar 12 21:01:38.265347 master-0 kubenswrapper[7484]: E0312 21:01:38.265109 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" Mar 12 21:01:38.265347 master-0 kubenswrapper[7484]: I0312 21:01:38.265123 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" Mar 12 21:01:38.265347 master-0 kubenswrapper[7484]: E0312 21:01:38.265143 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c6afe7e-de9d-41d3-8e34-9523a46da697" containerName="installer" Mar 12 21:01:38.265347 master-0 kubenswrapper[7484]: I0312 21:01:38.265156 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c6afe7e-de9d-41d3-8e34-9523a46da697" containerName="installer" Mar 12 21:01:38.265347 master-0 kubenswrapper[7484]: E0312 21:01:38.265180 7484 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5d919d0a-f152-43da-aec3-080812c0d2d6" containerName="installer" Mar 12 21:01:38.265347 master-0 kubenswrapper[7484]: I0312 21:01:38.265192 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d919d0a-f152-43da-aec3-080812c0d2d6" containerName="installer" Mar 12 21:01:38.265347 master-0 kubenswrapper[7484]: E0312 21:01:38.265216 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" Mar 12 21:01:38.265347 master-0 kubenswrapper[7484]: I0312 21:01:38.265229 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" Mar 12 21:01:38.265347 master-0 kubenswrapper[7484]: E0312 21:01:38.265252 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" Mar 12 21:01:38.265347 master-0 kubenswrapper[7484]: I0312 21:01:38.265264 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" Mar 12 21:01:38.265347 master-0 kubenswrapper[7484]: E0312 21:01:38.265283 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-resources-copy" Mar 12 21:01:38.265347 master-0 kubenswrapper[7484]: I0312 21:01:38.265296 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-resources-copy" Mar 12 21:01:38.265347 master-0 kubenswrapper[7484]: E0312 21:01:38.265317 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev" Mar 12 21:01:38.265347 master-0 kubenswrapper[7484]: I0312 21:01:38.265332 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev" Mar 12 21:01:38.265347 master-0 kubenswrapper[7484]: E0312 21:01:38.265347 7484 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-ensure-env-vars" Mar 12 21:01:38.265347 master-0 kubenswrapper[7484]: I0312 21:01:38.265360 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-ensure-env-vars" Mar 12 21:01:38.266449 master-0 kubenswrapper[7484]: I0312 21:01:38.265644 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8f4400c-474c-480f-b46c-cf7c80555004" containerName="multus-admission-controller" Mar 12 21:01:38.266449 master-0 kubenswrapper[7484]: I0312 21:01:38.265675 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" Mar 12 21:01:38.266449 master-0 kubenswrapper[7484]: I0312 21:01:38.265691 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" Mar 12 21:01:38.266449 master-0 kubenswrapper[7484]: I0312 21:01:38.265706 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" Mar 12 21:01:38.266449 master-0 kubenswrapper[7484]: I0312 21:01:38.265722 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d919d0a-f152-43da-aec3-080812c0d2d6" containerName="installer" Mar 12 21:01:38.266449 master-0 kubenswrapper[7484]: I0312 21:01:38.265751 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c6afe7e-de9d-41d3-8e34-9523a46da697" containerName="installer" Mar 12 21:01:38.266449 master-0 kubenswrapper[7484]: I0312 21:01:38.265768 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8f4400c-474c-480f-b46c-cf7c80555004" containerName="kube-rbac-proxy" Mar 12 21:01:38.266449 master-0 kubenswrapper[7484]: I0312 21:01:38.265785 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" 
containerName="etcd-rev" Mar 12 21:01:38.266449 master-0 kubenswrapper[7484]: I0312 21:01:38.265802 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" Mar 12 21:01:38.407194 master-0 kubenswrapper[7484]: I0312 21:01:38.407084 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 21:01:38.407393 master-0 kubenswrapper[7484]: I0312 21:01:38.407212 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 21:01:38.407393 master-0 kubenswrapper[7484]: I0312 21:01:38.407289 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 21:01:38.407604 master-0 kubenswrapper[7484]: I0312 21:01:38.407440 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 21:01:38.407604 master-0 kubenswrapper[7484]: I0312 21:01:38.407515 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 21:01:38.407604 master-0 kubenswrapper[7484]: I0312 21:01:38.407538 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 21:01:38.415790 master-0 kubenswrapper[7484]: I0312 21:01:38.415719 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:38.415790 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:38.415790 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:38.415790 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:38.416103 master-0 kubenswrapper[7484]: I0312 21:01:38.415839 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:38.508681 master-0 kubenswrapper[7484]: I0312 21:01:38.508504 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 21:01:38.508681 master-0 kubenswrapper[7484]: I0312 21:01:38.508642 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 21:01:38.508972 master-0 kubenswrapper[7484]: I0312 21:01:38.508689 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 21:01:38.508972 master-0 kubenswrapper[7484]: I0312 21:01:38.508709 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 21:01:38.508972 master-0 kubenswrapper[7484]: I0312 21:01:38.508774 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 21:01:38.508972 master-0 kubenswrapper[7484]: I0312 21:01:38.508902 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 21:01:38.509130 master-0 kubenswrapper[7484]: I0312 21:01:38.508978 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 21:01:38.509130 master-0 kubenswrapper[7484]: I0312 
21:01:38.508937 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 21:01:38.509130 master-0 kubenswrapper[7484]: I0312 21:01:38.509083 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 21:01:38.509240 master-0 kubenswrapper[7484]: I0312 21:01:38.509150 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 21:01:38.509240 master-0 kubenswrapper[7484]: I0312 21:01:38.509211 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 21:01:38.509318 master-0 kubenswrapper[7484]: I0312 21:01:38.509239 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 21:01:39.206486 master-0 kubenswrapper[7484]: I0312 21:01:39.206391 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log" Mar 12 21:01:39.208080 
master-0 kubenswrapper[7484]: I0312 21:01:39.208022 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log" Mar 12 21:01:39.211308 master-0 kubenswrapper[7484]: I0312 21:01:39.211240 7484 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="d69ef5a9682c286db49162800e6bbc8a372fbb8bc9c781af56f0f61a5109903e" exitCode=2 Mar 12 21:01:39.211308 master-0 kubenswrapper[7484]: I0312 21:01:39.211291 7484 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="7fd269d6a8eb44e1a4790cb72966b4a0534f7af1aa471591ccb71a946b3ca40d" exitCode=0 Mar 12 21:01:39.211489 master-0 kubenswrapper[7484]: I0312 21:01:39.211313 7484 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="f73db7800402cb358e0d79e90095c60120f55db64b8d66594c7d386be4916a3c" exitCode=2 Mar 12 21:01:39.414841 master-0 kubenswrapper[7484]: I0312 21:01:39.414685 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:39.414841 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:39.414841 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:39.414841 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:39.414841 master-0 kubenswrapper[7484]: I0312 21:01:39.414791 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:40.415367 master-0 kubenswrapper[7484]: I0312 21:01:40.415226 7484 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:40.415367 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:40.415367 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:40.415367 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:40.416896 master-0 kubenswrapper[7484]: I0312 21:01:40.415381 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:41.415082 master-0 kubenswrapper[7484]: I0312 21:01:41.414962 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:41.415082 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:41.415082 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:41.415082 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:41.415082 master-0 kubenswrapper[7484]: I0312 21:01:41.415079 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:42.414658 master-0 kubenswrapper[7484]: I0312 21:01:42.414571 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 
21:01:42.414658 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:42.414658 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:42.414658 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:42.415226 master-0 kubenswrapper[7484]: I0312 21:01:42.414663 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:43.414163 master-0 kubenswrapper[7484]: I0312 21:01:43.414093 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:43.414163 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:43.414163 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:43.414163 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:43.415574 master-0 kubenswrapper[7484]: I0312 21:01:43.414175 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:44.414207 master-0 kubenswrapper[7484]: I0312 21:01:44.414130 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:44.414207 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:44.414207 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:44.414207 master-0 kubenswrapper[7484]: healthz 
check failed Mar 12 21:01:44.415187 master-0 kubenswrapper[7484]: I0312 21:01:44.414219 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:45.414060 master-0 kubenswrapper[7484]: I0312 21:01:45.413986 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:45.414060 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:45.414060 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:45.414060 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:45.415022 master-0 kubenswrapper[7484]: I0312 21:01:45.414076 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:45.738378 master-0 kubenswrapper[7484]: I0312 21:01:45.738221 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:01:45.766778 master-0 kubenswrapper[7484]: I0312 21:01:45.766700 7484 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="d635a2c1-7d6b-46e4-9267-3313bbe06e35" Mar 12 21:01:45.766778 master-0 kubenswrapper[7484]: I0312 21:01:45.766761 7484 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="d635a2c1-7d6b-46e4-9267-3313bbe06e35" Mar 12 21:01:46.414830 master-0 kubenswrapper[7484]: I0312 21:01:46.414702 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:46.414830 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:46.414830 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:46.414830 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:46.415991 master-0 kubenswrapper[7484]: I0312 21:01:46.414843 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:47.414662 master-0 kubenswrapper[7484]: I0312 21:01:47.414550 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:47.414662 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:47.414662 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:47.414662 master-0 
kubenswrapper[7484]: healthz check failed Mar 12 21:01:47.415749 master-0 kubenswrapper[7484]: I0312 21:01:47.414667 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:48.414660 master-0 kubenswrapper[7484]: I0312 21:01:48.414574 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:48.414660 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:48.414660 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:48.414660 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:48.415676 master-0 kubenswrapper[7484]: I0312 21:01:48.414661 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:49.415098 master-0 kubenswrapper[7484]: I0312 21:01:49.414985 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:49.415098 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:49.415098 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:49.415098 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:49.415769 master-0 kubenswrapper[7484]: I0312 21:01:49.415180 7484 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:50.414636 master-0 kubenswrapper[7484]: I0312 21:01:50.414553 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:50.414636 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:50.414636 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:50.414636 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:50.414636 master-0 kubenswrapper[7484]: I0312 21:01:50.414657 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:51.416900 master-0 kubenswrapper[7484]: I0312 21:01:51.416789 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:51.416900 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:51.416900 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:51.416900 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:51.418023 master-0 kubenswrapper[7484]: I0312 21:01:51.416912 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:52.414481 
master-0 kubenswrapper[7484]: I0312 21:01:52.414406 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:52.414481 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:52.414481 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:52.414481 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:52.414792 master-0 kubenswrapper[7484]: I0312 21:01:52.414501 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:53.333096 master-0 kubenswrapper[7484]: I0312 21:01:53.333002 7484 generic.go:334] "Generic (PLEG): container finished" podID="237e5a97-fb81-4609-8538-c55a8e2db411" containerID="9635b8a1063656701a872bccc0f8a9cd07d562ac36399e3e09153a9c74ff44b7" exitCode=0 Mar 12 21:01:53.333096 master-0 kubenswrapper[7484]: I0312 21:01:53.333073 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"237e5a97-fb81-4609-8538-c55a8e2db411","Type":"ContainerDied","Data":"9635b8a1063656701a872bccc0f8a9cd07d562ac36399e3e09153a9c74ff44b7"} Mar 12 21:01:53.415036 master-0 kubenswrapper[7484]: I0312 21:01:53.414916 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:53.415036 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:53.415036 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:53.415036 master-0 kubenswrapper[7484]: 
healthz check failed Mar 12 21:01:53.415491 master-0 kubenswrapper[7484]: I0312 21:01:53.415041 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:54.415044 master-0 kubenswrapper[7484]: I0312 21:01:54.414472 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:54.415044 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:54.415044 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:54.415044 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:54.416017 master-0 kubenswrapper[7484]: I0312 21:01:54.414544 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:54.756988 master-0 kubenswrapper[7484]: I0312 21:01:54.756895 7484 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 12 21:01:54.764344 master-0 kubenswrapper[7484]: I0312 21:01:54.764301 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/237e5a97-fb81-4609-8538-c55a8e2db411-var-lock\") pod \"237e5a97-fb81-4609-8538-c55a8e2db411\" (UID: \"237e5a97-fb81-4609-8538-c55a8e2db411\") " Mar 12 21:01:54.764472 master-0 kubenswrapper[7484]: I0312 21:01:54.764401 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/237e5a97-fb81-4609-8538-c55a8e2db411-var-lock" (OuterVolumeSpecName: "var-lock") pod "237e5a97-fb81-4609-8538-c55a8e2db411" (UID: "237e5a97-fb81-4609-8538-c55a8e2db411"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:01:54.764472 master-0 kubenswrapper[7484]: I0312 21:01:54.764425 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/237e5a97-fb81-4609-8538-c55a8e2db411-kubelet-dir\") pod \"237e5a97-fb81-4609-8538-c55a8e2db411\" (UID: \"237e5a97-fb81-4609-8538-c55a8e2db411\") " Mar 12 21:01:54.764714 master-0 kubenswrapper[7484]: I0312 21:01:54.764475 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/237e5a97-fb81-4609-8538-c55a8e2db411-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "237e5a97-fb81-4609-8538-c55a8e2db411" (UID: "237e5a97-fb81-4609-8538-c55a8e2db411"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:01:54.764714 master-0 kubenswrapper[7484]: I0312 21:01:54.764592 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/237e5a97-fb81-4609-8538-c55a8e2db411-kube-api-access\") pod \"237e5a97-fb81-4609-8538-c55a8e2db411\" (UID: \"237e5a97-fb81-4609-8538-c55a8e2db411\") " Mar 12 21:01:54.765462 master-0 kubenswrapper[7484]: I0312 21:01:54.765417 7484 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/237e5a97-fb81-4609-8538-c55a8e2db411-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 21:01:54.765462 master-0 kubenswrapper[7484]: I0312 21:01:54.765458 7484 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/237e5a97-fb81-4609-8538-c55a8e2db411-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 21:01:54.768861 master-0 kubenswrapper[7484]: I0312 21:01:54.768791 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/237e5a97-fb81-4609-8538-c55a8e2db411-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "237e5a97-fb81-4609-8538-c55a8e2db411" (UID: "237e5a97-fb81-4609-8538-c55a8e2db411"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:01:54.866830 master-0 kubenswrapper[7484]: I0312 21:01:54.866760 7484 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/237e5a97-fb81-4609-8538-c55a8e2db411-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 21:01:55.352858 master-0 kubenswrapper[7484]: I0312 21:01:55.352745 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"237e5a97-fb81-4609-8538-c55a8e2db411","Type":"ContainerDied","Data":"3eb5ded3b742edb3299ed1f6753980b1fd1f4f50b6f5c825c2828acef79cb23f"} Mar 12 21:01:55.352858 master-0 kubenswrapper[7484]: I0312 21:01:55.352842 7484 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3eb5ded3b742edb3299ed1f6753980b1fd1f4f50b6f5c825c2828acef79cb23f" Mar 12 21:01:55.353218 master-0 kubenswrapper[7484]: I0312 21:01:55.352843 7484 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 12 21:01:55.414568 master-0 kubenswrapper[7484]: I0312 21:01:55.414507 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:55.414568 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:55.414568 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:55.414568 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:55.415042 master-0 kubenswrapper[7484]: I0312 21:01:55.414582 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:55.972183 master-0 kubenswrapper[7484]: E0312 21:01:55.971726 7484 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" Mar 12 21:01:56.414310 master-0 kubenswrapper[7484]: I0312 21:01:56.414229 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:56.414310 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:56.414310 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:56.414310 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:56.414705 master-0 kubenswrapper[7484]: I0312 21:01:56.414311 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" 
podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:57.414335 master-0 kubenswrapper[7484]: I0312 21:01:57.414227 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:57.414335 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:57.414335 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:57.414335 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:57.415583 master-0 kubenswrapper[7484]: I0312 21:01:57.414353 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:58.414726 master-0 kubenswrapper[7484]: I0312 21:01:58.414608 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:58.414726 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:58.414726 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:58.414726 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:58.414726 master-0 kubenswrapper[7484]: I0312 21:01:58.414709 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:59.415319 master-0 kubenswrapper[7484]: I0312 21:01:59.415193 7484 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:01:59.415319 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:01:59.415319 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:01:59.415319 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:01:59.416424 master-0 kubenswrapper[7484]: I0312 21:01:59.415319 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:01:59.416424 master-0 kubenswrapper[7484]: I0312 21:01:59.415409 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" Mar 12 21:01:59.416618 master-0 kubenswrapper[7484]: I0312 21:01:59.416543 7484 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"1acfa9d2750b23b6fbd73dc65a33ac93a90684811b79c1a559d68754a4e63f2b"} pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" containerMessage="Container router failed startup probe, will be restarted" Mar 12 21:01:59.416705 master-0 kubenswrapper[7484]: I0312 21:01:59.416632 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" containerID="cri-o://1acfa9d2750b23b6fbd73dc65a33ac93a90684811b79c1a559d68754a4e63f2b" gracePeriod=3600 Mar 12 21:02:05.972929 master-0 kubenswrapper[7484]: E0312 21:02:05.972802 7484 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 21:02:08.468195 master-0 kubenswrapper[7484]: I0312 21:02:08.468149 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log" Mar 12 21:02:08.470635 master-0 kubenswrapper[7484]: I0312 21:02:08.470582 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log" Mar 12 21:02:08.471694 master-0 kubenswrapper[7484]: I0312 21:02:08.471643 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd/0.log" Mar 12 21:02:08.472449 master-0 kubenswrapper[7484]: I0312 21:02:08.472403 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcdctl/0.log" Mar 12 21:02:08.474410 master-0 kubenswrapper[7484]: I0312 21:02:08.474349 7484 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="0b1ad30ea0b6c41c6f1eb7bd3de3eda3e9f404e7c25c08138d7b4b1893fec5eb" exitCode=137 Mar 12 21:02:08.474410 master-0 kubenswrapper[7484]: I0312 21:02:08.474400 7484 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="908a8cc2f3bc351202dab9b410d70888335d0f357ad01e6cdd7f4cdf90adf703" exitCode=137 Mar 12 21:02:08.853065 master-0 kubenswrapper[7484]: I0312 21:02:08.852994 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log" Mar 12 21:02:08.854492 master-0 kubenswrapper[7484]: I0312 21:02:08.854437 7484 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log" Mar 12 21:02:08.855522 master-0 kubenswrapper[7484]: I0312 21:02:08.855476 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd/0.log" Mar 12 21:02:08.856235 master-0 kubenswrapper[7484]: I0312 21:02:08.856192 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcdctl/0.log" Mar 12 21:02:08.858067 master-0 kubenswrapper[7484]: I0312 21:02:08.858032 7484 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 12 21:02:09.038545 master-0 kubenswrapper[7484]: I0312 21:02:09.038117 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " Mar 12 21:02:09.038545 master-0 kubenswrapper[7484]: I0312 21:02:09.038188 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " Mar 12 21:02:09.038545 master-0 kubenswrapper[7484]: I0312 21:02:09.038268 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " Mar 12 21:02:09.038545 master-0 kubenswrapper[7484]: I0312 21:02:09.038304 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir" (OuterVolumeSpecName: 
"log-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:02:09.038545 master-0 kubenswrapper[7484]: I0312 21:02:09.038339 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " Mar 12 21:02:09.038545 master-0 kubenswrapper[7484]: I0312 21:02:09.038343 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "usr-local-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:02:09.038545 master-0 kubenswrapper[7484]: I0312 21:02:09.038402 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " Mar 12 21:02:09.038545 master-0 kubenswrapper[7484]: I0312 21:02:09.038445 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " Mar 12 21:02:09.039556 master-0 kubenswrapper[7484]: I0312 21:02:09.038621 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: 
"8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:02:09.039556 master-0 kubenswrapper[7484]: I0312 21:02:09.038734 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "static-pod-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:02:09.039556 master-0 kubenswrapper[7484]: I0312 21:02:09.038663 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir" (OuterVolumeSpecName: "data-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:02:09.039556 master-0 kubenswrapper[7484]: I0312 21:02:09.038793 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:02:09.039556 master-0 kubenswrapper[7484]: I0312 21:02:09.039170 7484 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 21:02:09.039556 master-0 kubenswrapper[7484]: I0312 21:02:09.039222 7484 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 21:02:09.039556 master-0 kubenswrapper[7484]: I0312 21:02:09.039242 7484 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 21:02:09.039556 master-0 kubenswrapper[7484]: I0312 21:02:09.039263 7484 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 21:02:09.039556 master-0 kubenswrapper[7484]: I0312 21:02:09.039281 7484 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") on node \"master-0\" DevicePath \"\"" Mar 12 21:02:09.039556 master-0 kubenswrapper[7484]: I0312 21:02:09.039300 7484 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 21:02:09.485156 master-0 kubenswrapper[7484]: I0312 21:02:09.485061 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log" Mar 12 21:02:09.486895 master-0 kubenswrapper[7484]: I0312 21:02:09.486854 7484 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log" Mar 12 21:02:09.488136 master-0 kubenswrapper[7484]: I0312 21:02:09.488069 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd/0.log" Mar 12 21:02:09.488859 master-0 kubenswrapper[7484]: I0312 21:02:09.488769 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcdctl/0.log" Mar 12 21:02:09.491273 master-0 kubenswrapper[7484]: I0312 21:02:09.491212 7484 scope.go:117] "RemoveContainer" containerID="d69ef5a9682c286db49162800e6bbc8a372fbb8bc9c781af56f0f61a5109903e" Mar 12 21:02:09.491419 master-0 kubenswrapper[7484]: I0312 21:02:09.491356 7484 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 12 21:02:09.522253 master-0 kubenswrapper[7484]: I0312 21:02:09.522178 7484 scope.go:117] "RemoveContainer" containerID="7fd269d6a8eb44e1a4790cb72966b4a0534f7af1aa471591ccb71a946b3ca40d" Mar 12 21:02:09.546659 master-0 kubenswrapper[7484]: I0312 21:02:09.546597 7484 scope.go:117] "RemoveContainer" containerID="f73db7800402cb358e0d79e90095c60120f55db64b8d66594c7d386be4916a3c" Mar 12 21:02:09.573762 master-0 kubenswrapper[7484]: I0312 21:02:09.573684 7484 scope.go:117] "RemoveContainer" containerID="0b1ad30ea0b6c41c6f1eb7bd3de3eda3e9f404e7c25c08138d7b4b1893fec5eb" Mar 12 21:02:09.601092 master-0 kubenswrapper[7484]: I0312 21:02:09.600987 7484 scope.go:117] "RemoveContainer" containerID="908a8cc2f3bc351202dab9b410d70888335d0f357ad01e6cdd7f4cdf90adf703" Mar 12 21:02:09.622345 master-0 kubenswrapper[7484]: I0312 21:02:09.622288 7484 scope.go:117] "RemoveContainer" containerID="d87061e77c3511fa3d10d439abd7fc19b87e09c759be9ed2d0d6d0851d1c2c5d" Mar 12 21:02:09.638064 master-0 kubenswrapper[7484]: I0312 21:02:09.638018 7484 
scope.go:117] "RemoveContainer" containerID="48a904da460444c368cf9e0843bf61f533eb8193bac37e0aa7187d1bff30096d" Mar 12 21:02:09.657219 master-0 kubenswrapper[7484]: I0312 21:02:09.657152 7484 scope.go:117] "RemoveContainer" containerID="23a10404655a12ee18bb39608a6172dc4a604cc5b8d5ad95a794929465208396" Mar 12 21:02:09.744711 master-0 kubenswrapper[7484]: I0312 21:02:09.744579 7484 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" path="/var/lib/kubelet/pods/8e52bef89f4b50e4590a1719bcc5d7e5/volumes" Mar 12 21:02:12.267694 master-0 kubenswrapper[7484]: E0312 21:02:12.267375 7484 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.189c33ca886660aa openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:8e52bef89f4b50e4590a1719bcc5d7e5,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Killing,Message:Stopping container etcd-rev,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 21:01:38.241953962 +0000 UTC m=+710.727222784,LastTimestamp:2026-03-12 21:01:38.241953962 +0000 UTC m=+710.727222784,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 21:02:14.733428 master-0 kubenswrapper[7484]: I0312 21:02:14.733295 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 12 21:02:14.765246 master-0 kubenswrapper[7484]: I0312 21:02:14.765178 7484 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="4824c775-caec-441b-b5ae-9856954be691" Mar 12 21:02:14.765246 master-0 kubenswrapper[7484]: I0312 21:02:14.765229 7484 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="4824c775-caec-441b-b5ae-9856954be691" Mar 12 21:02:15.973695 master-0 kubenswrapper[7484]: E0312 21:02:15.973592 7484 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 21:02:19.769685 master-0 kubenswrapper[7484]: E0312 21:02:19.769396 7484 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:02:19.770671 master-0 kubenswrapper[7484]: I0312 21:02:19.770410 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:02:19.807440 master-0 kubenswrapper[7484]: W0312 21:02:19.807358 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7678a2e61b792fe3be55b1c6f67b2aa2.slice/crio-bf1fca480b54d4cfe929b5e83abff120bff7b90a008395758afbaeaea08fe4d6 WatchSource:0}: Error finding container bf1fca480b54d4cfe929b5e83abff120bff7b90a008395758afbaeaea08fe4d6: Status 404 returned error can't find the container with id bf1fca480b54d4cfe929b5e83abff120bff7b90a008395758afbaeaea08fe4d6 Mar 12 21:02:20.584462 master-0 kubenswrapper[7484]: I0312 21:02:20.584318 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7678a2e61b792fe3be55b1c6f67b2aa2","Type":"ContainerStarted","Data":"ea71fe537bf33cf42ac5188e76585186bcdbc69589a2a47aa52fa489a1cbc62e"} Mar 12 21:02:20.584462 master-0 kubenswrapper[7484]: I0312 21:02:20.584391 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7678a2e61b792fe3be55b1c6f67b2aa2","Type":"ContainerStarted","Data":"d3c7faffe68717f40a0072b4ab6a64ec7cccad22e04a4674b15d395e19ec5ebe"} Mar 12 21:02:20.584462 master-0 kubenswrapper[7484]: I0312 21:02:20.584411 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7678a2e61b792fe3be55b1c6f67b2aa2","Type":"ContainerStarted","Data":"bf1fca480b54d4cfe929b5e83abff120bff7b90a008395758afbaeaea08fe4d6"} Mar 12 21:02:21.598889 master-0 kubenswrapper[7484]: I0312 21:02:21.598751 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"7678a2e61b792fe3be55b1c6f67b2aa2","Type":"ContainerStarted","Data":"aadc37b9873c997339d04dc5e3aaeecb47d5f57228484f7cca80ac879f4002d2"} Mar 12 21:02:21.598889 master-0 kubenswrapper[7484]: I0312 21:02:21.598855 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7678a2e61b792fe3be55b1c6f67b2aa2","Type":"ContainerStarted","Data":"1d02987cfd443da7225f0df6b3ab9f45e0b88c2171ab5627f4e3845fc50178ec"} Mar 12 21:02:21.600053 master-0 kubenswrapper[7484]: I0312 21:02:21.599236 7484 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="d635a2c1-7d6b-46e4-9267-3313bbe06e35" Mar 12 21:02:21.600053 master-0 kubenswrapper[7484]: I0312 21:02:21.599280 7484 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="d635a2c1-7d6b-46e4-9267-3313bbe06e35" Mar 12 21:02:23.389893 master-0 kubenswrapper[7484]: I0312 21:02:23.389733 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 21:02:25.975402 master-0 kubenswrapper[7484]: E0312 21:02:25.975260 7484 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 21:02:29.771109 master-0 kubenswrapper[7484]: I0312 21:02:29.771000 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:02:29.771109 master-0 kubenswrapper[7484]: I0312 21:02:29.771099 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 
21:02:29.771712 master-0 kubenswrapper[7484]: I0312 21:02:29.771127 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:02:29.771712 master-0 kubenswrapper[7484]: I0312 21:02:29.771151 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:02:29.774759 master-0 kubenswrapper[7484]: I0312 21:02:29.774714 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:02:31.678247 master-0 kubenswrapper[7484]: I0312 21:02:31.678158 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-48hk7_426efd5c-69e1-43e5-835a-6e1c4ef85720/approver/1.log" Mar 12 21:02:31.679323 master-0 kubenswrapper[7484]: I0312 21:02:31.678876 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-48hk7_426efd5c-69e1-43e5-835a-6e1c4ef85720/approver/0.log" Mar 12 21:02:31.679323 master-0 kubenswrapper[7484]: I0312 21:02:31.679305 7484 generic.go:334] "Generic (PLEG): container finished" podID="426efd5c-69e1-43e5-835a-6e1c4ef85720" containerID="26bae4b1151179f8943350ed41cce4211f30fc7d0bc576d35eb657f821dc0907" exitCode=1 Mar 12 21:02:31.679466 master-0 kubenswrapper[7484]: I0312 21:02:31.679350 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-48hk7" event={"ID":"426efd5c-69e1-43e5-835a-6e1c4ef85720","Type":"ContainerDied","Data":"26bae4b1151179f8943350ed41cce4211f30fc7d0bc576d35eb657f821dc0907"} Mar 12 21:02:31.679466 master-0 kubenswrapper[7484]: I0312 21:02:31.679427 7484 scope.go:117] "RemoveContainer" containerID="28c691afcb8a45cb348e1216142781244b93a45eaf7cbab2716a18bf342b0dc8" Mar 12 21:02:31.680430 master-0 
kubenswrapper[7484]: I0312 21:02:31.680369 7484 scope.go:117] "RemoveContainer" containerID="26bae4b1151179f8943350ed41cce4211f30fc7d0bc576d35eb657f821dc0907" Mar 12 21:02:31.680832 master-0 kubenswrapper[7484]: E0312 21:02:31.680746 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"approver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=approver pod=network-node-identity-48hk7_openshift-network-node-identity(426efd5c-69e1-43e5-835a-6e1c4ef85720)\"" pod="openshift-network-node-identity/network-node-identity-48hk7" podUID="426efd5c-69e1-43e5-835a-6e1c4ef85720" Mar 12 21:02:32.689796 master-0 kubenswrapper[7484]: I0312 21:02:32.689664 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-48hk7_426efd5c-69e1-43e5-835a-6e1c4ef85720/approver/1.log" Mar 12 21:02:32.771660 master-0 kubenswrapper[7484]: I0312 21:02:32.771557 7484 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 21:02:32.771946 master-0 kubenswrapper[7484]: I0312 21:02:32.771688 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 21:02:35.976135 master-0 kubenswrapper[7484]: E0312 21:02:35.976079 7484 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 21:02:35.976969 master-0 kubenswrapper[7484]: I0312 21:02:35.976937 7484 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 12 21:02:39.777963 master-0 kubenswrapper[7484]: I0312 21:02:39.777890 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:02:42.772245 master-0 kubenswrapper[7484]: I0312 21:02:42.772144 7484 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 21:02:42.772245 master-0 kubenswrapper[7484]: I0312 21:02:42.772231 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 21:02:43.792978 master-0 kubenswrapper[7484]: E0312 21:02:43.792424 7484 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T21:02:33Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T21:02:33Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T21:02:33Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T21:02:33Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 21:02:44.733994 master-0 kubenswrapper[7484]: I0312 21:02:44.733930 7484 scope.go:117] "RemoveContainer" containerID="26bae4b1151179f8943350ed41cce4211f30fc7d0bc576d35eb657f821dc0907" Mar 12 21:02:45.802771 master-0 kubenswrapper[7484]: I0312 21:02:45.802699 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-48hk7_426efd5c-69e1-43e5-835a-6e1c4ef85720/approver/1.log" Mar 12 21:02:45.804304 master-0 kubenswrapper[7484]: I0312 21:02:45.804258 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-48hk7" event={"ID":"426efd5c-69e1-43e5-835a-6e1c4ef85720","Type":"ContainerStarted","Data":"1a7186290d13048f9c2dcb52409b7ecf5f3aaeb9fd732ac4375e487e70721cbd"} Mar 12 21:02:45.808839 master-0 kubenswrapper[7484]: I0312 21:02:45.808777 7484 
generic.go:334] "Generic (PLEG): container finished" podID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerID="1acfa9d2750b23b6fbd73dc65a33ac93a90684811b79c1a559d68754a4e63f2b" exitCode=0 Mar 12 21:02:45.809102 master-0 kubenswrapper[7484]: I0312 21:02:45.808925 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" event={"ID":"a3828a1d-8180-4c7b-b423-4488f7fc0b76","Type":"ContainerDied","Data":"1acfa9d2750b23b6fbd73dc65a33ac93a90684811b79c1a559d68754a4e63f2b"} Mar 12 21:02:45.809213 master-0 kubenswrapper[7484]: I0312 21:02:45.809149 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" event={"ID":"a3828a1d-8180-4c7b-b423-4488f7fc0b76","Type":"ContainerStarted","Data":"91d2028136276069b3430f01cdedfd621a7ff241728670fbdc4cdf16424e1832"} Mar 12 21:02:45.809213 master-0 kubenswrapper[7484]: I0312 21:02:45.809196 7484 scope.go:117] "RemoveContainer" containerID="41145e0fa78e157774eb7d7a70c1dca5f300d506a37a6e9227272112a6ab2153" Mar 12 21:02:45.978255 master-0 kubenswrapper[7484]: E0312 21:02:45.978154 7484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Mar 12 21:02:46.271283 master-0 kubenswrapper[7484]: E0312 21:02:46.271065 7484 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event=< Mar 12 21:02:46.271283 master-0 kubenswrapper[7484]: &Event{ObjectMeta:{router-default-79f8cd6fdd-hsv57.189c338d1b282cad openshift-ingress 11088 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-ingress,Name:router-default-79f8cd6fdd-hsv57,UID:a3828a1d-8180-4c7b-b423-4488f7fc0b76,APIVersion:v1,ResourceVersion:10611,FieldPath:spec.containers{router},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 500 Mar 12 21:02:46.271283 master-0 kubenswrapper[7484]: body: [-]backend-http failed: reason withheld Mar 12 21:02:46.271283 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:02:46.271283 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:02:46.271283 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:02:46.271283 master-0 kubenswrapper[7484]: Mar 12 21:02:46.271283 master-0 kubenswrapper[7484]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 20:57:14 +0000 UTC,LastTimestamp:2026-03-12 21:01:38.415781016 +0000 UTC m=+710.901049858,Count:219,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,} Mar 12 21:02:46.271283 master-0 kubenswrapper[7484]: > Mar 12 21:02:46.411678 master-0 kubenswrapper[7484]: I0312 21:02:46.411576 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" Mar 12 21:02:46.415487 master-0 kubenswrapper[7484]: I0312 21:02:46.415289 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:02:46.415487 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:02:46.415487 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:02:46.415487 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:02:46.415487 master-0 kubenswrapper[7484]: I0312 21:02:46.415393 7484 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:02:47.414542 master-0 kubenswrapper[7484]: I0312 21:02:47.414438 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:02:47.414542 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:02:47.414542 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:02:47.414542 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:02:47.415648 master-0 kubenswrapper[7484]: I0312 21:02:47.414554 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:02:47.744384 master-0 kubenswrapper[7484]: I0312 21:02:47.744143 7484 status_manager.go:851] "Failed to get status for pod" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)" Mar 12 21:02:48.131341 master-0 kubenswrapper[7484]: I0312 21:02:48.131274 7484 scope.go:117] "RemoveContainer" containerID="41b66431878d44ab858bd298f2664ca1044c24d2683709493ac4eda068452880" Mar 12 21:02:48.153558 master-0 kubenswrapper[7484]: I0312 21:02:48.153506 7484 scope.go:117] "RemoveContainer" containerID="3903035b9e73b841d666d6fc139bd62b961c60d2e83441c115f7bd868868c079" Mar 12 21:02:48.174449 master-0 kubenswrapper[7484]: I0312 21:02:48.174345 7484 scope.go:117] 
"RemoveContainer" containerID="7f2dec97dd1ce529f99f40df66e2e92b6d6da2e679bbce21a7eba2d896a0203a" Mar 12 21:02:48.195306 master-0 kubenswrapper[7484]: I0312 21:02:48.195252 7484 scope.go:117] "RemoveContainer" containerID="4f6de2cd5a1fff08ef55af61c8bc016882b96a14bcce20fcbe68fbc0199f304d" Mar 12 21:02:48.415532 master-0 kubenswrapper[7484]: I0312 21:02:48.415321 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:02:48.415532 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:02:48.415532 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:02:48.415532 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:02:48.415532 master-0 kubenswrapper[7484]: I0312 21:02:48.415418 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:02:48.768670 master-0 kubenswrapper[7484]: E0312 21:02:48.768438 7484 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 12 21:02:48.769442 master-0 kubenswrapper[7484]: I0312 21:02:48.769382 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 12 21:02:48.804462 master-0 kubenswrapper[7484]: W0312 21:02:48.804351 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29c709c82970b529e7b9b895aa92ef05.slice/crio-6b1f470bfc702853e69b48b7d0f79deb1d8d72a0d84adbdf6326a6040a96126e WatchSource:0}: Error finding container 6b1f470bfc702853e69b48b7d0f79deb1d8d72a0d84adbdf6326a6040a96126e: Status 404 returned error can't find the container with id 6b1f470bfc702853e69b48b7d0f79deb1d8d72a0d84adbdf6326a6040a96126e Mar 12 21:02:48.841904 master-0 kubenswrapper[7484]: I0312 21:02:48.841782 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"6b1f470bfc702853e69b48b7d0f79deb1d8d72a0d84adbdf6326a6040a96126e"} Mar 12 21:02:49.415685 master-0 kubenswrapper[7484]: I0312 21:02:49.415596 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:02:49.415685 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:02:49.415685 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:02:49.415685 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:02:49.416671 master-0 kubenswrapper[7484]: I0312 21:02:49.415697 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:02:49.856003 master-0 kubenswrapper[7484]: I0312 21:02:49.855305 7484 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" 
containerID="6505ef13a4bc86d0ecb1621927f731e78b211dc76a1d482556926db3756019bd" exitCode=0 Mar 12 21:02:49.856003 master-0 kubenswrapper[7484]: I0312 21:02:49.855379 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"6505ef13a4bc86d0ecb1621927f731e78b211dc76a1d482556926db3756019bd"} Mar 12 21:02:49.856003 master-0 kubenswrapper[7484]: I0312 21:02:49.855766 7484 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="4824c775-caec-441b-b5ae-9856954be691" Mar 12 21:02:49.856003 master-0 kubenswrapper[7484]: I0312 21:02:49.855797 7484 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="4824c775-caec-441b-b5ae-9856954be691" Mar 12 21:02:50.411340 master-0 kubenswrapper[7484]: I0312 21:02:50.411214 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" Mar 12 21:02:50.414883 master-0 kubenswrapper[7484]: I0312 21:02:50.414770 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:02:50.414883 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:02:50.414883 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:02:50.414883 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:02:50.415349 master-0 kubenswrapper[7484]: I0312 21:02:50.414907 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:02:50.851425 master-0 kubenswrapper[7484]: I0312 21:02:50.851350 7484 patch_prober.go:28] 
interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:50508->127.0.0.1:10357: read: connection reset by peer" start-of-body= Mar 12 21:02:50.852428 master-0 kubenswrapper[7484]: I0312 21:02:50.851434 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:50508->127.0.0.1:10357: read: connection reset by peer" Mar 12 21:02:50.852428 master-0 kubenswrapper[7484]: I0312 21:02:50.851502 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:02:51.416254 master-0 kubenswrapper[7484]: I0312 21:02:51.416124 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:02:51.416254 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:02:51.416254 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:02:51.416254 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:02:51.416900 master-0 kubenswrapper[7484]: I0312 21:02:51.416261 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:02:51.879603 master-0 kubenswrapper[7484]: I0312 21:02:51.879527 7484 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/cluster-policy-controller/0.log" Mar 12 21:02:51.880739 master-0 kubenswrapper[7484]: I0312 21:02:51.880070 7484 generic.go:334] "Generic (PLEG): container finished" podID="7678a2e61b792fe3be55b1c6f67b2aa2" containerID="ea71fe537bf33cf42ac5188e76585186bcdbc69589a2a47aa52fa489a1cbc62e" exitCode=255 Mar 12 21:02:51.880739 master-0 kubenswrapper[7484]: I0312 21:02:51.880142 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7678a2e61b792fe3be55b1c6f67b2aa2","Type":"ContainerDied","Data":"ea71fe537bf33cf42ac5188e76585186bcdbc69589a2a47aa52fa489a1cbc62e"} Mar 12 21:02:52.415117 master-0 kubenswrapper[7484]: I0312 21:02:52.415021 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:02:52.415117 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:02:52.415117 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:02:52.415117 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:02:52.415737 master-0 kubenswrapper[7484]: I0312 21:02:52.415117 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:02:53.415610 master-0 kubenswrapper[7484]: I0312 21:02:53.415500 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 
21:02:53.415610 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:02:53.415610 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:02:53.415610 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:02:53.417028 master-0 kubenswrapper[7484]: I0312 21:02:53.415640 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:02:53.793720 master-0 kubenswrapper[7484]: E0312 21:02:53.793470 7484 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 21:02:54.414665 master-0 kubenswrapper[7484]: I0312 21:02:54.414586 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:02:54.414665 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:02:54.414665 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:02:54.414665 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:02:54.415364 master-0 kubenswrapper[7484]: I0312 21:02:54.414693 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:02:55.414275 master-0 kubenswrapper[7484]: I0312 21:02:55.414181 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:02:55.414275 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:02:55.414275 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:02:55.414275 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:02:55.415481 master-0 kubenswrapper[7484]: I0312 21:02:55.414287 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:02:55.602240 master-0 kubenswrapper[7484]: E0312 21:02:55.602037 7484 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:02:55.602947 master-0 kubenswrapper[7484]: I0312 21:02:55.602625 7484 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"ea71fe537bf33cf42ac5188e76585186bcdbc69589a2a47aa52fa489a1cbc62e"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 12 21:02:55.602947 master-0 kubenswrapper[7484]: I0312 21:02:55.602855 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="cluster-policy-controller" containerID="cri-o://ea71fe537bf33cf42ac5188e76585186bcdbc69589a2a47aa52fa489a1cbc62e" gracePeriod=30 Mar 12 21:02:56.180627 master-0 kubenswrapper[7484]: E0312 21:02:56.180377 7484 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Mar 12 21:02:56.415117 master-0 kubenswrapper[7484]: I0312 21:02:56.415000 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:02:56.415117 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:02:56.415117 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:02:56.415117 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:02:56.415117 master-0 kubenswrapper[7484]: I0312 21:02:56.415097 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:02:56.948000 master-0 kubenswrapper[7484]: I0312 21:02:56.947855 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/cluster-policy-controller/0.log" Mar 12 21:02:56.948903 master-0 kubenswrapper[7484]: I0312 21:02:56.948788 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7678a2e61b792fe3be55b1c6f67b2aa2","Type":"ContainerStarted","Data":"18b0e483f29f9ae5185114583fb98fd459d80b80cf11a98fadcf7de4b21274b6"} Mar 12 21:02:56.949503 master-0 kubenswrapper[7484]: I0312 21:02:56.949430 7484 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="d635a2c1-7d6b-46e4-9267-3313bbe06e35" Mar 12 
21:02:56.949503 master-0 kubenswrapper[7484]: I0312 21:02:56.949481 7484 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="d635a2c1-7d6b-46e4-9267-3313bbe06e35" Mar 12 21:02:57.415565 master-0 kubenswrapper[7484]: I0312 21:02:57.415459 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:02:57.415565 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:02:57.415565 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:02:57.415565 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:02:57.416859 master-0 kubenswrapper[7484]: I0312 21:02:57.415579 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:02:58.414371 master-0 kubenswrapper[7484]: I0312 21:02:58.414287 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:02:58.414371 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:02:58.414371 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:02:58.414371 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:02:58.414900 master-0 kubenswrapper[7484]: I0312 21:02:58.414400 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed 
with statuscode: 500" Mar 12 21:02:59.414555 master-0 kubenswrapper[7484]: I0312 21:02:59.414452 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:02:59.414555 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:02:59.414555 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:02:59.414555 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:02:59.415609 master-0 kubenswrapper[7484]: I0312 21:02:59.414557 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:02:59.771098 master-0 kubenswrapper[7484]: I0312 21:02:59.770903 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:02:59.771098 master-0 kubenswrapper[7484]: I0312 21:02:59.771000 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:03:00.414475 master-0 kubenswrapper[7484]: I0312 21:03:00.414391 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:00.414475 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:00.414475 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:00.414475 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:00.414475 master-0 kubenswrapper[7484]: I0312 21:03:00.414477 7484 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:01.415198 master-0 kubenswrapper[7484]: I0312 21:03:01.415105 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:01.415198 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:01.415198 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:01.415198 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:01.416449 master-0 kubenswrapper[7484]: I0312 21:03:01.415213 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:02.415104 master-0 kubenswrapper[7484]: I0312 21:03:02.415043 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:02.415104 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:02.415104 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:02.415104 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:02.416242 master-0 kubenswrapper[7484]: I0312 21:03:02.416196 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Mar 12 21:03:02.771569 master-0 kubenswrapper[7484]: I0312 21:03:02.771421 7484 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 21:03:02.771995 master-0 kubenswrapper[7484]: I0312 21:03:02.771940 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 21:03:03.414697 master-0 kubenswrapper[7484]: I0312 21:03:03.414604 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:03.414697 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:03.414697 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:03.414697 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:03.415215 master-0 kubenswrapper[7484]: I0312 21:03:03.414707 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:03.794593 master-0 kubenswrapper[7484]: E0312 21:03:03.794354 7484 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": 
Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 21:03:04.414746 master-0 kubenswrapper[7484]: I0312 21:03:04.414641 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:04.414746 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:04.414746 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:04.414746 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:04.414746 master-0 kubenswrapper[7484]: I0312 21:03:04.414743 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:05.414122 master-0 kubenswrapper[7484]: I0312 21:03:05.414039 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:05.414122 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:05.414122 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:05.414122 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:05.415033 master-0 kubenswrapper[7484]: I0312 21:03:05.414127 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:06.414557 master-0 kubenswrapper[7484]: I0312 
21:03:06.414464 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:06.414557 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:06.414557 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:06.414557 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:06.415581 master-0 kubenswrapper[7484]: I0312 21:03:06.414572 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:06.580761 master-0 kubenswrapper[7484]: E0312 21:03:06.580678 7484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Mar 12 21:03:07.415058 master-0 kubenswrapper[7484]: I0312 21:03:07.414927 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:07.415058 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:07.415058 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:07.415058 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:07.415058 master-0 kubenswrapper[7484]: I0312 21:03:07.415045 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" 
podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:08.415656 master-0 kubenswrapper[7484]: I0312 21:03:08.415551 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:08.415656 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:08.415656 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:08.415656 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:08.416667 master-0 kubenswrapper[7484]: I0312 21:03:08.415660 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:09.049605 master-0 kubenswrapper[7484]: I0312 21:03:09.049441 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-qpf68_2b71f537-1cc2-4645-8e50-23941635457c/ingress-operator/3.log" Mar 12 21:03:09.050289 master-0 kubenswrapper[7484]: I0312 21:03:09.050241 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-qpf68_2b71f537-1cc2-4645-8e50-23941635457c/ingress-operator/2.log" Mar 12 21:03:09.051078 master-0 kubenswrapper[7484]: I0312 21:03:09.051014 7484 generic.go:334] "Generic (PLEG): container finished" podID="2b71f537-1cc2-4645-8e50-23941635457c" containerID="7eccf2e11fa509546de8eac1a0922463527e45037d75300978eef8469f91ea9d" exitCode=1 Mar 12 21:03:09.051183 master-0 kubenswrapper[7484]: I0312 21:03:09.051085 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" event={"ID":"2b71f537-1cc2-4645-8e50-23941635457c","Type":"ContainerDied","Data":"7eccf2e11fa509546de8eac1a0922463527e45037d75300978eef8469f91ea9d"} Mar 12 21:03:09.051183 master-0 kubenswrapper[7484]: I0312 21:03:09.051146 7484 scope.go:117] "RemoveContainer" containerID="2d9fbcbbc403da2c9b3c1deb75c0442531b4adcea162653fcf9df2ae550aae8d" Mar 12 21:03:09.052178 master-0 kubenswrapper[7484]: I0312 21:03:09.052119 7484 scope.go:117] "RemoveContainer" containerID="7eccf2e11fa509546de8eac1a0922463527e45037d75300978eef8469f91ea9d" Mar 12 21:03:09.053215 master-0 kubenswrapper[7484]: E0312 21:03:09.053100 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-qpf68_openshift-ingress-operator(2b71f537-1cc2-4645-8e50-23941635457c)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" podUID="2b71f537-1cc2-4645-8e50-23941635457c" Mar 12 21:03:09.414420 master-0 kubenswrapper[7484]: I0312 21:03:09.414302 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:09.414420 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:09.414420 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:09.414420 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:09.414927 master-0 kubenswrapper[7484]: I0312 21:03:09.414480 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 
21:03:10.062916 master-0 kubenswrapper[7484]: I0312 21:03:10.062804 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-qpf68_2b71f537-1cc2-4645-8e50-23941635457c/ingress-operator/3.log" Mar 12 21:03:10.414646 master-0 kubenswrapper[7484]: I0312 21:03:10.414536 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:10.414646 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:10.414646 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:10.414646 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:10.415201 master-0 kubenswrapper[7484]: I0312 21:03:10.414674 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:11.415227 master-0 kubenswrapper[7484]: I0312 21:03:11.415134 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:11.415227 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:11.415227 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:11.415227 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:11.416259 master-0 kubenswrapper[7484]: I0312 21:03:11.415231 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 12 21:03:12.414596 master-0 kubenswrapper[7484]: I0312 21:03:12.414485 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:12.414596 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:12.414596 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:12.414596 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:12.414596 master-0 kubenswrapper[7484]: I0312 21:03:12.414576 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:12.772198 master-0 kubenswrapper[7484]: I0312 21:03:12.771956 7484 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 21:03:12.772198 master-0 kubenswrapper[7484]: I0312 21:03:12.772077 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 21:03:13.414856 master-0 kubenswrapper[7484]: I0312 21:03:13.414702 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:13.414856 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:13.414856 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:13.414856 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:13.415952 master-0 kubenswrapper[7484]: I0312 21:03:13.414856 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:13.795292 master-0 kubenswrapper[7484]: E0312 21:03:13.795132 7484 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 21:03:14.414662 master-0 kubenswrapper[7484]: I0312 21:03:14.414557 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:14.414662 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:14.414662 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:14.414662 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:14.414662 master-0 kubenswrapper[7484]: I0312 21:03:14.414648 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:15.414111 master-0 
kubenswrapper[7484]: I0312 21:03:15.414017 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:15.414111 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:15.414111 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:15.414111 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:15.415256 master-0 kubenswrapper[7484]: I0312 21:03:15.414110 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:16.414875 master-0 kubenswrapper[7484]: I0312 21:03:16.414707 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:16.414875 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:16.414875 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:16.414875 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:16.414875 master-0 kubenswrapper[7484]: I0312 21:03:16.414854 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:17.383501 master-0 kubenswrapper[7484]: E0312 21:03:17.383375 7484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Mar 12 21:03:17.414747 master-0 kubenswrapper[7484]: I0312 21:03:17.414629 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:17.414747 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:17.414747 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:17.414747 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:17.415424 master-0 kubenswrapper[7484]: I0312 21:03:17.414748 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:18.414423 master-0 kubenswrapper[7484]: I0312 21:03:18.414310 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:18.414423 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:18.414423 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:18.414423 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:18.415600 master-0 kubenswrapper[7484]: I0312 21:03:18.414425 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" 
Mar 12 21:03:19.414903 master-0 kubenswrapper[7484]: I0312 21:03:19.414796 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:19.414903 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:19.414903 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:19.414903 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:19.415450 master-0 kubenswrapper[7484]: I0312 21:03:19.414928 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:20.280743 master-0 kubenswrapper[7484]: E0312 21:03:20.280542 7484 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189c33d4361e2265 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:7678a2e61b792fe3be55b1c6f67b2aa2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 21:02:19.811160677 +0000 UTC m=+752.296429509,LastTimestamp:2026-03-12 21:02:19.811160677 +0000 UTC m=+752.296429509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 21:03:20.414307 master-0 kubenswrapper[7484]: I0312 21:03:20.414198 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:20.414307 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:20.414307 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:20.414307 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:20.414727 master-0 kubenswrapper[7484]: I0312 21:03:20.414334 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:21.415027 master-0 kubenswrapper[7484]: I0312 21:03:21.414904 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:21.415027 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:21.415027 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:21.415027 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:21.416250 master-0 kubenswrapper[7484]: I0312 21:03:21.415027 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:21.734112 master-0 kubenswrapper[7484]: I0312 21:03:21.733945 7484 scope.go:117] "RemoveContainer" 
containerID="7eccf2e11fa509546de8eac1a0922463527e45037d75300978eef8469f91ea9d" Mar 12 21:03:21.734416 master-0 kubenswrapper[7484]: E0312 21:03:21.734356 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-qpf68_openshift-ingress-operator(2b71f537-1cc2-4645-8e50-23941635457c)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" podUID="2b71f537-1cc2-4645-8e50-23941635457c" Mar 12 21:03:22.415101 master-0 kubenswrapper[7484]: I0312 21:03:22.415007 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:22.415101 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:22.415101 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:22.415101 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:22.416080 master-0 kubenswrapper[7484]: I0312 21:03:22.415132 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:22.772221 master-0 kubenswrapper[7484]: I0312 21:03:22.772074 7484 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 21:03:22.772576 master-0 kubenswrapper[7484]: I0312 21:03:22.772531 7484 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 21:03:22.772771 master-0 kubenswrapper[7484]: I0312 21:03:22.772745 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:03:23.415209 master-0 kubenswrapper[7484]: I0312 21:03:23.415124 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:23.415209 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:23.415209 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:23.415209 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:23.416079 master-0 kubenswrapper[7484]: I0312 21:03:23.415210 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:23.796107 master-0 kubenswrapper[7484]: E0312 21:03:23.795918 7484 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 21:03:23.796107 master-0 kubenswrapper[7484]: E0312 21:03:23.795989 7484 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" 
Mar 12 21:03:23.860055 master-0 kubenswrapper[7484]: E0312 21:03:23.859965 7484 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 12 21:03:24.201579 master-0 kubenswrapper[7484]: I0312 21:03:24.200967 7484 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="e15e3282e5b40a84b8a52ea1ba64dbbfb71a2f40822a028fb5e47eb69a3af82b" exitCode=0 Mar 12 21:03:24.201579 master-0 kubenswrapper[7484]: I0312 21:03:24.201013 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"e15e3282e5b40a84b8a52ea1ba64dbbfb71a2f40822a028fb5e47eb69a3af82b"} Mar 12 21:03:24.201579 master-0 kubenswrapper[7484]: I0312 21:03:24.201307 7484 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="4824c775-caec-441b-b5ae-9856954be691" Mar 12 21:03:24.201579 master-0 kubenswrapper[7484]: I0312 21:03:24.201322 7484 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="4824c775-caec-441b-b5ae-9856954be691" Mar 12 21:03:24.414696 master-0 kubenswrapper[7484]: I0312 21:03:24.414531 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:24.414696 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:24.414696 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:24.414696 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:24.414696 master-0 kubenswrapper[7484]: I0312 21:03:24.414617 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" 
podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:25.414455 master-0 kubenswrapper[7484]: I0312 21:03:25.414368 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:25.414455 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:25.414455 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:25.414455 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:25.415130 master-0 kubenswrapper[7484]: I0312 21:03:25.414467 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:26.414729 master-0 kubenswrapper[7484]: I0312 21:03:26.414632 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:26.414729 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:26.414729 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:26.414729 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:26.414729 master-0 kubenswrapper[7484]: I0312 21:03:26.414702 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:27.228244 master-0 kubenswrapper[7484]: I0312 21:03:27.228139 7484 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/cluster-policy-controller/1.log" Mar 12 21:03:27.230251 master-0 kubenswrapper[7484]: I0312 21:03:27.230187 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/cluster-policy-controller/0.log" Mar 12 21:03:27.230872 master-0 kubenswrapper[7484]: I0312 21:03:27.230784 7484 generic.go:334] "Generic (PLEG): container finished" podID="7678a2e61b792fe3be55b1c6f67b2aa2" containerID="18b0e483f29f9ae5185114583fb98fd459d80b80cf11a98fadcf7de4b21274b6" exitCode=255 Mar 12 21:03:27.230987 master-0 kubenswrapper[7484]: I0312 21:03:27.230860 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7678a2e61b792fe3be55b1c6f67b2aa2","Type":"ContainerDied","Data":"18b0e483f29f9ae5185114583fb98fd459d80b80cf11a98fadcf7de4b21274b6"} Mar 12 21:03:27.230987 master-0 kubenswrapper[7484]: I0312 21:03:27.230937 7484 scope.go:117] "RemoveContainer" containerID="ea71fe537bf33cf42ac5188e76585186bcdbc69589a2a47aa52fa489a1cbc62e" Mar 12 21:03:27.414344 master-0 kubenswrapper[7484]: I0312 21:03:27.414236 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:27.414344 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:27.414344 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:27.414344 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:27.414344 master-0 kubenswrapper[7484]: I0312 21:03:27.414339 7484 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:28.244359 master-0 kubenswrapper[7484]: I0312 21:03:28.244271 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/cluster-policy-controller/1.log" Mar 12 21:03:28.415122 master-0 kubenswrapper[7484]: I0312 21:03:28.415045 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:28.415122 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:28.415122 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:28.415122 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:28.416118 master-0 kubenswrapper[7484]: I0312 21:03:28.415144 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:28.985167 master-0 kubenswrapper[7484]: E0312 21:03:28.985062 7484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Mar 12 21:03:29.414510 master-0 kubenswrapper[7484]: I0312 21:03:29.414455 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:29.414510 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:29.414510 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:29.414510 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:29.415098 master-0 kubenswrapper[7484]: I0312 21:03:29.415055 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:30.413689 master-0 kubenswrapper[7484]: I0312 21:03:30.413601 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:30.413689 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:30.413689 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:30.413689 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:30.413689 master-0 kubenswrapper[7484]: I0312 21:03:30.413692 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:30.954118 master-0 kubenswrapper[7484]: E0312 21:03:30.954037 7484 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:03:30.954574 master-0 kubenswrapper[7484]: I0312 21:03:30.954513 7484 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" 
containerStatusID={"Type":"cri-o","ID":"18b0e483f29f9ae5185114583fb98fd459d80b80cf11a98fadcf7de4b21274b6"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 12 21:03:30.954715 master-0 kubenswrapper[7484]: I0312 21:03:30.954678 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="cluster-policy-controller" containerID="cri-o://18b0e483f29f9ae5185114583fb98fd459d80b80cf11a98fadcf7de4b21274b6" gracePeriod=30 Mar 12 21:03:31.285235 master-0 kubenswrapper[7484]: I0312 21:03:31.285183 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/cluster-policy-controller/1.log" Mar 12 21:03:31.290873 master-0 kubenswrapper[7484]: I0312 21:03:31.290832 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-hdd4n_8b96dd10-18a0-49f8-b488-63fc2b23da39/manager/1.log" Mar 12 21:03:31.292401 master-0 kubenswrapper[7484]: I0312 21:03:31.292354 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-hdd4n_8b96dd10-18a0-49f8-b488-63fc2b23da39/manager/0.log" Mar 12 21:03:31.292488 master-0 kubenswrapper[7484]: I0312 21:03:31.292433 7484 generic.go:334] "Generic (PLEG): container finished" podID="8b96dd10-18a0-49f8-b488-63fc2b23da39" containerID="41630d24dfd109bc636aa9398130da834c84ba29e895cfce030b4e66d9af23d1" exitCode=1 Mar 12 21:03:31.292488 master-0 kubenswrapper[7484]: I0312 21:03:31.292476 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" 
event={"ID":"8b96dd10-18a0-49f8-b488-63fc2b23da39","Type":"ContainerDied","Data":"41630d24dfd109bc636aa9398130da834c84ba29e895cfce030b4e66d9af23d1"} Mar 12 21:03:31.292593 master-0 kubenswrapper[7484]: I0312 21:03:31.292525 7484 scope.go:117] "RemoveContainer" containerID="60173c0f9984162f24ad65c25f3ae119353e5fb646ea28da5079828f5c237197" Mar 12 21:03:31.293356 master-0 kubenswrapper[7484]: I0312 21:03:31.293319 7484 scope.go:117] "RemoveContainer" containerID="41630d24dfd109bc636aa9398130da834c84ba29e895cfce030b4e66d9af23d1" Mar 12 21:03:31.293964 master-0 kubenswrapper[7484]: E0312 21:03:31.293858 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=operator-controller-controller-manager-6598bfb6c4-hdd4n_openshift-operator-controller(8b96dd10-18a0-49f8-b488-63fc2b23da39)\"" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" podUID="8b96dd10-18a0-49f8-b488-63fc2b23da39" Mar 12 21:03:31.415145 master-0 kubenswrapper[7484]: I0312 21:03:31.415066 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:31.415145 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:31.415145 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:31.415145 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:31.416165 master-0 kubenswrapper[7484]: I0312 21:03:31.415174 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:32.303198 master-0 kubenswrapper[7484]: 
I0312 21:03:32.303111 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-hdd4n_8b96dd10-18a0-49f8-b488-63fc2b23da39/manager/1.log" Mar 12 21:03:32.308085 master-0 kubenswrapper[7484]: I0312 21:03:32.308033 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/cluster-policy-controller/1.log" Mar 12 21:03:32.309800 master-0 kubenswrapper[7484]: I0312 21:03:32.309749 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7678a2e61b792fe3be55b1c6f67b2aa2","Type":"ContainerStarted","Data":"f02c840b81a2d77bba25062b33d2959df737d0e9c53abeca566ed78c88468261"} Mar 12 21:03:32.310341 master-0 kubenswrapper[7484]: I0312 21:03:32.310271 7484 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="d635a2c1-7d6b-46e4-9267-3313bbe06e35" Mar 12 21:03:32.310341 master-0 kubenswrapper[7484]: I0312 21:03:32.310327 7484 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="d635a2c1-7d6b-46e4-9267-3313bbe06e35" Mar 12 21:03:32.413951 master-0 kubenswrapper[7484]: I0312 21:03:32.413861 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:32.413951 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:32.413951 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:32.413951 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:32.414344 master-0 kubenswrapper[7484]: I0312 21:03:32.413977 7484 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:32.734179 master-0 kubenswrapper[7484]: I0312 21:03:32.734086 7484 scope.go:117] "RemoveContainer" containerID="7eccf2e11fa509546de8eac1a0922463527e45037d75300978eef8469f91ea9d" Mar 12 21:03:32.735067 master-0 kubenswrapper[7484]: E0312 21:03:32.734541 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-qpf68_openshift-ingress-operator(2b71f537-1cc2-4645-8e50-23941635457c)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" podUID="2b71f537-1cc2-4645-8e50-23941635457c" Mar 12 21:03:33.318920 master-0 kubenswrapper[7484]: I0312 21:03:33.318820 7484 generic.go:334] "Generic (PLEG): container finished" podID="e624e623-6d59-444d-b548-165fa5fd2581" containerID="39d3c428744e31947d0aba2cc71c1c8335e2ced3049d8e6b24468cee1c398ffb" exitCode=0 Mar 12 21:03:33.318920 master-0 kubenswrapper[7484]: I0312 21:03:33.318884 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" event={"ID":"e624e623-6d59-444d-b548-165fa5fd2581","Type":"ContainerDied","Data":"39d3c428744e31947d0aba2cc71c1c8335e2ced3049d8e6b24468cee1c398ffb"} Mar 12 21:03:33.319263 master-0 kubenswrapper[7484]: I0312 21:03:33.318971 7484 scope.go:117] "RemoveContainer" containerID="2d7932f9200cfcc46a818b87f2e758dc323d7be1734436d6a1a8927b3aea1adf" Mar 12 21:03:33.319691 master-0 kubenswrapper[7484]: I0312 21:03:33.319635 7484 scope.go:117] "RemoveContainer" containerID="39d3c428744e31947d0aba2cc71c1c8335e2ced3049d8e6b24468cee1c398ffb" Mar 12 21:03:33.414323 master-0 kubenswrapper[7484]: I0312 21:03:33.414228 7484 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:33.414323 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:33.414323 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:33.414323 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:33.414732 master-0 kubenswrapper[7484]: I0312 21:03:33.414329 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:34.331623 master-0 kubenswrapper[7484]: I0312 21:03:34.331536 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" event={"ID":"e624e623-6d59-444d-b548-165fa5fd2581","Type":"ContainerStarted","Data":"4f07bdbb202e8d7ec35c2942fcef53594fd886965e646621487300c1a7296997"} Mar 12 21:03:34.334463 master-0 kubenswrapper[7484]: I0312 21:03:34.332011 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" Mar 12 21:03:34.334463 master-0 kubenswrapper[7484]: I0312 21:03:34.333902 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" Mar 12 21:03:34.414699 master-0 kubenswrapper[7484]: I0312 21:03:34.414589 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:34.414699 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld 
Mar 12 21:03:34.414699 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:34.414699 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:34.414699 master-0 kubenswrapper[7484]: I0312 21:03:34.414647 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:35.415246 master-0 kubenswrapper[7484]: I0312 21:03:35.415151 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:35.415246 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:35.415246 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:35.415246 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:35.416205 master-0 kubenswrapper[7484]: I0312 21:03:35.415273 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:36.415356 master-0 kubenswrapper[7484]: I0312 21:03:36.415252 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:36.415356 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:36.415356 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:36.415356 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:36.416336 master-0 kubenswrapper[7484]: I0312 
21:03:36.415379 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:36.642130 master-0 kubenswrapper[7484]: I0312 21:03:36.642056 7484 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" Mar 12 21:03:36.642538 master-0 kubenswrapper[7484]: I0312 21:03:36.642506 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" Mar 12 21:03:36.643541 master-0 kubenswrapper[7484]: I0312 21:03:36.643471 7484 scope.go:117] "RemoveContainer" containerID="41630d24dfd109bc636aa9398130da834c84ba29e895cfce030b4e66d9af23d1" Mar 12 21:03:36.644139 master-0 kubenswrapper[7484]: E0312 21:03:36.644087 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=operator-controller-controller-manager-6598bfb6c4-hdd4n_openshift-operator-controller(8b96dd10-18a0-49f8-b488-63fc2b23da39)\"" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" podUID="8b96dd10-18a0-49f8-b488-63fc2b23da39" Mar 12 21:03:37.354377 master-0 kubenswrapper[7484]: I0312 21:03:37.354281 7484 scope.go:117] "RemoveContainer" containerID="41630d24dfd109bc636aa9398130da834c84ba29e895cfce030b4e66d9af23d1" Mar 12 21:03:37.354700 master-0 kubenswrapper[7484]: E0312 21:03:37.354667 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager 
pod=operator-controller-controller-manager-6598bfb6c4-hdd4n_openshift-operator-controller(8b96dd10-18a0-49f8-b488-63fc2b23da39)\"" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" podUID="8b96dd10-18a0-49f8-b488-63fc2b23da39" Mar 12 21:03:37.413906 master-0 kubenswrapper[7484]: I0312 21:03:37.413792 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:37.413906 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:37.413906 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:37.413906 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:37.414257 master-0 kubenswrapper[7484]: I0312 21:03:37.413918 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:38.365303 master-0 kubenswrapper[7484]: I0312 21:03:38.365212 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-btpxl_f8467055-c9c9-4485-bb60-9a79e8b91268/config-sync-controllers/0.log" Mar 12 21:03:38.366280 master-0 kubenswrapper[7484]: I0312 21:03:38.366001 7484 generic.go:334] "Generic (PLEG): container finished" podID="f8467055-c9c9-4485-bb60-9a79e8b91268" containerID="18344b8e4a33f4c35bb70a4b908fe016ad02097c53ac346b4a920c21a96bb7bc" exitCode=1 Mar 12 21:03:38.366280 master-0 kubenswrapper[7484]: I0312 21:03:38.366063 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" 
event={"ID":"f8467055-c9c9-4485-bb60-9a79e8b91268","Type":"ContainerDied","Data":"18344b8e4a33f4c35bb70a4b908fe016ad02097c53ac346b4a920c21a96bb7bc"} Mar 12 21:03:38.367076 master-0 kubenswrapper[7484]: I0312 21:03:38.367018 7484 scope.go:117] "RemoveContainer" containerID="18344b8e4a33f4c35bb70a4b908fe016ad02097c53ac346b4a920c21a96bb7bc" Mar 12 21:03:38.415590 master-0 kubenswrapper[7484]: I0312 21:03:38.415468 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:38.415590 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:38.415590 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:38.415590 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:38.416016 master-0 kubenswrapper[7484]: I0312 21:03:38.415640 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:39.376537 master-0 kubenswrapper[7484]: I0312 21:03:39.376424 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-btpxl_f8467055-c9c9-4485-bb60-9a79e8b91268/config-sync-controllers/0.log" Mar 12 21:03:39.377513 master-0 kubenswrapper[7484]: I0312 21:03:39.376924 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" event={"ID":"f8467055-c9c9-4485-bb60-9a79e8b91268","Type":"ContainerStarted","Data":"b7a608f68705ff14ec613ecd704f3acfb450fa2003b96288df6c11b17c770035"} Mar 12 21:03:39.413799 master-0 kubenswrapper[7484]: I0312 
21:03:39.413728 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:39.413799 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:39.413799 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:39.413799 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:39.413799 master-0 kubenswrapper[7484]: I0312 21:03:39.413781 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:39.773057 master-0 kubenswrapper[7484]: I0312 21:03:39.772870 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:03:39.773057 master-0 kubenswrapper[7484]: I0312 21:03:39.772930 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:03:40.414865 master-0 kubenswrapper[7484]: I0312 21:03:40.414716 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:40.414865 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:40.414865 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:40.414865 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:40.416523 master-0 kubenswrapper[7484]: I0312 21:03:40.416114 7484 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:41.415282 master-0 kubenswrapper[7484]: I0312 21:03:41.415177 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:41.415282 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:41.415282 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:41.415282 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:41.415282 master-0 kubenswrapper[7484]: I0312 21:03:41.415279 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:42.187035 master-0 kubenswrapper[7484]: E0312 21:03:42.186922 7484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Mar 12 21:03:42.406802 master-0 kubenswrapper[7484]: I0312 21:03:42.406725 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-btpxl_f8467055-c9c9-4485-bb60-9a79e8b91268/config-sync-controllers/0.log" Mar 12 21:03:42.408234 master-0 kubenswrapper[7484]: I0312 21:03:42.408160 7484 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-btpxl_f8467055-c9c9-4485-bb60-9a79e8b91268/cluster-cloud-controller-manager/0.log" Mar 12 21:03:42.408402 master-0 kubenswrapper[7484]: I0312 21:03:42.408274 7484 generic.go:334] "Generic (PLEG): container finished" podID="f8467055-c9c9-4485-bb60-9a79e8b91268" containerID="35a48c44f0a4c7fdef814d1fdd69f5e797632637da5b33039378ae2cc0e1e688" exitCode=1 Mar 12 21:03:42.408402 master-0 kubenswrapper[7484]: I0312 21:03:42.408337 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" event={"ID":"f8467055-c9c9-4485-bb60-9a79e8b91268","Type":"ContainerDied","Data":"35a48c44f0a4c7fdef814d1fdd69f5e797632637da5b33039378ae2cc0e1e688"} Mar 12 21:03:42.409091 master-0 kubenswrapper[7484]: I0312 21:03:42.409040 7484 scope.go:117] "RemoveContainer" containerID="35a48c44f0a4c7fdef814d1fdd69f5e797632637da5b33039378ae2cc0e1e688" Mar 12 21:03:42.415419 master-0 kubenswrapper[7484]: I0312 21:03:42.415342 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:42.415419 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:42.415419 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:42.415419 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:42.416305 master-0 kubenswrapper[7484]: I0312 21:03:42.415466 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:42.772784 master-0 kubenswrapper[7484]: I0312 
21:03:42.772712 7484 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 21:03:42.773035 master-0 kubenswrapper[7484]: I0312 21:03:42.772827 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 21:03:43.414598 master-0 kubenswrapper[7484]: I0312 21:03:43.414539 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:43.414598 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:43.414598 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:43.414598 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:43.415216 master-0 kubenswrapper[7484]: I0312 21:03:43.414662 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:43.422322 master-0 kubenswrapper[7484]: I0312 21:03:43.422261 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-zgjqw_cf33c432-db42-4c6d-8ee4-f089e5bf8203/manager/1.log" Mar 12 
21:03:43.424169 master-0 kubenswrapper[7484]: I0312 21:03:43.424119 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-zgjqw_cf33c432-db42-4c6d-8ee4-f089e5bf8203/manager/0.log" Mar 12 21:03:43.424743 master-0 kubenswrapper[7484]: I0312 21:03:43.424690 7484 generic.go:334] "Generic (PLEG): container finished" podID="cf33c432-db42-4c6d-8ee4-f089e5bf8203" containerID="56254e13e7b801a5fa972ca401568f81e069fab8d80a9daa794e70d67c31681f" exitCode=1 Mar 12 21:03:43.424906 master-0 kubenswrapper[7484]: I0312 21:03:43.424775 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" event={"ID":"cf33c432-db42-4c6d-8ee4-f089e5bf8203","Type":"ContainerDied","Data":"56254e13e7b801a5fa972ca401568f81e069fab8d80a9daa794e70d67c31681f"} Mar 12 21:03:43.424906 master-0 kubenswrapper[7484]: I0312 21:03:43.424889 7484 scope.go:117] "RemoveContainer" containerID="5932e7f75755d53b1d311f0b9e66cf21d66d861e9615083a39ac924565528bfd" Mar 12 21:03:43.426043 master-0 kubenswrapper[7484]: I0312 21:03:43.425975 7484 scope.go:117] "RemoveContainer" containerID="56254e13e7b801a5fa972ca401568f81e069fab8d80a9daa794e70d67c31681f" Mar 12 21:03:43.426373 master-0 kubenswrapper[7484]: E0312 21:03:43.426322 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=catalogd-controller-manager-7f8b8b6f4c-zgjqw_openshift-catalogd(cf33c432-db42-4c6d-8ee4-f089e5bf8203)\"" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" podUID="cf33c432-db42-4c6d-8ee4-f089e5bf8203" Mar 12 21:03:43.431340 master-0 kubenswrapper[7484]: I0312 21:03:43.431280 7484 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-btpxl_f8467055-c9c9-4485-bb60-9a79e8b91268/config-sync-controllers/0.log" Mar 12 21:03:43.432183 master-0 kubenswrapper[7484]: I0312 21:03:43.432120 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-btpxl_f8467055-c9c9-4485-bb60-9a79e8b91268/cluster-cloud-controller-manager/0.log" Mar 12 21:03:43.433056 master-0 kubenswrapper[7484]: I0312 21:03:43.432994 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" event={"ID":"f8467055-c9c9-4485-bb60-9a79e8b91268","Type":"ContainerStarted","Data":"afc1c06340a517cfdfa3cd92440f176504fcc20e82c9d0f902ff01425ea4203b"} Mar 12 21:03:44.414314 master-0 kubenswrapper[7484]: I0312 21:03:44.414216 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:44.414314 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:44.414314 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:44.414314 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:44.414745 master-0 kubenswrapper[7484]: I0312 21:03:44.414314 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:44.442886 master-0 kubenswrapper[7484]: I0312 21:03:44.442802 7484 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-zgjqw_cf33c432-db42-4c6d-8ee4-f089e5bf8203/manager/1.log" Mar 12 21:03:45.415216 master-0 kubenswrapper[7484]: I0312 21:03:45.415097 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:45.415216 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:45.415216 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:45.415216 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:45.415878 master-0 kubenswrapper[7484]: I0312 21:03:45.415276 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:45.454331 master-0 kubenswrapper[7484]: I0312 21:03:45.454251 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-8fk8w_d4a162d4-8086-4bcf-854d-7e6cd37fd4c7/snapshot-controller/2.log" Mar 12 21:03:45.455153 master-0 kubenswrapper[7484]: I0312 21:03:45.455074 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-8fk8w_d4a162d4-8086-4bcf-854d-7e6cd37fd4c7/snapshot-controller/1.log" Mar 12 21:03:45.455242 master-0 kubenswrapper[7484]: I0312 21:03:45.455150 7484 generic.go:334] "Generic (PLEG): container finished" podID="d4a162d4-8086-4bcf-854d-7e6cd37fd4c7" containerID="a61af5ddc801fc82532787a8099d3f864174adef92d53c028151cb9ec9d021a1" exitCode=1 Mar 12 21:03:45.455242 master-0 kubenswrapper[7484]: I0312 21:03:45.455204 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w" event={"ID":"d4a162d4-8086-4bcf-854d-7e6cd37fd4c7","Type":"ContainerDied","Data":"a61af5ddc801fc82532787a8099d3f864174adef92d53c028151cb9ec9d021a1"} Mar 12 21:03:45.455384 master-0 kubenswrapper[7484]: I0312 21:03:45.455264 7484 scope.go:117] "RemoveContainer" containerID="0bd6a0b7ed84e5c57f80585b12035a2addd846361d63e97d5c4b6e34bb41dd20" Mar 12 21:03:45.456167 master-0 kubenswrapper[7484]: I0312 21:03:45.456104 7484 scope.go:117] "RemoveContainer" containerID="a61af5ddc801fc82532787a8099d3f864174adef92d53c028151cb9ec9d021a1" Mar 12 21:03:45.457271 master-0 kubenswrapper[7484]: E0312 21:03:45.457212 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-8fk8w_openshift-cluster-storage-operator(d4a162d4-8086-4bcf-854d-7e6cd37fd4c7)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w" podUID="d4a162d4-8086-4bcf-854d-7e6cd37fd4c7" Mar 12 21:03:46.415260 master-0 kubenswrapper[7484]: I0312 21:03:46.415170 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:46.415260 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:46.415260 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:46.415260 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:46.415712 master-0 kubenswrapper[7484]: I0312 21:03:46.415265 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Mar 12 21:03:46.466042 master-0 kubenswrapper[7484]: I0312 21:03:46.465959 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-8fk8w_d4a162d4-8086-4bcf-854d-7e6cd37fd4c7/snapshot-controller/2.log" Mar 12 21:03:46.579641 master-0 kubenswrapper[7484]: I0312 21:03:46.579533 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 21:03:46.580580 master-0 kubenswrapper[7484]: I0312 21:03:46.580512 7484 scope.go:117] "RemoveContainer" containerID="56254e13e7b801a5fa972ca401568f81e069fab8d80a9daa794e70d67c31681f" Mar 12 21:03:46.581022 master-0 kubenswrapper[7484]: E0312 21:03:46.580959 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=catalogd-controller-manager-7f8b8b6f4c-zgjqw_openshift-catalogd(cf33c432-db42-4c6d-8ee4-f089e5bf8203)\"" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" podUID="cf33c432-db42-4c6d-8ee4-f089e5bf8203" Mar 12 21:03:47.415648 master-0 kubenswrapper[7484]: I0312 21:03:47.415540 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:47.415648 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:47.415648 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:47.415648 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:47.416361 master-0 kubenswrapper[7484]: I0312 21:03:47.415641 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:47.734218 master-0 kubenswrapper[7484]: I0312 21:03:47.734090 7484 scope.go:117] "RemoveContainer" containerID="7eccf2e11fa509546de8eac1a0922463527e45037d75300978eef8469f91ea9d" Mar 12 21:03:47.734218 master-0 kubenswrapper[7484]: I0312 21:03:47.734157 7484 scope.go:117] "RemoveContainer" containerID="41630d24dfd109bc636aa9398130da834c84ba29e895cfce030b4e66d9af23d1" Mar 12 21:03:47.734848 master-0 kubenswrapper[7484]: E0312 21:03:47.734566 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-qpf68_openshift-ingress-operator(2b71f537-1cc2-4645-8e50-23941635457c)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" podUID="2b71f537-1cc2-4645-8e50-23941635457c" Mar 12 21:03:47.752547 master-0 kubenswrapper[7484]: I0312 21:03:47.752477 7484 status_manager.go:851] "Failed to get status for pod" podUID="1453f6461bf5d599ad65a4656343ee91" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master-0)" Mar 12 21:03:48.413841 master-0 kubenswrapper[7484]: I0312 21:03:48.413752 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:48.413841 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:48.413841 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:48.413841 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:48.414409 master-0 kubenswrapper[7484]: I0312 
21:03:48.414352 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:48.487354 master-0 kubenswrapper[7484]: I0312 21:03:48.487217 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-hdd4n_8b96dd10-18a0-49f8-b488-63fc2b23da39/manager/1.log" Mar 12 21:03:48.488158 master-0 kubenswrapper[7484]: I0312 21:03:48.488076 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" event={"ID":"8b96dd10-18a0-49f8-b488-63fc2b23da39","Type":"ContainerStarted","Data":"71f1019cd618755c37057e690491eb2fd9f2ee6f8050c0d2cd910aee2c92766c"} Mar 12 21:03:48.488479 master-0 kubenswrapper[7484]: I0312 21:03:48.488413 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" Mar 12 21:03:49.414048 master-0 kubenswrapper[7484]: I0312 21:03:49.413952 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:49.414048 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:49.414048 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:49.414048 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:49.415191 master-0 kubenswrapper[7484]: I0312 21:03:49.414045 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Mar 12 21:03:50.414903 master-0 kubenswrapper[7484]: I0312 21:03:50.414836 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:50.414903 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:50.414903 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:50.414903 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:50.416078 master-0 kubenswrapper[7484]: I0312 21:03:50.416032 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:51.414595 master-0 kubenswrapper[7484]: I0312 21:03:51.414506 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:51.414595 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:51.414595 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:51.414595 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:51.415999 master-0 kubenswrapper[7484]: I0312 21:03:51.414615 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:52.415444 master-0 kubenswrapper[7484]: I0312 21:03:52.415345 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:52.415444 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:52.415444 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:52.415444 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:52.415444 master-0 kubenswrapper[7484]: I0312 21:03:52.415449 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:52.772424 master-0 kubenswrapper[7484]: I0312 21:03:52.772185 7484 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 21:03:52.772424 master-0 kubenswrapper[7484]: I0312 21:03:52.772313 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 21:03:53.414601 master-0 kubenswrapper[7484]: I0312 21:03:53.414549 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:53.414601 master-0 kubenswrapper[7484]: 
[-]has-synced failed: reason withheld Mar 12 21:03:53.414601 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:53.414601 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:53.414975 master-0 kubenswrapper[7484]: I0312 21:03:53.414612 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:54.284269 master-0 kubenswrapper[7484]: E0312 21:03:54.284066 7484 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189c33d44697e669 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:7678a2e61b792fe3be55b1c6f67b2aa2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 21:02:20.087576169 +0000 UTC m=+752.572845011,LastTimestamp:2026-03-12 21:02:20.087576169 +0000 UTC m=+752.572845011,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 21:03:54.414504 master-0 kubenswrapper[7484]: I0312 21:03:54.414411 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:54.414504 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:54.414504 
master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:54.414504 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:54.414835 master-0 kubenswrapper[7484]: I0312 21:03:54.414508 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:55.414700 master-0 kubenswrapper[7484]: I0312 21:03:55.414619 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:03:55.414700 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:03:55.414700 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:03:55.414700 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:03:55.415718 master-0 kubenswrapper[7484]: I0312 21:03:55.414713 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:03:55.734049 master-0 kubenswrapper[7484]: I0312 21:03:55.733877 7484 scope.go:117] "RemoveContainer" containerID="a61af5ddc801fc82532787a8099d3f864174adef92d53c028151cb9ec9d021a1" Mar 12 21:03:55.734298 master-0 kubenswrapper[7484]: E0312 21:03:55.734231 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-8fk8w_openshift-cluster-storage-operator(d4a162d4-8086-4bcf-854d-7e6cd37fd4c7)\"" 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w" podUID="d4a162d4-8086-4bcf-854d-7e6cd37fd4c7"
Mar 12 21:03:56.414468 master-0 kubenswrapper[7484]: I0312 21:03:56.414379 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:03:56.414468 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:03:56.414468 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:03:56.414468 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:03:56.415510 master-0 kubenswrapper[7484]: I0312 21:03:56.414502 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:03:56.608194 master-0 kubenswrapper[7484]: I0312 21:03:56.608065 7484 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw"
Mar 12 21:03:56.609275 master-0 kubenswrapper[7484]: I0312 21:03:56.609214 7484 scope.go:117] "RemoveContainer" containerID="56254e13e7b801a5fa972ca401568f81e069fab8d80a9daa794e70d67c31681f"
Mar 12 21:03:56.644876 master-0 kubenswrapper[7484]: I0312 21:03:56.644795 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n"
Mar 12 21:03:57.414720 master-0 kubenswrapper[7484]: I0312 21:03:57.414642 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:03:57.414720 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:03:57.414720 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:03:57.414720 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:03:57.415842 master-0 kubenswrapper[7484]: I0312 21:03:57.414741 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:03:57.559940 master-0 kubenswrapper[7484]: I0312 21:03:57.559864 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-zgjqw_cf33c432-db42-4c6d-8ee4-f089e5bf8203/manager/1.log"
Mar 12 21:03:57.560455 master-0 kubenswrapper[7484]: I0312 21:03:57.560415 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" event={"ID":"cf33c432-db42-4c6d-8ee4-f089e5bf8203","Type":"ContainerStarted","Data":"951399270a6716cbfa54411e17a5691a6896ee790032052a30916903d1cce342"}
Mar 12 21:03:57.560941 master-0 kubenswrapper[7484]: I0312 21:03:57.560872 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw"
Mar 12 21:03:58.205755 master-0 kubenswrapper[7484]: E0312 21:03:58.205632 7484 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 12 21:03:58.416417 master-0 kubenswrapper[7484]: I0312 21:03:58.416342 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:03:58.416417 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:03:58.416417 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:03:58.416417 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:03:58.417106 master-0 kubenswrapper[7484]: I0312 21:03:58.416486 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:03:58.588156 master-0 kubenswrapper[7484]: E0312 21:03:58.587998 7484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 12 21:03:58.733719 master-0 kubenswrapper[7484]: I0312 21:03:58.733587 7484 scope.go:117] "RemoveContainer" containerID="7eccf2e11fa509546de8eac1a0922463527e45037d75300978eef8469f91ea9d"
Mar 12 21:03:59.414094 master-0 kubenswrapper[7484]: I0312 21:03:59.414040 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:03:59.414094 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:03:59.414094 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:03:59.414094 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:03:59.414892 master-0 kubenswrapper[7484]: I0312 21:03:59.414140 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:03:59.585496 master-0 kubenswrapper[7484]: I0312 21:03:59.585439 7484 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="9d4f8c64eddb4e3b0d519c870ca47049e39126a8c78d8b9d4e92971fdcedf0ce" exitCode=0
Mar 12 21:03:59.586446 master-0 kubenswrapper[7484]: I0312 21:03:59.585562 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"9d4f8c64eddb4e3b0d519c870ca47049e39126a8c78d8b9d4e92971fdcedf0ce"}
Mar 12 21:03:59.586597 master-0 kubenswrapper[7484]: I0312 21:03:59.585885 7484 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="4824c775-caec-441b-b5ae-9856954be691"
Mar 12 21:03:59.586734 master-0 kubenswrapper[7484]: I0312 21:03:59.586709 7484 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="4824c775-caec-441b-b5ae-9856954be691"
Mar 12 21:03:59.589944 master-0 kubenswrapper[7484]: I0312 21:03:59.589915 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-qpf68_2b71f537-1cc2-4645-8e50-23941635457c/ingress-operator/3.log"
Mar 12 21:03:59.590837 master-0 kubenswrapper[7484]: I0312 21:03:59.590751 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" event={"ID":"2b71f537-1cc2-4645-8e50-23941635457c","Type":"ContainerStarted","Data":"4c4d56e2fde6c2410a3aa723a3533a20727be585533619aed7037adf0a4a8960"}
Mar 12 21:04:00.415005 master-0 kubenswrapper[7484]: I0312 21:04:00.414923 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:04:00.415005 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:04:00.415005 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:04:00.415005 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:04:00.415428 master-0 kubenswrapper[7484]: I0312 21:04:00.415004 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:04:01.415332 master-0 kubenswrapper[7484]: I0312 21:04:01.415256 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:04:01.415332 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:04:01.415332 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:04:01.415332 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:04:01.416482 master-0 kubenswrapper[7484]: I0312 21:04:01.415349 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:04:01.782535 master-0 kubenswrapper[7484]: I0312 21:04:01.781661 7484 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:54306->127.0.0.1:10357: read: connection reset by peer" start-of-body=
Mar 12 21:04:01.782535 master-0 kubenswrapper[7484]: I0312 21:04:01.781781 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:54306->127.0.0.1:10357: read: connection reset by peer"
Mar 12 21:04:01.782535 master-0 kubenswrapper[7484]: I0312 21:04:01.781906 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 21:04:02.414870 master-0 kubenswrapper[7484]: I0312 21:04:02.414724 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:04:02.414870 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:04:02.414870 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:04:02.414870 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:04:02.415321 master-0 kubenswrapper[7484]: I0312 21:04:02.414869 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:04:02.614763 master-0 kubenswrapper[7484]: I0312 21:04:02.614716 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/cluster-policy-controller/2.log"
Mar 12 21:04:02.616281 master-0 kubenswrapper[7484]: I0312 21:04:02.616226 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/cluster-policy-controller/1.log"
Mar 12 21:04:02.617793 master-0 kubenswrapper[7484]: I0312 21:04:02.617731 7484 generic.go:334] "Generic (PLEG): container finished" podID="7678a2e61b792fe3be55b1c6f67b2aa2" containerID="f02c840b81a2d77bba25062b33d2959df737d0e9c53abeca566ed78c88468261" exitCode=255
Mar 12 21:04:02.617948 master-0 kubenswrapper[7484]: I0312 21:04:02.617834 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7678a2e61b792fe3be55b1c6f67b2aa2","Type":"ContainerDied","Data":"f02c840b81a2d77bba25062b33d2959df737d0e9c53abeca566ed78c88468261"}
Mar 12 21:04:02.617948 master-0 kubenswrapper[7484]: I0312 21:04:02.617899 7484 scope.go:117] "RemoveContainer" containerID="18b0e483f29f9ae5185114583fb98fd459d80b80cf11a98fadcf7de4b21274b6"
Mar 12 21:04:03.414438 master-0 kubenswrapper[7484]: I0312 21:04:03.414352 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:04:03.414438 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:04:03.414438 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:04:03.414438 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:04:03.414962 master-0 kubenswrapper[7484]: I0312 21:04:03.414454 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:04:03.628454 master-0 kubenswrapper[7484]: I0312 21:04:03.628360 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-754bdc9f9d-hj9bb_400a13b5-c489-4beb-af33-94e635b86148/machine-approver-controller/0.log"
Mar 12 21:04:03.629370 master-0 kubenswrapper[7484]: I0312 21:04:03.628841 7484 generic.go:334] "Generic (PLEG): container finished" podID="400a13b5-c489-4beb-af33-94e635b86148" containerID="0a5780f6022da4e29888a4248f2002849d195cb3f0bde73988863a5f5ecbe533" exitCode=255
Mar 12 21:04:03.629370 master-0 kubenswrapper[7484]: I0312 21:04:03.628923 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-hj9bb" event={"ID":"400a13b5-c489-4beb-af33-94e635b86148","Type":"ContainerDied","Data":"0a5780f6022da4e29888a4248f2002849d195cb3f0bde73988863a5f5ecbe533"}
Mar 12 21:04:03.629516 master-0 kubenswrapper[7484]: I0312 21:04:03.629504 7484 scope.go:117] "RemoveContainer" containerID="0a5780f6022da4e29888a4248f2002849d195cb3f0bde73988863a5f5ecbe533"
Mar 12 21:04:03.634122 master-0 kubenswrapper[7484]: I0312 21:04:03.634075 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/cluster-policy-controller/2.log"
Mar 12 21:04:04.414530 master-0 kubenswrapper[7484]: I0312 21:04:04.414383 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:04:04.414530 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:04:04.414530 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:04:04.414530 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:04:04.415006 master-0 kubenswrapper[7484]: I0312 21:04:04.414578 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:04:04.522158 master-0 kubenswrapper[7484]: E0312 21:04:04.522042 7484 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T21:03:54Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T21:03:54Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T21:03:54Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T21:03:54Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 21:04:04.649414 master-0 kubenswrapper[7484]: I0312 21:04:04.649327 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-754bdc9f9d-hj9bb_400a13b5-c489-4beb-af33-94e635b86148/machine-approver-controller/0.log"
Mar 12 21:04:04.650370 master-0 kubenswrapper[7484]: I0312 21:04:04.650098 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-hj9bb" event={"ID":"400a13b5-c489-4beb-af33-94e635b86148","Type":"ContainerStarted","Data":"5cbae70f130fbf45e2097ba894be300c28b444c397ee6ffdcab00fd61bf8395b"}
Mar 12 21:04:05.415001 master-0 kubenswrapper[7484]: I0312 21:04:05.414882 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:04:05.415001 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:04:05.415001 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:04:05.415001 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:04:05.415336 master-0 kubenswrapper[7484]: I0312 21:04:05.415050 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:04:06.313228 master-0 kubenswrapper[7484]: E0312 21:04:06.313095 7484 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 21:04:06.314093 master-0 kubenswrapper[7484]: I0312 21:04:06.313659 7484 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"f02c840b81a2d77bba25062b33d2959df737d0e9c53abeca566ed78c88468261"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Mar 12 21:04:06.314093 master-0 kubenswrapper[7484]: I0312 21:04:06.313915 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="cluster-policy-controller" containerID="cri-o://f02c840b81a2d77bba25062b33d2959df737d0e9c53abeca566ed78c88468261" gracePeriod=30
Mar 12 21:04:06.415770 master-0 kubenswrapper[7484]: I0312 21:04:06.415626 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:04:06.415770 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:04:06.415770 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:04:06.415770 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:04:06.415770 master-0 kubenswrapper[7484]: I0312 21:04:06.415744 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:04:06.583407 master-0 kubenswrapper[7484]: I0312 21:04:06.583329 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw"
Mar 12 21:04:06.669890 master-0 kubenswrapper[7484]: I0312 21:04:06.669791 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/cluster-policy-controller/2.log"
Mar 12 21:04:06.671584 master-0 kubenswrapper[7484]: I0312 21:04:06.671443 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7678a2e61b792fe3be55b1c6f67b2aa2","Type":"ContainerStarted","Data":"a8ece6a63e869b9a30f6f436409dac82b1a1fa49731dbcfd8d7578397d7622b2"}
Mar 12 21:04:06.671957 master-0 kubenswrapper[7484]: I0312 21:04:06.671906 7484 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="d635a2c1-7d6b-46e4-9267-3313bbe06e35"
Mar 12 21:04:06.671957 master-0 kubenswrapper[7484]: I0312 21:04:06.671950 7484 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="d635a2c1-7d6b-46e4-9267-3313bbe06e35"
Mar 12 21:04:07.414681 master-0 kubenswrapper[7484]: I0312 21:04:07.414586 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:04:07.414681 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:04:07.414681 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:04:07.414681 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:04:07.416029 master-0 kubenswrapper[7484]: I0312 21:04:07.414681 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:04:08.414576 master-0 kubenswrapper[7484]: I0312 21:04:08.414455 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:04:08.414576 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:04:08.414576 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:04:08.414576 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:04:08.415596 master-0 kubenswrapper[7484]: I0312 21:04:08.414589 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:04:09.414050 master-0 kubenswrapper[7484]: I0312 21:04:09.413980 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:04:09.414050 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:04:09.414050 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:04:09.414050 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:04:09.414612 master-0 kubenswrapper[7484]: I0312 21:04:09.414065 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:04:09.771579 master-0 kubenswrapper[7484]: I0312 21:04:09.771436 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 21:04:09.772423 master-0 kubenswrapper[7484]: I0312 21:04:09.772387 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 21:04:10.415113 master-0 kubenswrapper[7484]: I0312 21:04:10.415034 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:04:10.415113 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:04:10.415113 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:04:10.415113 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:04:10.415113 master-0 kubenswrapper[7484]: I0312 21:04:10.415124 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:04:10.733507 master-0 kubenswrapper[7484]: I0312 21:04:10.733328 7484 scope.go:117] "RemoveContainer" containerID="a61af5ddc801fc82532787a8099d3f864174adef92d53c028151cb9ec9d021a1"
Mar 12 21:04:11.415003 master-0 kubenswrapper[7484]: I0312 21:04:11.414925 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:04:11.415003 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:04:11.415003 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:04:11.415003 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:04:11.416613 master-0 kubenswrapper[7484]: I0312 21:04:11.415052 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:04:11.716168 master-0 kubenswrapper[7484]: I0312 21:04:11.716002 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-8fk8w_d4a162d4-8086-4bcf-854d-7e6cd37fd4c7/snapshot-controller/2.log"
Mar 12 21:04:11.716168 master-0 kubenswrapper[7484]: I0312 21:04:11.716145 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w" event={"ID":"d4a162d4-8086-4bcf-854d-7e6cd37fd4c7","Type":"ContainerStarted","Data":"5a81fc8b9aacee0a8e476883c80fb6479695566cab02e8f01e21f4a95878f5e1"}
Mar 12 21:04:12.414589 master-0 kubenswrapper[7484]: I0312 21:04:12.414491 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:04:12.414589 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:04:12.414589 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:04:12.414589 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:04:12.415803 master-0 kubenswrapper[7484]: I0312 21:04:12.414690 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:04:12.729462 master-0 kubenswrapper[7484]: I0312 21:04:12.729271 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-fnxjc_17d2bb40-74e2-4894-a884-7018952bdf71/cluster-baremetal-operator/0.log"
Mar 12 21:04:12.729462 master-0 kubenswrapper[7484]: I0312 21:04:12.729367 7484 generic.go:334] "Generic (PLEG): container finished" podID="17d2bb40-74e2-4894-a884-7018952bdf71" containerID="6dc411727752ae888d72d927bcde06522ded330928aadabe0e4e42b673281367" exitCode=1
Mar 12 21:04:12.729462 master-0 kubenswrapper[7484]: I0312 21:04:12.729425 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc" event={"ID":"17d2bb40-74e2-4894-a884-7018952bdf71","Type":"ContainerDied","Data":"6dc411727752ae888d72d927bcde06522ded330928aadabe0e4e42b673281367"}
Mar 12 21:04:12.730299 master-0 kubenswrapper[7484]: I0312 21:04:12.730243 7484 scope.go:117] "RemoveContainer" containerID="6dc411727752ae888d72d927bcde06522ded330928aadabe0e4e42b673281367"
Mar 12 21:04:12.772623 master-0 kubenswrapper[7484]: I0312 21:04:12.772540 7484 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 12 21:04:12.772728 master-0 kubenswrapper[7484]: I0312 21:04:12.772662 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 12 21:04:13.413675 master-0 kubenswrapper[7484]: I0312 21:04:13.413578 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:04:13.413675 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:04:13.413675 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:04:13.413675 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:04:13.413675 master-0 kubenswrapper[7484]: I0312 21:04:13.413668 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:04:13.744859 master-0 kubenswrapper[7484]: I0312 21:04:13.744643 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-fnxjc_17d2bb40-74e2-4894-a884-7018952bdf71/cluster-baremetal-operator/0.log"
Mar 12 21:04:13.757266 master-0 kubenswrapper[7484]: I0312 21:04:13.757178 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc" event={"ID":"17d2bb40-74e2-4894-a884-7018952bdf71","Type":"ContainerStarted","Data":"57afad4e3efc3237af416deb66bd4d026f0ff91e709bfe7cc68bb56bee784fe7"}
Mar 12 21:04:14.416274 master-0 kubenswrapper[7484]: I0312 21:04:14.416156 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:04:14.416274 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:04:14.416274 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:04:14.416274 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:04:14.416274 master-0 kubenswrapper[7484]: I0312 21:04:14.416252 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:04:14.523166 master-0 kubenswrapper[7484]: E0312 21:04:14.523043 7484 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 21:04:14.755242 master-0 kubenswrapper[7484]: I0312 21:04:14.755065 7484 generic.go:334] "Generic (PLEG): container finished" podID="b50a6106-1112-4a4b-b4ae-933879e12110" containerID="8dc00850a2298439a85382d76a3ffd123f490ec7c79324ad9a9c72fd9448c30b" exitCode=0
Mar 12 21:04:14.755242 master-0 kubenswrapper[7484]: I0312 21:04:14.755143 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" event={"ID":"b50a6106-1112-4a4b-b4ae-933879e12110","Type":"ContainerDied","Data":"8dc00850a2298439a85382d76a3ffd123f490ec7c79324ad9a9c72fd9448c30b"}
Mar 12 21:04:14.756078 master-0 kubenswrapper[7484]: I0312 21:04:14.755663 7484 scope.go:117] "RemoveContainer" containerID="8dc00850a2298439a85382d76a3ffd123f490ec7c79324ad9a9c72fd9448c30b"
Mar 12 21:04:15.414377 master-0 kubenswrapper[7484]: I0312 21:04:15.414274 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:04:15.414377 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:04:15.414377 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:04:15.414377 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:04:15.414896 master-0 kubenswrapper[7484]: I0312 21:04:15.414431 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:04:15.588619 master-0 kubenswrapper[7484]: E0312 21:04:15.588512 7484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="7s"
Mar 12 21:04:15.766468 master-0 kubenswrapper[7484]: I0312 21:04:15.766271 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" event={"ID":"b50a6106-1112-4a4b-b4ae-933879e12110","Type":"ContainerStarted","Data":"03d26921cb309140d5aa931f200e060cdbfc92a85420edf8e1d33e12c678c87b"}
Mar 12 21:04:15.767441 master-0 kubenswrapper[7484]: I0312 21:04:15.766755 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25"
Mar 12 21:04:15.773668 master-0 kubenswrapper[7484]: I0312 21:04:15.773579 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25"
Mar 12 21:04:16.414489 master-0 kubenswrapper[7484]: I0312 21:04:16.414397 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:04:16.414489 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:04:16.414489 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:04:16.414489 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:04:16.414996 master-0 kubenswrapper[7484]: I0312 21:04:16.414518 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:04:17.414755 master-0 kubenswrapper[7484]: I0312 21:04:17.414668 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:04:17.414755 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:04:17.414755 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:04:17.414755 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:04:17.415664 master-0 kubenswrapper[7484]: I0312 21:04:17.414747 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:04:17.782602 master-0 kubenswrapper[7484]: I0312 21:04:17.782434 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-xzwfp_e03d34d0-f7c1-4dcf-8b84-89ad647cc10f/control-plane-machine-set-operator/0.log"
Mar 12 21:04:17.782602 master-0 kubenswrapper[7484]: I0312 21:04:17.782537 7484 generic.go:334] "Generic (PLEG): container finished" podID="e03d34d0-f7c1-4dcf-8b84-89ad647cc10f" containerID="5dd1e415f7dea320798ed071f084a01d7f961a59cb235657d89f90c5a715804d" exitCode=1
Mar 12 21:04:17.783152 master-0 kubenswrapper[7484]: I0312 21:04:17.782670 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-xzwfp" event={"ID":"e03d34d0-f7c1-4dcf-8b84-89ad647cc10f","Type":"ContainerDied","Data":"5dd1e415f7dea320798ed071f084a01d7f961a59cb235657d89f90c5a715804d"}
Mar 12 21:04:17.783554 master-0 kubenswrapper[7484]: I0312 21:04:17.783499 7484 scope.go:117] "RemoveContainer" containerID="5dd1e415f7dea320798ed071f084a01d7f961a59cb235657d89f90c5a715804d"
Mar 12 21:04:18.414697 master-0 kubenswrapper[7484]: I0312 21:04:18.414599 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:04:18.414697 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:04:18.414697 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:04:18.414697 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:04:18.415963 master-0 kubenswrapper[7484]: I0312 21:04:18.414730 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:04:18.792679 master-0 kubenswrapper[7484]: I0312 21:04:18.792476 7484 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-xzwfp_e03d34d0-f7c1-4dcf-8b84-89ad647cc10f/control-plane-machine-set-operator/0.log" Mar 12 21:04:18.792679 master-0 kubenswrapper[7484]: I0312 21:04:18.792572 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-xzwfp" event={"ID":"e03d34d0-f7c1-4dcf-8b84-89ad647cc10f","Type":"ContainerStarted","Data":"34a124ced022c1cedb7ed2f566e73051247855b090d9a68b409e835375db3ce0"} Mar 12 21:04:19.415284 master-0 kubenswrapper[7484]: I0312 21:04:19.415198 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:04:19.415284 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:04:19.415284 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:04:19.415284 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:04:19.416396 master-0 kubenswrapper[7484]: I0312 21:04:19.415292 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:04:20.414041 master-0 kubenswrapper[7484]: I0312 21:04:20.413864 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:04:20.414041 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:04:20.414041 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:04:20.414041 master-0 kubenswrapper[7484]: healthz check failed Mar 12 
21:04:20.414041 master-0 kubenswrapper[7484]: I0312 21:04:20.413973 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:04:20.811517 master-0 kubenswrapper[7484]: I0312 21:04:20.811338 7484 generic.go:334] "Generic (PLEG): container finished" podID="d862a346-ec4d-46f6-a3e2-ea8759ea0111" containerID="29605d6c0d6bf29478ff9cad55789098714848ec2911515b3a1ba1a6b740cc37" exitCode=0 Mar 12 21:04:20.811517 master-0 kubenswrapper[7484]: I0312 21:04:20.811417 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t" event={"ID":"d862a346-ec4d-46f6-a3e2-ea8759ea0111","Type":"ContainerDied","Data":"29605d6c0d6bf29478ff9cad55789098714848ec2911515b3a1ba1a6b740cc37"} Mar 12 21:04:20.811517 master-0 kubenswrapper[7484]: I0312 21:04:20.811473 7484 scope.go:117] "RemoveContainer" containerID="36186e847a1c7ad015db1d456eab6f7fe52723f5ba9629a902598f1f75fcfbe7" Mar 12 21:04:20.812591 master-0 kubenswrapper[7484]: I0312 21:04:20.812447 7484 scope.go:117] "RemoveContainer" containerID="29605d6c0d6bf29478ff9cad55789098714848ec2911515b3a1ba1a6b740cc37" Mar 12 21:04:20.812890 master-0 kubenswrapper[7484]: E0312 21:04:20.812840 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-cluster-manager pod=ovnkube-control-plane-66b55d57d-vq95t_openshift-ovn-kubernetes(d862a346-ec4d-46f6-a3e2-ea8759ea0111)\"" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t" podUID="d862a346-ec4d-46f6-a3e2-ea8759ea0111" Mar 12 21:04:21.415519 master-0 kubenswrapper[7484]: I0312 21:04:21.415389 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:04:21.415519 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:04:21.415519 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:04:21.415519 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:04:21.416123 master-0 kubenswrapper[7484]: I0312 21:04:21.415538 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:04:22.414937 master-0 kubenswrapper[7484]: I0312 21:04:22.414849 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:04:22.414937 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:04:22.414937 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:04:22.414937 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:04:22.415897 master-0 kubenswrapper[7484]: I0312 21:04:22.414937 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:04:22.772899 master-0 kubenswrapper[7484]: I0312 21:04:22.772604 7484 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers)" start-of-body= Mar 12 21:04:22.772899 master-0 kubenswrapper[7484]: I0312 21:04:22.772736 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 21:04:22.837267 master-0 kubenswrapper[7484]: I0312 21:04:22.837158 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1453f6461bf5d599ad65a4656343ee91/kube-scheduler/0.log" Mar 12 21:04:22.837999 master-0 kubenswrapper[7484]: I0312 21:04:22.837930 7484 generic.go:334] "Generic (PLEG): container finished" podID="1453f6461bf5d599ad65a4656343ee91" containerID="30bd0d1ae984ab9c16e404ca61f305cdc008b61e24e3fa41bdfaeaa497182321" exitCode=1 Mar 12 21:04:22.838159 master-0 kubenswrapper[7484]: I0312 21:04:22.838001 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerDied","Data":"30bd0d1ae984ab9c16e404ca61f305cdc008b61e24e3fa41bdfaeaa497182321"} Mar 12 21:04:22.838974 master-0 kubenswrapper[7484]: I0312 21:04:22.838922 7484 scope.go:117] "RemoveContainer" containerID="30bd0d1ae984ab9c16e404ca61f305cdc008b61e24e3fa41bdfaeaa497182321" Mar 12 21:04:23.377566 master-0 kubenswrapper[7484]: I0312 21:04:23.377513 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 21:04:23.378140 master-0 kubenswrapper[7484]: I0312 21:04:23.378099 7484 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 
21:04:23.413912 master-0 kubenswrapper[7484]: I0312 21:04:23.413853 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:04:23.413912 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:04:23.413912 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:04:23.413912 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:04:23.414242 master-0 kubenswrapper[7484]: I0312 21:04:23.413930 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:04:23.850346 master-0 kubenswrapper[7484]: I0312 21:04:23.850272 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1453f6461bf5d599ad65a4656343ee91/kube-scheduler/0.log" Mar 12 21:04:23.851342 master-0 kubenswrapper[7484]: I0312 21:04:23.851070 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"cca1a31a16c786b4a0358e88dbe17ead89f8ea362282d9e8446c5bfcda9a2898"} Mar 12 21:04:23.851511 master-0 kubenswrapper[7484]: I0312 21:04:23.851456 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 21:04:24.413260 master-0 kubenswrapper[7484]: I0312 21:04:24.413148 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: 
reason withheld Mar 12 21:04:24.413260 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:04:24.413260 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:04:24.413260 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:04:24.413674 master-0 kubenswrapper[7484]: I0312 21:04:24.413274 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:04:24.523895 master-0 kubenswrapper[7484]: E0312 21:04:24.523835 7484 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 21:04:25.414393 master-0 kubenswrapper[7484]: I0312 21:04:25.414313 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:04:25.414393 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:04:25.414393 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:04:25.414393 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:04:25.415336 master-0 kubenswrapper[7484]: I0312 21:04:25.414436 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:04:26.414138 master-0 kubenswrapper[7484]: I0312 21:04:26.414024 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:04:26.414138 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:04:26.414138 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:04:26.414138 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:04:26.414895 master-0 kubenswrapper[7484]: I0312 21:04:26.414165 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:04:27.414539 master-0 kubenswrapper[7484]: I0312 21:04:27.414407 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:04:27.414539 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:04:27.414539 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:04:27.414539 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:04:27.414539 master-0 kubenswrapper[7484]: I0312 21:04:27.414533 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:04:28.287856 master-0 kubenswrapper[7484]: E0312 21:04:28.287617 7484 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189c33d4478b3c91 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] 
[] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:7678a2e61b792fe3be55b1c6f67b2aa2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 21:02:20.103523473 +0000 UTC m=+752.588792315,LastTimestamp:2026-03-12 21:02:20.103523473 +0000 UTC m=+752.588792315,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 21:04:28.415163 master-0 kubenswrapper[7484]: I0312 21:04:28.415089 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:04:28.415163 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:04:28.415163 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:04:28.415163 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:04:28.416132 master-0 kubenswrapper[7484]: I0312 21:04:28.415173 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:04:29.414646 master-0 kubenswrapper[7484]: I0312 21:04:29.414558 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:04:29.414646 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:04:29.414646 
master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:04:29.414646 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:04:29.415856 master-0 kubenswrapper[7484]: I0312 21:04:29.414653 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:04:30.414686 master-0 kubenswrapper[7484]: I0312 21:04:30.414583 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:04:30.414686 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:04:30.414686 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:04:30.414686 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:04:30.414686 master-0 kubenswrapper[7484]: I0312 21:04:30.414682 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:04:31.414900 master-0 kubenswrapper[7484]: I0312 21:04:31.414777 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:04:31.414900 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:04:31.414900 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:04:31.414900 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:04:31.415993 master-0 kubenswrapper[7484]: I0312 21:04:31.414921 7484 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:04:32.418629 master-0 kubenswrapper[7484]: I0312 21:04:32.418551 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:04:32.418629 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:04:32.418629 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:04:32.418629 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:04:32.419735 master-0 kubenswrapper[7484]: I0312 21:04:32.418641 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:04:32.590627 master-0 kubenswrapper[7484]: E0312 21:04:32.590063 7484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 21:04:32.734613 master-0 kubenswrapper[7484]: I0312 21:04:32.734326 7484 scope.go:117] "RemoveContainer" containerID="29605d6c0d6bf29478ff9cad55789098714848ec2911515b3a1ba1a6b740cc37" Mar 12 21:04:32.772627 master-0 kubenswrapper[7484]: I0312 21:04:32.772578 7484 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get 
\"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 21:04:32.772892 master-0 kubenswrapper[7484]: I0312 21:04:32.772849 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 21:04:32.773076 master-0 kubenswrapper[7484]: I0312 21:04:32.773053 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:04:33.414365 master-0 kubenswrapper[7484]: I0312 21:04:33.414286 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:04:33.414365 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:04:33.414365 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:04:33.414365 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:04:33.414786 master-0 kubenswrapper[7484]: I0312 21:04:33.414387 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:04:33.589525 master-0 kubenswrapper[7484]: E0312 21:04:33.589383 7484 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" 
Mar 12 21:04:33.943198 master-0 kubenswrapper[7484]: I0312 21:04:33.943143 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t" event={"ID":"d862a346-ec4d-46f6-a3e2-ea8759ea0111","Type":"ContainerStarted","Data":"3f20112a0c3b20fac37b0e699c86c14b7385d094b89cf519c4cb3a8df7299867"} Mar 12 21:04:34.414916 master-0 kubenswrapper[7484]: I0312 21:04:34.414748 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:04:34.414916 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:04:34.414916 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:04:34.414916 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:04:34.415405 master-0 kubenswrapper[7484]: I0312 21:04:34.414895 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:04:34.524737 master-0 kubenswrapper[7484]: E0312 21:04:34.524614 7484 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 21:04:34.969172 master-0 kubenswrapper[7484]: I0312 21:04:34.969080 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"c526dbf7ac382686d170fe998cb948c25a4b677046ba65421a6b20f7b8069320"} Mar 12 21:04:34.969172 master-0 kubenswrapper[7484]: I0312 21:04:34.969167 7484 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"4a3be27297fda6b8121c5fd145a33a08f85b4f6d139551bd4d8fd9681ff6723c"} Mar 12 21:04:34.976231 master-0 kubenswrapper[7484]: I0312 21:04:34.969198 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"6af4e71895ff4fe118c23997aeb93f4e84c0f4154b54aa19f8abbc54a8539be2"} Mar 12 21:04:35.415190 master-0 kubenswrapper[7484]: I0312 21:04:35.415092 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:04:35.415190 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:04:35.415190 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:04:35.415190 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:04:35.415475 master-0 kubenswrapper[7484]: I0312 21:04:35.415187 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:04:35.992343 master-0 kubenswrapper[7484]: I0312 21:04:35.992228 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"5dabe459737d88ce0a8534bf402fd762e6432002a626a37ebf731dead719fc05"} Mar 12 21:04:35.992343 master-0 kubenswrapper[7484]: I0312 21:04:35.992302 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"5194be401cfedf1aa9a9ba57a34137d50e6645b8ccc15b839c616a43fc6af7a9"} Mar 12 21:04:35.994147 master-0 kubenswrapper[7484]: I0312 21:04:35.992694 7484 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="4824c775-caec-441b-b5ae-9856954be691" Mar 12 21:04:35.994147 master-0 kubenswrapper[7484]: I0312 21:04:35.992727 7484 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="4824c775-caec-441b-b5ae-9856954be691" Mar 12 21:04:36.415025 master-0 kubenswrapper[7484]: I0312 21:04:36.414938 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:04:36.415025 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:04:36.415025 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:04:36.415025 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:04:36.415536 master-0 kubenswrapper[7484]: I0312 21:04:36.415040 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:04:37.414965 master-0 kubenswrapper[7484]: I0312 21:04:37.414894 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:04:37.414965 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:04:37.414965 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:04:37.414965 master-0 kubenswrapper[7484]: healthz 
check failed Mar 12 21:04:37.416030 master-0 kubenswrapper[7484]: I0312 21:04:37.414978 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:04:38.020847 master-0 kubenswrapper[7484]: I0312 21:04:38.020727 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/cluster-policy-controller/3.log" Mar 12 21:04:38.021741 master-0 kubenswrapper[7484]: I0312 21:04:38.021692 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/cluster-policy-controller/2.log" Mar 12 21:04:38.024009 master-0 kubenswrapper[7484]: I0312 21:04:38.023956 7484 generic.go:334] "Generic (PLEG): container finished" podID="7678a2e61b792fe3be55b1c6f67b2aa2" containerID="a8ece6a63e869b9a30f6f436409dac82b1a1fa49731dbcfd8d7578397d7622b2" exitCode=255 Mar 12 21:04:38.024148 master-0 kubenswrapper[7484]: I0312 21:04:38.024032 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7678a2e61b792fe3be55b1c6f67b2aa2","Type":"ContainerDied","Data":"a8ece6a63e869b9a30f6f436409dac82b1a1fa49731dbcfd8d7578397d7622b2"} Mar 12 21:04:38.024148 master-0 kubenswrapper[7484]: I0312 21:04:38.024112 7484 scope.go:117] "RemoveContainer" containerID="f02c840b81a2d77bba25062b33d2959df737d0e9c53abeca566ed78c88468261" Mar 12 21:04:38.414944 master-0 kubenswrapper[7484]: I0312 21:04:38.414853 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: 
reason withheld Mar 12 21:04:38.414944 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:04:38.414944 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:04:38.414944 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:04:38.416001 master-0 kubenswrapper[7484]: I0312 21:04:38.414965 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:04:38.770632 master-0 kubenswrapper[7484]: I0312 21:04:38.770482 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Mar 12 21:04:38.770632 master-0 kubenswrapper[7484]: I0312 21:04:38.770543 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 12 21:04:39.035245 master-0 kubenswrapper[7484]: I0312 21:04:39.035077 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/cluster-policy-controller/3.log" Mar 12 21:04:39.414395 master-0 kubenswrapper[7484]: I0312 21:04:39.414311 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:04:39.414395 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:04:39.414395 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:04:39.414395 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:04:39.414840 master-0 kubenswrapper[7484]: I0312 21:04:39.414419 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" 
podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:04:40.414701 master-0 kubenswrapper[7484]: I0312 21:04:40.414594 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:04:40.414701 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:04:40.414701 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:04:40.414701 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:04:40.416245 master-0 kubenswrapper[7484]: I0312 21:04:40.414727 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:04:40.675127 master-0 kubenswrapper[7484]: E0312 21:04:40.674962 7484 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:04:40.675617 master-0 kubenswrapper[7484]: I0312 21:04:40.675563 7484 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"a8ece6a63e869b9a30f6f436409dac82b1a1fa49731dbcfd8d7578397d7622b2"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 12 21:04:40.675738 master-0 kubenswrapper[7484]: I0312 21:04:40.675672 7484 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="cluster-policy-controller" containerID="cri-o://a8ece6a63e869b9a30f6f436409dac82b1a1fa49731dbcfd8d7578397d7622b2" gracePeriod=30 Mar 12 21:04:41.056483 master-0 kubenswrapper[7484]: I0312 21:04:41.056378 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-8fk8w_d4a162d4-8086-4bcf-854d-7e6cd37fd4c7/snapshot-controller/3.log" Mar 12 21:04:41.057262 master-0 kubenswrapper[7484]: I0312 21:04:41.057214 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-8fk8w_d4a162d4-8086-4bcf-854d-7e6cd37fd4c7/snapshot-controller/2.log" Mar 12 21:04:41.057667 master-0 kubenswrapper[7484]: I0312 21:04:41.057499 7484 generic.go:334] "Generic (PLEG): container finished" podID="d4a162d4-8086-4bcf-854d-7e6cd37fd4c7" containerID="5a81fc8b9aacee0a8e476883c80fb6479695566cab02e8f01e21f4a95878f5e1" exitCode=1 Mar 12 21:04:41.057667 master-0 kubenswrapper[7484]: I0312 21:04:41.057631 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w" event={"ID":"d4a162d4-8086-4bcf-854d-7e6cd37fd4c7","Type":"ContainerDied","Data":"5a81fc8b9aacee0a8e476883c80fb6479695566cab02e8f01e21f4a95878f5e1"} Mar 12 21:04:41.057964 master-0 kubenswrapper[7484]: I0312 21:04:41.057701 7484 scope.go:117] "RemoveContainer" containerID="a61af5ddc801fc82532787a8099d3f864174adef92d53c028151cb9ec9d021a1" Mar 12 21:04:41.058771 master-0 kubenswrapper[7484]: I0312 21:04:41.058695 7484 scope.go:117] "RemoveContainer" containerID="5a81fc8b9aacee0a8e476883c80fb6479695566cab02e8f01e21f4a95878f5e1" Mar 12 21:04:41.059302 master-0 kubenswrapper[7484]: E0312 21:04:41.059208 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-8fk8w_openshift-cluster-storage-operator(d4a162d4-8086-4bcf-854d-7e6cd37fd4c7)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w" podUID="d4a162d4-8086-4bcf-854d-7e6cd37fd4c7" Mar 12 21:04:41.063000 master-0 kubenswrapper[7484]: I0312 21:04:41.062942 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/cluster-policy-controller/3.log" Mar 12 21:04:41.066292 master-0 kubenswrapper[7484]: I0312 21:04:41.066217 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7678a2e61b792fe3be55b1c6f67b2aa2","Type":"ContainerStarted","Data":"5d18b29f3bf2e73b004074cecf13f56b4c1095226f815f265412069c3e307415"} Mar 12 21:04:41.066606 master-0 kubenswrapper[7484]: I0312 21:04:41.066571 7484 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="d635a2c1-7d6b-46e4-9267-3313bbe06e35" Mar 12 21:04:41.066606 master-0 kubenswrapper[7484]: I0312 21:04:41.066599 7484 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="d635a2c1-7d6b-46e4-9267-3313bbe06e35" Mar 12 21:04:41.418989 master-0 kubenswrapper[7484]: I0312 21:04:41.418907 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:04:41.418989 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:04:41.418989 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:04:41.418989 master-0 
kubenswrapper[7484]: healthz check failed Mar 12 21:04:41.420371 master-0 kubenswrapper[7484]: I0312 21:04:41.420310 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:04:42.077046 master-0 kubenswrapper[7484]: I0312 21:04:42.076970 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-8fk8w_d4a162d4-8086-4bcf-854d-7e6cd37fd4c7/snapshot-controller/3.log" Mar 12 21:04:42.413892 master-0 kubenswrapper[7484]: I0312 21:04:42.413626 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:04:42.413892 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:04:42.413892 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:04:42.413892 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:04:42.413892 master-0 kubenswrapper[7484]: I0312 21:04:42.413701 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:04:43.146812 master-0 kubenswrapper[7484]: I0312 21:04:43.146721 7484 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:04:43.147933 master-0 kubenswrapper[7484]: I0312 21:04:43.147885 7484 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-0" Mar 12 21:04:43.171667 master-0 kubenswrapper[7484]: I0312 21:04:43.171569 
7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 12 21:04:43.178185 master-0 kubenswrapper[7484]: I0312 21:04:43.178105 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 12 21:04:43.186007 master-0 kubenswrapper[7484]: I0312 21:04:43.185944 7484 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 12 21:04:43.192486 master-0 kubenswrapper[7484]: I0312 21:04:43.192406 7484 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 12 21:04:43.240041 master-0 kubenswrapper[7484]: I0312 21:04:43.211770 7484 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-master-0" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4824c775-caec-441b-b5ae-9856954be691\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T21:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T21:02:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [setup etcd-ensure-env-vars etcd-resources-copy]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T21:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [etcdctl etcd etcd-metrics etcd-readyz etcd-rev]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T21:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [etcdctl etcd 
etcd-metrics etcd-readyz etcd-rev]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\
\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\",\\\"i
mageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Pending\\\"}}\" for pod \"openshift-etcd\"/\"etcd-master-0\": pods \"etcd-master-0\" not found" Mar 12 21:04:43.414517 master-0 kubenswrapper[7484]: I0312 21:04:43.414346 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:04:43.414517 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:04:43.414517 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:04:43.414517 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:04:43.414517 master-0 kubenswrapper[7484]: I0312 21:04:43.414436 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:04:44.415967 master-0 kubenswrapper[7484]: I0312 21:04:44.414792 7484 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:04:44.415967 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:04:44.415967 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:04:44.415967 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:04:44.415967 master-0 kubenswrapper[7484]: I0312 21:04:44.414964 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:04:45.415402 master-0 kubenswrapper[7484]: I0312 21:04:45.415338 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:04:45.415402 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:04:45.415402 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:04:45.415402 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:04:45.415918 master-0 kubenswrapper[7484]: I0312 21:04:45.415424 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:04:45.415918 master-0 kubenswrapper[7484]: I0312 21:04:45.415486 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" Mar 12 21:04:45.416276 master-0 kubenswrapper[7484]: I0312 21:04:45.416203 7484 kuberuntime_manager.go:1027] "Message for 
Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"91d2028136276069b3430f01cdedfd621a7ff241728670fbdc4cdf16424e1832"} pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" containerMessage="Container router failed startup probe, will be restarted" Mar 12 21:04:45.416942 master-0 kubenswrapper[7484]: I0312 21:04:45.416272 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" containerID="cri-o://91d2028136276069b3430f01cdedfd621a7ff241728670fbdc4cdf16424e1832" gracePeriod=3600 Mar 12 21:04:48.803475 master-0 kubenswrapper[7484]: I0312 21:04:48.803399 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Mar 12 21:04:49.591018 master-0 kubenswrapper[7484]: E0312 21:04:49.590920 7484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 21:04:49.771635 master-0 kubenswrapper[7484]: I0312 21:04:49.771553 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:04:49.771635 master-0 kubenswrapper[7484]: I0312 21:04:49.771624 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:04:49.945148 master-0 kubenswrapper[7484]: I0312 21:04:49.944962 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 12 21:04:49.950636 master-0 kubenswrapper[7484]: I0312 21:04:49.950552 7484 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-etcd/etcd-master-0"] Mar 12 21:04:50.152136 master-0 kubenswrapper[7484]: E0312 21:04:50.152071 7484 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-master-0\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:04:50.158192 master-0 kubenswrapper[7484]: E0312 21:04:50.158127 7484 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Mar 12 21:04:50.163182 master-0 kubenswrapper[7484]: I0312 21:04:50.163144 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 12 21:04:50.200393 master-0 kubenswrapper[7484]: I0312 21:04:50.200201 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=1.200171987 podStartE2EDuration="1.200171987s" podCreationTimestamp="2026-03-12 21:04:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:04:50.194214892 +0000 UTC m=+902.679483734" watchObservedRunningTime="2026-03-12 21:04:50.200171987 +0000 UTC m=+902.685440829" Mar 12 21:04:52.772655 master-0 kubenswrapper[7484]: I0312 21:04:52.772575 7484 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 21:04:52.773735 master-0 kubenswrapper[7484]: I0312 21:04:52.772678 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" 
containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 21:04:53.733464 master-0 kubenswrapper[7484]: I0312 21:04:53.733357 7484 scope.go:117] "RemoveContainer" containerID="5a81fc8b9aacee0a8e476883c80fb6479695566cab02e8f01e21f4a95878f5e1" Mar 12 21:04:53.734020 master-0 kubenswrapper[7484]: E0312 21:04:53.733947 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-8fk8w_openshift-cluster-storage-operator(d4a162d4-8086-4bcf-854d-7e6cd37fd4c7)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w" podUID="d4a162d4-8086-4bcf-854d-7e6cd37fd4c7" Mar 12 21:04:53.767638 master-0 kubenswrapper[7484]: I0312 21:04:53.767508 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=4.767479754 podStartE2EDuration="4.767479754s" podCreationTimestamp="2026-03-12 21:04:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:04:50.24538013 +0000 UTC m=+902.730648962" watchObservedRunningTime="2026-03-12 21:04:53.767479754 +0000 UTC m=+906.252748596" Mar 12 21:05:02.771526 master-0 kubenswrapper[7484]: I0312 21:05:02.771406 7484 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 21:05:02.771526 master-0 kubenswrapper[7484]: I0312 
21:05:02.771511 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 21:05:04.733435 master-0 kubenswrapper[7484]: I0312 21:05:04.733368 7484 scope.go:117] "RemoveContainer" containerID="5a81fc8b9aacee0a8e476883c80fb6479695566cab02e8f01e21f4a95878f5e1" Mar 12 21:05:04.734182 master-0 kubenswrapper[7484]: E0312 21:05:04.733763 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-8fk8w_openshift-cluster-storage-operator(d4a162d4-8086-4bcf-854d-7e6cd37fd4c7)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w" podUID="d4a162d4-8086-4bcf-854d-7e6cd37fd4c7" Mar 12 21:05:06.591976 master-0 kubenswrapper[7484]: E0312 21:05:06.591890 7484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 21:05:11.502775 master-0 kubenswrapper[7484]: I0312 21:05:11.502597 7484 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:45516->127.0.0.1:10357: read: connection reset by peer" start-of-body= Mar 12 21:05:11.502775 master-0 kubenswrapper[7484]: I0312 21:05:11.502687 7484 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:45516->127.0.0.1:10357: read: connection reset by peer" Mar 12 21:05:11.502775 master-0 kubenswrapper[7484]: I0312 21:05:11.502755 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:05:11.504338 master-0 kubenswrapper[7484]: I0312 21:05:11.503591 7484 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"5d18b29f3bf2e73b004074cecf13f56b4c1095226f815f265412069c3e307415"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 12 21:05:11.504338 master-0 kubenswrapper[7484]: I0312 21:05:11.503737 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="cluster-policy-controller" containerID="cri-o://5d18b29f3bf2e73b004074cecf13f56b4c1095226f815f265412069c3e307415" gracePeriod=30 Mar 12 21:05:11.528492 master-0 kubenswrapper[7484]: E0312 21:05:11.528429 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7678a2e61b792fe3be55b1c6f67b2aa2)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" Mar 12 21:05:12.318293 master-0 kubenswrapper[7484]: I0312 
21:05:12.318215 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/cluster-policy-controller/4.log" Mar 12 21:05:12.319066 master-0 kubenswrapper[7484]: I0312 21:05:12.319022 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/cluster-policy-controller/3.log" Mar 12 21:05:12.320923 master-0 kubenswrapper[7484]: I0312 21:05:12.320803 7484 generic.go:334] "Generic (PLEG): container finished" podID="7678a2e61b792fe3be55b1c6f67b2aa2" containerID="5d18b29f3bf2e73b004074cecf13f56b4c1095226f815f265412069c3e307415" exitCode=255 Mar 12 21:05:12.321029 master-0 kubenswrapper[7484]: I0312 21:05:12.320945 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7678a2e61b792fe3be55b1c6f67b2aa2","Type":"ContainerDied","Data":"5d18b29f3bf2e73b004074cecf13f56b4c1095226f815f265412069c3e307415"} Mar 12 21:05:12.321099 master-0 kubenswrapper[7484]: I0312 21:05:12.321035 7484 scope.go:117] "RemoveContainer" containerID="a8ece6a63e869b9a30f6f436409dac82b1a1fa49731dbcfd8d7578397d7622b2" Mar 12 21:05:12.322146 master-0 kubenswrapper[7484]: I0312 21:05:12.322097 7484 scope.go:117] "RemoveContainer" containerID="5d18b29f3bf2e73b004074cecf13f56b4c1095226f815f265412069c3e307415" Mar 12 21:05:12.322554 master-0 kubenswrapper[7484]: E0312 21:05:12.322499 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7678a2e61b792fe3be55b1c6f67b2aa2)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" Mar 
12 21:05:13.331562 master-0 kubenswrapper[7484]: I0312 21:05:13.331495 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-fnxjc_17d2bb40-74e2-4894-a884-7018952bdf71/cluster-baremetal-operator/1.log" Mar 12 21:05:13.332704 master-0 kubenswrapper[7484]: I0312 21:05:13.332637 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-fnxjc_17d2bb40-74e2-4894-a884-7018952bdf71/cluster-baremetal-operator/0.log" Mar 12 21:05:13.332846 master-0 kubenswrapper[7484]: I0312 21:05:13.332734 7484 generic.go:334] "Generic (PLEG): container finished" podID="17d2bb40-74e2-4894-a884-7018952bdf71" containerID="57afad4e3efc3237af416deb66bd4d026f0ff91e709bfe7cc68bb56bee784fe7" exitCode=1 Mar 12 21:05:13.332940 master-0 kubenswrapper[7484]: I0312 21:05:13.332863 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc" event={"ID":"17d2bb40-74e2-4894-a884-7018952bdf71","Type":"ContainerDied","Data":"57afad4e3efc3237af416deb66bd4d026f0ff91e709bfe7cc68bb56bee784fe7"} Mar 12 21:05:13.333119 master-0 kubenswrapper[7484]: I0312 21:05:13.333072 7484 scope.go:117] "RemoveContainer" containerID="6dc411727752ae888d72d927bcde06522ded330928aadabe0e4e42b673281367" Mar 12 21:05:13.335032 master-0 kubenswrapper[7484]: I0312 21:05:13.333780 7484 scope.go:117] "RemoveContainer" containerID="57afad4e3efc3237af416deb66bd4d026f0ff91e709bfe7cc68bb56bee784fe7" Mar 12 21:05:13.335032 master-0 kubenswrapper[7484]: E0312 21:05:13.334360 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-5cdb4c5598-fnxjc_openshift-machine-api(17d2bb40-74e2-4894-a884-7018952bdf71)\"" 
pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc" podUID="17d2bb40-74e2-4894-a884-7018952bdf71" Mar 12 21:05:13.338752 master-0 kubenswrapper[7484]: I0312 21:05:13.338675 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/cluster-policy-controller/4.log" Mar 12 21:05:14.353306 master-0 kubenswrapper[7484]: I0312 21:05:14.353166 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-fnxjc_17d2bb40-74e2-4894-a884-7018952bdf71/cluster-baremetal-operator/1.log" Mar 12 21:05:14.378394 master-0 kubenswrapper[7484]: I0312 21:05:14.378290 7484 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 21:05:14.378713 master-0 kubenswrapper[7484]: I0312 21:05:14.378317 7484 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 21:05:14.378713 master-0 kubenswrapper[7484]: I0312 21:05:14.378394 7484 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="1453f6461bf5d599ad65a4656343ee91" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 21:05:14.378713 master-0 
kubenswrapper[7484]: I0312 21:05:14.378528 7484 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="1453f6461bf5d599ad65a4656343ee91" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 21:05:17.739372 master-0 kubenswrapper[7484]: I0312 21:05:17.739283 7484 scope.go:117] "RemoveContainer" containerID="5a81fc8b9aacee0a8e476883c80fb6479695566cab02e8f01e21f4a95878f5e1" Mar 12 21:05:17.740426 master-0 kubenswrapper[7484]: E0312 21:05:17.739629 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-8fk8w_openshift-cluster-storage-operator(d4a162d4-8086-4bcf-854d-7e6cd37fd4c7)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w" podUID="d4a162d4-8086-4bcf-854d-7e6cd37fd4c7" Mar 12 21:05:19.771323 master-0 kubenswrapper[7484]: I0312 21:05:19.771227 7484 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:05:19.772529 master-0 kubenswrapper[7484]: I0312 21:05:19.772468 7484 scope.go:117] "RemoveContainer" containerID="5d18b29f3bf2e73b004074cecf13f56b4c1095226f815f265412069c3e307415" Mar 12 21:05:19.773113 master-0 kubenswrapper[7484]: E0312 21:05:19.773063 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7678a2e61b792fe3be55b1c6f67b2aa2)\"" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" Mar 12 21:05:23.595172 master-0 kubenswrapper[7484]: E0312 21:05:23.594909 7484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 21:05:23.661857 master-0 kubenswrapper[7484]: I0312 21:05:23.661203 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 21:05:27.739975 master-0 kubenswrapper[7484]: I0312 21:05:27.739878 7484 scope.go:117] "RemoveContainer" containerID="57afad4e3efc3237af416deb66bd4d026f0ff91e709bfe7cc68bb56bee784fe7" Mar 12 21:05:28.487168 master-0 kubenswrapper[7484]: I0312 21:05:28.487119 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-fnxjc_17d2bb40-74e2-4894-a884-7018952bdf71/cluster-baremetal-operator/1.log" Mar 12 21:05:28.488153 master-0 kubenswrapper[7484]: I0312 21:05:28.488079 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc" event={"ID":"17d2bb40-74e2-4894-a884-7018952bdf71","Type":"ContainerStarted","Data":"6ae8e9611ac3353f79638bf44f9fa0420bcfc4e727f3fcedfad3615d2dcb4f78"} Mar 12 21:05:32.524428 master-0 kubenswrapper[7484]: I0312 21:05:32.524359 7484 generic.go:334] "Generic (PLEG): container finished" podID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerID="91d2028136276069b3430f01cdedfd621a7ff241728670fbdc4cdf16424e1832" exitCode=0 Mar 12 21:05:32.524428 master-0 kubenswrapper[7484]: I0312 21:05:32.524430 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" 
event={"ID":"a3828a1d-8180-4c7b-b423-4488f7fc0b76","Type":"ContainerDied","Data":"91d2028136276069b3430f01cdedfd621a7ff241728670fbdc4cdf16424e1832"} Mar 12 21:05:32.525340 master-0 kubenswrapper[7484]: I0312 21:05:32.524475 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" event={"ID":"a3828a1d-8180-4c7b-b423-4488f7fc0b76","Type":"ContainerStarted","Data":"e2916ee608198e843f503ac1b99774e97d332ea70158688e35693b97b4ee8573"} Mar 12 21:05:32.525340 master-0 kubenswrapper[7484]: I0312 21:05:32.524505 7484 scope.go:117] "RemoveContainer" containerID="1acfa9d2750b23b6fbd73dc65a33ac93a90684811b79c1a559d68754a4e63f2b" Mar 12 21:05:32.733313 master-0 kubenswrapper[7484]: I0312 21:05:32.733243 7484 scope.go:117] "RemoveContainer" containerID="5d18b29f3bf2e73b004074cecf13f56b4c1095226f815f265412069c3e307415" Mar 12 21:05:32.733585 master-0 kubenswrapper[7484]: I0312 21:05:32.733449 7484 scope.go:117] "RemoveContainer" containerID="5a81fc8b9aacee0a8e476883c80fb6479695566cab02e8f01e21f4a95878f5e1" Mar 12 21:05:32.733726 master-0 kubenswrapper[7484]: E0312 21:05:32.733663 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7678a2e61b792fe3be55b1c6f67b2aa2)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" Mar 12 21:05:33.411320 master-0 kubenswrapper[7484]: I0312 21:05:33.411259 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" Mar 12 21:05:33.414475 master-0 kubenswrapper[7484]: I0312 21:05:33.414442 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:05:33.414475 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:05:33.414475 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:05:33.414475 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:05:33.414761 master-0 kubenswrapper[7484]: I0312 21:05:33.414734 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:05:33.535539 master-0 kubenswrapper[7484]: I0312 21:05:33.535445 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-8fk8w_d4a162d4-8086-4bcf-854d-7e6cd37fd4c7/snapshot-controller/3.log" Mar 12 21:05:33.538001 master-0 kubenswrapper[7484]: I0312 21:05:33.535618 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w" event={"ID":"d4a162d4-8086-4bcf-854d-7e6cd37fd4c7","Type":"ContainerStarted","Data":"b4eac54179aa0f6fee4bb1e73d72504459ad2137a7bd3a9e3938754da7f51c6d"} Mar 12 21:05:34.414781 master-0 kubenswrapper[7484]: I0312 21:05:34.414656 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:05:34.414781 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:05:34.414781 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:05:34.414781 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:05:34.415424 master-0 kubenswrapper[7484]: I0312 21:05:34.414917 7484 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:05:35.414575 master-0 kubenswrapper[7484]: I0312 21:05:35.414467 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:05:35.414575 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:05:35.414575 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:05:35.414575 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:05:35.415622 master-0 kubenswrapper[7484]: I0312 21:05:35.414572 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:05:36.416557 master-0 kubenswrapper[7484]: I0312 21:05:36.416444 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:05:36.416557 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:05:36.416557 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:05:36.416557 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:05:36.417777 master-0 kubenswrapper[7484]: I0312 21:05:36.416560 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 
12 21:05:37.414525 master-0 kubenswrapper[7484]: I0312 21:05:37.414439 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:05:37.414525 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:05:37.414525 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:05:37.414525 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:05:37.415212 master-0 kubenswrapper[7484]: I0312 21:05:37.414533 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:05:38.414374 master-0 kubenswrapper[7484]: I0312 21:05:38.414267 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:05:38.414374 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:05:38.414374 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:05:38.414374 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:05:38.415387 master-0 kubenswrapper[7484]: I0312 21:05:38.414389 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:05:39.414479 master-0 kubenswrapper[7484]: I0312 21:05:39.414361 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:05:39.414479 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:05:39.414479 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:05:39.414479 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:05:39.414479 master-0 kubenswrapper[7484]: I0312 21:05:39.414438 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:05:40.411168 master-0 kubenswrapper[7484]: I0312 21:05:40.411086 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" Mar 12 21:05:40.414326 master-0 kubenswrapper[7484]: I0312 21:05:40.414291 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:05:40.414326 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:05:40.414326 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:05:40.414326 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:05:40.415165 master-0 kubenswrapper[7484]: I0312 21:05:40.415041 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:05:40.596535 master-0 kubenswrapper[7484]: E0312 21:05:40.596429 7484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 12 21:05:41.415462 master-0 kubenswrapper[7484]: I0312 21:05:41.415330 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:05:41.415462 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:05:41.415462 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:05:41.415462 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:05:41.416490 master-0 kubenswrapper[7484]: I0312 21:05:41.415496 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:05:42.414393 master-0 kubenswrapper[7484]: I0312 21:05:42.414284 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:05:42.414393 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:05:42.414393 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:05:42.414393 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:05:42.414393 master-0 kubenswrapper[7484]: I0312 21:05:42.414391 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" 
Mar 12 21:05:43.414052 master-0 kubenswrapper[7484]: I0312 21:05:43.413957 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:05:43.414052 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:05:43.414052 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:05:43.414052 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:05:43.415382 master-0 kubenswrapper[7484]: I0312 21:05:43.414070 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:05:44.415475 master-0 kubenswrapper[7484]: I0312 21:05:44.415359 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:05:44.415475 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:05:44.415475 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:05:44.415475 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:05:44.415475 master-0 kubenswrapper[7484]: I0312 21:05:44.415467 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:05:44.734759 master-0 kubenswrapper[7484]: I0312 21:05:44.734577 7484 scope.go:117] "RemoveContainer" containerID="5d18b29f3bf2e73b004074cecf13f56b4c1095226f815f265412069c3e307415" Mar 12 21:05:44.735199 
master-0 kubenswrapper[7484]: E0312 21:05:44.735128 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7678a2e61b792fe3be55b1c6f67b2aa2)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" Mar 12 21:05:45.414186 master-0 kubenswrapper[7484]: I0312 21:05:45.414059 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:05:45.414186 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:05:45.414186 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:05:45.414186 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:05:45.414186 master-0 kubenswrapper[7484]: I0312 21:05:45.414182 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:05:46.414718 master-0 kubenswrapper[7484]: I0312 21:05:46.414579 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:05:46.414718 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:05:46.414718 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:05:46.414718 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:05:46.414718 master-0 kubenswrapper[7484]: 
I0312 21:05:46.414665 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:05:46.651414 master-0 kubenswrapper[7484]: I0312 21:05:46.651325 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-qpf68_2b71f537-1cc2-4645-8e50-23941635457c/ingress-operator/4.log" Mar 12 21:05:46.652182 master-0 kubenswrapper[7484]: I0312 21:05:46.652131 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-qpf68_2b71f537-1cc2-4645-8e50-23941635457c/ingress-operator/3.log" Mar 12 21:05:46.652814 master-0 kubenswrapper[7484]: I0312 21:05:46.652740 7484 generic.go:334] "Generic (PLEG): container finished" podID="2b71f537-1cc2-4645-8e50-23941635457c" containerID="4c4d56e2fde6c2410a3aa723a3533a20727be585533619aed7037adf0a4a8960" exitCode=1 Mar 12 21:05:46.652925 master-0 kubenswrapper[7484]: I0312 21:05:46.652821 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" event={"ID":"2b71f537-1cc2-4645-8e50-23941635457c","Type":"ContainerDied","Data":"4c4d56e2fde6c2410a3aa723a3533a20727be585533619aed7037adf0a4a8960"} Mar 12 21:05:46.652925 master-0 kubenswrapper[7484]: I0312 21:05:46.652912 7484 scope.go:117] "RemoveContainer" containerID="7eccf2e11fa509546de8eac1a0922463527e45037d75300978eef8469f91ea9d" Mar 12 21:05:46.653669 master-0 kubenswrapper[7484]: I0312 21:05:46.653621 7484 scope.go:117] "RemoveContainer" containerID="4c4d56e2fde6c2410a3aa723a3533a20727be585533619aed7037adf0a4a8960" Mar 12 21:05:46.654134 master-0 kubenswrapper[7484]: E0312 21:05:46.654074 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with 
CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-qpf68_openshift-ingress-operator(2b71f537-1cc2-4645-8e50-23941635457c)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" podUID="2b71f537-1cc2-4645-8e50-23941635457c" Mar 12 21:05:47.414712 master-0 kubenswrapper[7484]: I0312 21:05:47.414621 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:05:47.414712 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:05:47.414712 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:05:47.414712 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:05:47.414712 master-0 kubenswrapper[7484]: I0312 21:05:47.414725 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:05:47.664087 master-0 kubenswrapper[7484]: I0312 21:05:47.664018 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-qpf68_2b71f537-1cc2-4645-8e50-23941635457c/ingress-operator/4.log" Mar 12 21:05:48.419967 master-0 kubenswrapper[7484]: I0312 21:05:48.415035 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:05:48.419967 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:05:48.419967 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:05:48.419967 master-0 kubenswrapper[7484]: 
healthz check failed Mar 12 21:05:48.419967 master-0 kubenswrapper[7484]: I0312 21:05:48.415113 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:05:49.415909 master-0 kubenswrapper[7484]: I0312 21:05:49.415782 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:05:49.415909 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:05:49.415909 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:05:49.415909 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:05:49.416541 master-0 kubenswrapper[7484]: I0312 21:05:49.415926 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:05:50.414640 master-0 kubenswrapper[7484]: I0312 21:05:50.414559 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:05:50.414640 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:05:50.414640 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:05:50.414640 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:05:50.415751 master-0 kubenswrapper[7484]: I0312 21:05:50.414661 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" 
podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:05:51.415176 master-0 kubenswrapper[7484]: I0312 21:05:51.415082 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:05:51.415176 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:05:51.415176 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:05:51.415176 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:05:51.416185 master-0 kubenswrapper[7484]: I0312 21:05:51.415191 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:05:52.415462 master-0 kubenswrapper[7484]: I0312 21:05:52.415320 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:05:52.415462 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:05:52.415462 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:05:52.415462 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:05:52.416884 master-0 kubenswrapper[7484]: I0312 21:05:52.415507 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:05:53.414700 master-0 kubenswrapper[7484]: I0312 21:05:53.414605 7484 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:05:53.414700 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:05:53.414700 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:05:53.414700 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:05:53.415186 master-0 kubenswrapper[7484]: I0312 21:05:53.414723 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:05:54.415613 master-0 kubenswrapper[7484]: I0312 21:05:54.415502 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:05:54.415613 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:05:54.415613 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:05:54.415613 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:05:54.415613 master-0 kubenswrapper[7484]: I0312 21:05:54.415609 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:05:55.414079 master-0 kubenswrapper[7484]: I0312 21:05:55.413995 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:05:55.414079 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:05:55.414079 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:05:55.414079 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:05:55.414855 master-0 kubenswrapper[7484]: I0312 21:05:55.414099 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:05:56.414100 master-0 kubenswrapper[7484]: I0312 21:05:56.414026 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:05:56.414100 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:05:56.414100 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:05:56.414100 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:05:56.415427 master-0 kubenswrapper[7484]: I0312 21:05:56.415361 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:05:56.734904 master-0 kubenswrapper[7484]: I0312 21:05:56.734653 7484 scope.go:117] "RemoveContainer" containerID="5d18b29f3bf2e73b004074cecf13f56b4c1095226f815f265412069c3e307415"
Mar 12 21:05:57.414597 master-0 kubenswrapper[7484]: I0312 21:05:57.414487 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:05:57.414597 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:05:57.414597 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:05:57.414597 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:05:57.415640 master-0 kubenswrapper[7484]: I0312 21:05:57.414618 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:05:57.598736 master-0 kubenswrapper[7484]: E0312 21:05:57.598637 7484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 12 21:05:57.740687 master-0 kubenswrapper[7484]: I0312 21:05:57.740531 7484 scope.go:117] "RemoveContainer" containerID="4c4d56e2fde6c2410a3aa723a3533a20727be585533619aed7037adf0a4a8960"
Mar 12 21:05:57.744912 master-0 kubenswrapper[7484]: E0312 21:05:57.744784 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-qpf68_openshift-ingress-operator(2b71f537-1cc2-4645-8e50-23941635457c)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" podUID="2b71f537-1cc2-4645-8e50-23941635457c"
Mar 12 21:05:57.748742 master-0 kubenswrapper[7484]: I0312 21:05:57.748683 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/cluster-policy-controller/4.log"
Mar 12 21:05:57.751137 master-0 kubenswrapper[7484]: I0312 21:05:57.751078 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7678a2e61b792fe3be55b1c6f67b2aa2","Type":"ContainerStarted","Data":"a2a7bc20f4e9a2a1af8d7434056bb400f5ad20ab8f0474397aa76de25d0db770"}
Mar 12 21:05:58.415048 master-0 kubenswrapper[7484]: I0312 21:05:58.414980 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:05:58.415048 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:05:58.415048 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:05:58.415048 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:05:58.416035 master-0 kubenswrapper[7484]: I0312 21:05:58.415072 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:05:59.414462 master-0 kubenswrapper[7484]: I0312 21:05:59.414381 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:05:59.414462 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:05:59.414462 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:05:59.414462 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:05:59.414953 master-0 kubenswrapper[7484]: I0312 21:05:59.414498 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:05:59.801736 master-0 kubenswrapper[7484]: I0312 21:05:59.801585 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 21:05:59.801736 master-0 kubenswrapper[7484]: I0312 21:05:59.801674 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 21:06:00.414758 master-0 kubenswrapper[7484]: I0312 21:06:00.414632 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:06:00.414758 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:06:00.414758 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:06:00.414758 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:06:00.414758 master-0 kubenswrapper[7484]: I0312 21:06:00.414753 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:06:01.415142 master-0 kubenswrapper[7484]: I0312 21:06:01.415011 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:06:01.415142 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:06:01.415142 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:06:01.415142 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:06:01.416352 master-0 kubenswrapper[7484]: I0312 21:06:01.415153 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:06:02.415056 master-0 kubenswrapper[7484]: I0312 21:06:02.414880 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:06:02.415056 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:06:02.415056 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:06:02.415056 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:06:02.415056 master-0 kubenswrapper[7484]: I0312 21:06:02.415049 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:06:02.801981 master-0 kubenswrapper[7484]: I0312 21:06:02.801752 7484 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 12 21:06:02.802276 master-0 kubenswrapper[7484]: I0312 21:06:02.801930 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 12 21:06:03.414261 master-0 kubenswrapper[7484]: I0312 21:06:03.414197 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:06:03.414261 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:06:03.414261 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:06:03.414261 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:06:03.414533 master-0 kubenswrapper[7484]: I0312 21:06:03.414283 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:06:03.842966 master-0 kubenswrapper[7484]: I0312 21:06:03.842902 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-8fk8w_d4a162d4-8086-4bcf-854d-7e6cd37fd4c7/snapshot-controller/4.log"
Mar 12 21:06:03.843780 master-0 kubenswrapper[7484]: I0312 21:06:03.843680 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-8fk8w_d4a162d4-8086-4bcf-854d-7e6cd37fd4c7/snapshot-controller/3.log"
Mar 12 21:06:03.843780 master-0 kubenswrapper[7484]: I0312 21:06:03.843760 7484 generic.go:334] "Generic (PLEG): container finished" podID="d4a162d4-8086-4bcf-854d-7e6cd37fd4c7" containerID="b4eac54179aa0f6fee4bb1e73d72504459ad2137a7bd3a9e3938754da7f51c6d" exitCode=1
Mar 12 21:06:03.844042 master-0 kubenswrapper[7484]: I0312 21:06:03.843848 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w" event={"ID":"d4a162d4-8086-4bcf-854d-7e6cd37fd4c7","Type":"ContainerDied","Data":"b4eac54179aa0f6fee4bb1e73d72504459ad2137a7bd3a9e3938754da7f51c6d"}
Mar 12 21:06:03.844042 master-0 kubenswrapper[7484]: I0312 21:06:03.843913 7484 scope.go:117] "RemoveContainer" containerID="5a81fc8b9aacee0a8e476883c80fb6479695566cab02e8f01e21f4a95878f5e1"
Mar 12 21:06:03.844803 master-0 kubenswrapper[7484]: I0312 21:06:03.844727 7484 scope.go:117] "RemoveContainer" containerID="b4eac54179aa0f6fee4bb1e73d72504459ad2137a7bd3a9e3938754da7f51c6d"
Mar 12 21:06:03.845359 master-0 kubenswrapper[7484]: E0312 21:06:03.845277 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-8fk8w_openshift-cluster-storage-operator(d4a162d4-8086-4bcf-854d-7e6cd37fd4c7)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w" podUID="d4a162d4-8086-4bcf-854d-7e6cd37fd4c7"
Mar 12 21:06:04.414368 master-0 kubenswrapper[7484]: I0312 21:06:04.414275 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:06:04.414368 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:06:04.414368 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:06:04.414368 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:06:04.414792 master-0 kubenswrapper[7484]: I0312 21:06:04.414391 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:06:04.857534 master-0 kubenswrapper[7484]: I0312 21:06:04.857487 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-8fk8w_d4a162d4-8086-4bcf-854d-7e6cd37fd4c7/snapshot-controller/4.log"
Mar 12 21:06:05.414249 master-0 kubenswrapper[7484]: I0312 21:06:05.414180 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:06:05.414249 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:06:05.414249 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:06:05.414249 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:06:05.414591 master-0 kubenswrapper[7484]: I0312 21:06:05.414275 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:06:06.414494 master-0 kubenswrapper[7484]: I0312 21:06:06.414407 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:06:06.414494 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:06:06.414494 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:06:06.414494 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:06:06.415558 master-0 kubenswrapper[7484]: I0312 21:06:06.414514 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:06:07.415441 master-0 kubenswrapper[7484]: I0312 21:06:07.415319 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:06:07.415441 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:06:07.415441 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:06:07.415441 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:06:07.416650 master-0 kubenswrapper[7484]: I0312 21:06:07.415446 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:06:08.414652 master-0 kubenswrapper[7484]: I0312 21:06:08.414559 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:06:08.414652 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:06:08.414652 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:06:08.414652 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:06:08.414652 master-0 kubenswrapper[7484]: I0312 21:06:08.414634 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:06:09.414074 master-0 kubenswrapper[7484]: I0312 21:06:09.413884 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:06:09.414074 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:06:09.414074 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:06:09.414074 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:06:09.415355 master-0 kubenswrapper[7484]: I0312 21:06:09.414116 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:06:10.415259 master-0 kubenswrapper[7484]: I0312 21:06:10.415162 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:06:10.415259 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:06:10.415259 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:06:10.415259 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:06:10.416220 master-0 kubenswrapper[7484]: I0312 21:06:10.415293 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:06:11.414207 master-0 kubenswrapper[7484]: I0312 21:06:11.414097 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:06:11.414207 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:06:11.414207 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:06:11.414207 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:06:11.414207 master-0 kubenswrapper[7484]: I0312 21:06:11.414193 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:06:12.413968 master-0 kubenswrapper[7484]: I0312 21:06:12.413862 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:06:12.413968 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:06:12.413968 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:06:12.413968 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:06:12.414867 master-0 kubenswrapper[7484]: I0312 21:06:12.413978 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:06:12.734576 master-0 kubenswrapper[7484]: I0312 21:06:12.734363 7484 scope.go:117] "RemoveContainer" containerID="4c4d56e2fde6c2410a3aa723a3533a20727be585533619aed7037adf0a4a8960"
Mar 12 21:06:12.734951 master-0 kubenswrapper[7484]: E0312 21:06:12.734896 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-qpf68_openshift-ingress-operator(2b71f537-1cc2-4645-8e50-23941635457c)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" podUID="2b71f537-1cc2-4645-8e50-23941635457c"
Mar 12 21:06:12.771427 master-0 kubenswrapper[7484]: I0312 21:06:12.771357 7484 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 12 21:06:12.771712 master-0 kubenswrapper[7484]: I0312 21:06:12.771662 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 12 21:06:12.924126 master-0 kubenswrapper[7484]: I0312 21:06:12.924044 7484 generic.go:334] "Generic (PLEG): container finished" podID="96bd86df-2101-47f5-844b-1332261c66f1" containerID="249a7dffa361592f6c3fc3dfb8d871762e2347411c14fdf281e698f89aa84b04" exitCode=0
Mar 12 21:06:12.924756 master-0 kubenswrapper[7484]: I0312 21:06:12.924684 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-f2kg4" event={"ID":"96bd86df-2101-47f5-844b-1332261c66f1","Type":"ContainerDied","Data":"249a7dffa361592f6c3fc3dfb8d871762e2347411c14fdf281e698f89aa84b04"}
Mar 12 21:06:12.925041 master-0 kubenswrapper[7484]: I0312 21:06:12.925011 7484 scope.go:117] "RemoveContainer" containerID="e6ccd74a2af6fdce722a0e3dca22b3f124868515fcf641e0b36f66e322f8d4c3"
Mar 12 21:06:12.925977 master-0 kubenswrapper[7484]: I0312 21:06:12.925926 7484 scope.go:117] "RemoveContainer" containerID="249a7dffa361592f6c3fc3dfb8d871762e2347411c14fdf281e698f89aa84b04"
Mar 12 21:06:13.414515 master-0 kubenswrapper[7484]: I0312 21:06:13.414324 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:06:13.414515 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:06:13.414515 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:06:13.414515 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:06:13.414515 master-0 kubenswrapper[7484]: I0312 21:06:13.414420 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:06:13.935638 master-0 kubenswrapper[7484]: I0312 21:06:13.935537 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-f2kg4" event={"ID":"96bd86df-2101-47f5-844b-1332261c66f1","Type":"ContainerStarted","Data":"dc94063ddbbcf2bfb7c1d6b41f66ae48010f107973022746b2fc570920f50598"}
Mar 12 21:06:14.414862 master-0 kubenswrapper[7484]: I0312 21:06:14.414742 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:06:14.414862 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:06:14.414862 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:06:14.414862 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:06:14.415911 master-0 kubenswrapper[7484]: I0312 21:06:14.414896 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:06:14.600404 master-0 kubenswrapper[7484]: E0312 21:06:14.600274 7484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 12 21:06:15.413639 master-0 kubenswrapper[7484]: I0312 21:06:15.413575 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:06:15.413639 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:06:15.413639 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:06:15.413639 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:06:15.414182 master-0 kubenswrapper[7484]: I0312 21:06:15.413644 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:06:16.415041 master-0 kubenswrapper[7484]: I0312 21:06:16.414947 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:06:16.415041 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:06:16.415041 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:06:16.415041 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:06:16.416081 master-0 kubenswrapper[7484]: I0312 21:06:16.415050 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:06:16.734560 master-0 kubenswrapper[7484]: I0312 21:06:16.734331 7484 scope.go:117] "RemoveContainer" containerID="b4eac54179aa0f6fee4bb1e73d72504459ad2137a7bd3a9e3938754da7f51c6d"
Mar 12 21:06:16.734958 master-0 kubenswrapper[7484]: E0312 21:06:16.734804 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-8fk8w_openshift-cluster-storage-operator(d4a162d4-8086-4bcf-854d-7e6cd37fd4c7)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w" podUID="d4a162d4-8086-4bcf-854d-7e6cd37fd4c7"
Mar 12 21:06:17.414909 master-0 kubenswrapper[7484]: I0312 21:06:17.414794 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:06:17.414909 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:06:17.414909 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:06:17.414909 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:06:17.416214 master-0 kubenswrapper[7484]: I0312 21:06:17.414946 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:06:18.413973 master-0 kubenswrapper[7484]: I0312 21:06:18.413903 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:06:18.413973 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:06:18.413973 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:06:18.413973 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:06:18.414438 master-0 kubenswrapper[7484]: I0312 21:06:18.413999 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:06:19.415359 master-0 kubenswrapper[7484]: I0312 21:06:19.415264 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:06:19.415359 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:06:19.415359 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:06:19.415359 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:06:19.415359 master-0 kubenswrapper[7484]: I0312 21:06:19.415351 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:06:20.415951 master-0 kubenswrapper[7484]: I0312 21:06:20.414522 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:06:20.415951 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:06:20.415951 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:06:20.415951 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:06:20.415951 master-0 kubenswrapper[7484]: I0312 21:06:20.414601 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:06:21.029096 master-0 kubenswrapper[7484]: I0312 21:06:21.028919 7484 generic.go:334] "Generic (PLEG): container finished" podID="5471994f-769e-4124-b7d0-01f5358fc18f" containerID="a84299e61aaa1595e3e07b0769d34f43309447a83e058608971fd9878868932d" exitCode=0
Mar 12 21:06:21.029096 master-0 kubenswrapper[7484]: I0312 21:06:21.028996 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" event={"ID":"5471994f-769e-4124-b7d0-01f5358fc18f","Type":"ContainerDied","Data":"a84299e61aaa1595e3e07b0769d34f43309447a83e058608971fd9878868932d"}
Mar 12 21:06:21.029096 master-0 kubenswrapper[7484]: I0312 21:06:21.029058 7484 scope.go:117] "RemoveContainer" containerID="7ca674391c532a062d85de3aad380be9933e23e79819377498f98ef87ee56f1c"
Mar 12 21:06:21.029803 master-0 kubenswrapper[7484]: I0312 21:06:21.029738 7484 scope.go:117] "RemoveContainer" containerID="a84299e61aaa1595e3e07b0769d34f43309447a83e058608971fd9878868932d"
Mar 12 21:06:21.415342 master-0 kubenswrapper[7484]: I0312 21:06:21.415222 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:06:21.415342 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:06:21.415342 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:06:21.415342 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:06:21.417043 master-0 kubenswrapper[7484]: I0312 21:06:21.415339 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:06:22.039612 master-0 kubenswrapper[7484]: I0312 21:06:22.039508 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" event={"ID":"5471994f-769e-4124-b7d0-01f5358fc18f","Type":"ContainerStarted","Data":"a790a5eecf490735d172d348cc9f5f9da39c3508567b54c4d0d8a0da2489dbc8"}
Mar 12 21:06:22.416023 master-0 kubenswrapper[7484]: I0312 21:06:22.415906 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:06:22.416023 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:06:22.416023 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:06:22.416023 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:06:22.417006 master-0 kubenswrapper[7484]: I0312 21:06:22.416049 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:06:22.772429 master-0 kubenswrapper[7484]: I0312 21:06:22.772242 7484 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 12 21:06:22.772429 master-0 kubenswrapper[7484]: I0312 21:06:22.772357 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 12 21:06:22.772746 master-0 kubenswrapper[7484]: I0312 21:06:22.772449 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 21:06:22.773625 master-0 kubenswrapper[7484]: I0312 21:06:22.773559 7484 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"a2a7bc20f4e9a2a1af8d7434056bb400f5ad20ab8f0474397aa76de25d0db770"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Mar 12 21:06:22.773765 master-0 kubenswrapper[7484]: I0312 21:06:22.773727 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="cluster-policy-controller" containerID="cri-o://a2a7bc20f4e9a2a1af8d7434056bb400f5ad20ab8f0474397aa76de25d0db770" gracePeriod=30
Mar 12 21:06:22.907184 master-0 kubenswrapper[7484]: E0312 21:06:22.907111 7484 pod_workers.go:1301] "Error syncing
pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7678a2e61b792fe3be55b1c6f67b2aa2)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" Mar 12 21:06:23.051459 master-0 kubenswrapper[7484]: I0312 21:06:23.051278 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/cluster-policy-controller/5.log" Mar 12 21:06:23.052482 master-0 kubenswrapper[7484]: I0312 21:06:23.052407 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/cluster-policy-controller/4.log" Mar 12 21:06:23.054717 master-0 kubenswrapper[7484]: I0312 21:06:23.054662 7484 generic.go:334] "Generic (PLEG): container finished" podID="7678a2e61b792fe3be55b1c6f67b2aa2" containerID="a2a7bc20f4e9a2a1af8d7434056bb400f5ad20ab8f0474397aa76de25d0db770" exitCode=255 Mar 12 21:06:23.054868 master-0 kubenswrapper[7484]: I0312 21:06:23.054717 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7678a2e61b792fe3be55b1c6f67b2aa2","Type":"ContainerDied","Data":"a2a7bc20f4e9a2a1af8d7434056bb400f5ad20ab8f0474397aa76de25d0db770"} Mar 12 21:06:23.054868 master-0 kubenswrapper[7484]: I0312 21:06:23.054801 7484 scope.go:117] "RemoveContainer" containerID="5d18b29f3bf2e73b004074cecf13f56b4c1095226f815f265412069c3e307415" Mar 12 21:06:23.055994 master-0 kubenswrapper[7484]: I0312 21:06:23.055771 7484 scope.go:117] "RemoveContainer" containerID="a2a7bc20f4e9a2a1af8d7434056bb400f5ad20ab8f0474397aa76de25d0db770" Mar 12 21:06:23.056804 master-0 kubenswrapper[7484]: E0312 
21:06:23.056509 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7678a2e61b792fe3be55b1c6f67b2aa2)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" Mar 12 21:06:23.414874 master-0 kubenswrapper[7484]: I0312 21:06:23.414527 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:06:23.414874 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:06:23.414874 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:06:23.414874 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:06:23.414874 master-0 kubenswrapper[7484]: I0312 21:06:23.414634 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:06:24.066067 master-0 kubenswrapper[7484]: I0312 21:06:24.065959 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/cluster-policy-controller/5.log" Mar 12 21:06:24.070514 master-0 kubenswrapper[7484]: I0312 21:06:24.070424 7484 generic.go:334] "Generic (PLEG): container finished" podID="4a67ecf3-823d-4948-a5cb-8bd1eb9f259c" containerID="1d13c664a16a834bb594ce779624d3af44ce1b13763cae9c9fac074c11de4252" exitCode=0 Mar 12 21:06:24.070667 master-0 kubenswrapper[7484]: I0312 21:06:24.070504 7484 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-269gt" event={"ID":"4a67ecf3-823d-4948-a5cb-8bd1eb9f259c","Type":"ContainerDied","Data":"1d13c664a16a834bb594ce779624d3af44ce1b13763cae9c9fac074c11de4252"} Mar 12 21:06:24.070667 master-0 kubenswrapper[7484]: I0312 21:06:24.070579 7484 scope.go:117] "RemoveContainer" containerID="e0a2c06e46bef70f1a83d73f16311ff0724aeeddd6bc3dab0e6a4952ddc0acb3" Mar 12 21:06:24.071445 master-0 kubenswrapper[7484]: I0312 21:06:24.071380 7484 scope.go:117] "RemoveContainer" containerID="1d13c664a16a834bb594ce779624d3af44ce1b13763cae9c9fac074c11de4252" Mar 12 21:06:24.421076 master-0 kubenswrapper[7484]: I0312 21:06:24.420458 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:06:24.421076 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:06:24.421076 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:06:24.421076 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:06:24.421076 master-0 kubenswrapper[7484]: I0312 21:06:24.420595 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:06:25.083001 master-0 kubenswrapper[7484]: I0312 21:06:25.082915 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-269gt" event={"ID":"4a67ecf3-823d-4948-a5cb-8bd1eb9f259c","Type":"ContainerStarted","Data":"bfbdb9e84cd755d114a3f211f3b30f783158af059829252c90ff94c229050767"} Mar 12 21:06:25.414249 master-0 kubenswrapper[7484]: I0312 21:06:25.414033 7484 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:06:25.414249 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:06:25.414249 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:06:25.414249 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:06:25.414249 master-0 kubenswrapper[7484]: I0312 21:06:25.414116 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:06:25.733648 master-0 kubenswrapper[7484]: I0312 21:06:25.733459 7484 scope.go:117] "RemoveContainer" containerID="4c4d56e2fde6c2410a3aa723a3533a20727be585533619aed7037adf0a4a8960" Mar 12 21:06:25.733908 master-0 kubenswrapper[7484]: E0312 21:06:25.733862 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-qpf68_openshift-ingress-operator(2b71f537-1cc2-4645-8e50-23941635457c)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" podUID="2b71f537-1cc2-4645-8e50-23941635457c" Mar 12 21:06:26.414521 master-0 kubenswrapper[7484]: I0312 21:06:26.414399 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:06:26.414521 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:06:26.414521 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 
21:06:26.414521 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:06:26.414521 master-0 kubenswrapper[7484]: I0312 21:06:26.414488 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:06:27.414771 master-0 kubenswrapper[7484]: I0312 21:06:27.414651 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:06:27.414771 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:06:27.414771 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:06:27.414771 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:06:27.415522 master-0 kubenswrapper[7484]: I0312 21:06:27.414797 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:06:28.111076 master-0 kubenswrapper[7484]: I0312 21:06:28.111004 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-sh67s_67e68ff0-f54d-4973-bbe7-ed43ce542bc0/machine-api-operator/0.log" Mar 12 21:06:28.111689 master-0 kubenswrapper[7484]: I0312 21:06:28.111644 7484 generic.go:334] "Generic (PLEG): container finished" podID="67e68ff0-f54d-4973-bbe7-ed43ce542bc0" containerID="b7d1be82f9f49361682b3eacda43c7c489bc2b5e8762684eea2266a906f1e97a" exitCode=255 Mar 12 21:06:28.111861 master-0 kubenswrapper[7484]: I0312 21:06:28.111707 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/machine-api-operator-84bf6db4f9-sh67s" event={"ID":"67e68ff0-f54d-4973-bbe7-ed43ce542bc0","Type":"ContainerDied","Data":"b7d1be82f9f49361682b3eacda43c7c489bc2b5e8762684eea2266a906f1e97a"} Mar 12 21:06:28.113063 master-0 kubenswrapper[7484]: I0312 21:06:28.113013 7484 scope.go:117] "RemoveContainer" containerID="b7d1be82f9f49361682b3eacda43c7c489bc2b5e8762684eea2266a906f1e97a" Mar 12 21:06:28.113659 master-0 kubenswrapper[7484]: I0312 21:06:28.113611 7484 generic.go:334] "Generic (PLEG): container finished" podID="900228dd-2d21-4759-87da-b027b0134ad8" containerID="1746524fbf252ae2860d518e4df6a02c7aaf28a067d9493a2d0daedd8741f97f" exitCode=0 Mar 12 21:06:28.113790 master-0 kubenswrapper[7484]: I0312 21:06:28.113658 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5" event={"ID":"900228dd-2d21-4759-87da-b027b0134ad8","Type":"ContainerDied","Data":"1746524fbf252ae2860d518e4df6a02c7aaf28a067d9493a2d0daedd8741f97f"} Mar 12 21:06:28.113790 master-0 kubenswrapper[7484]: I0312 21:06:28.113698 7484 scope.go:117] "RemoveContainer" containerID="86833dd41b14e8094351920793b00866703e058d522b46fbdbf250fbcc14c834" Mar 12 21:06:28.115230 master-0 kubenswrapper[7484]: I0312 21:06:28.114844 7484 scope.go:117] "RemoveContainer" containerID="1746524fbf252ae2860d518e4df6a02c7aaf28a067d9493a2d0daedd8741f97f" Mar 12 21:06:28.117672 master-0 kubenswrapper[7484]: I0312 21:06:28.117541 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-r6rcq_b71376ea-e248-48fc-b2c4-1de7236ddd31/cluster-autoscaler-operator/0.log" Mar 12 21:06:28.119348 master-0 kubenswrapper[7484]: I0312 21:06:28.118482 7484 generic.go:334] "Generic (PLEG): container finished" podID="b71376ea-e248-48fc-b2c4-1de7236ddd31" containerID="1174e3de7390f133d9714b1c4e07a2aef601c6b39a42d38f1fea541e106e1fb1" exitCode=255 Mar 12 21:06:28.119497 master-0 
kubenswrapper[7484]: I0312 21:06:28.119151 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-r6rcq" event={"ID":"b71376ea-e248-48fc-b2c4-1de7236ddd31","Type":"ContainerDied","Data":"1174e3de7390f133d9714b1c4e07a2aef601c6b39a42d38f1fea541e106e1fb1"} Mar 12 21:06:28.120200 master-0 kubenswrapper[7484]: I0312 21:06:28.120164 7484 scope.go:117] "RemoveContainer" containerID="1174e3de7390f133d9714b1c4e07a2aef601c6b39a42d38f1fea541e106e1fb1" Mar 12 21:06:28.140161 master-0 kubenswrapper[7484]: I0312 21:06:28.139056 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-62t2f_fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6/network-operator/0.log" Mar 12 21:06:28.140161 master-0 kubenswrapper[7484]: I0312 21:06:28.139116 7484 generic.go:334] "Generic (PLEG): container finished" podID="fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6" containerID="72fca1fe5edaa514a27832ab602fe41af2b798cb5366c953a186e585a0605c57" exitCode=0 Mar 12 21:06:28.140161 master-0 kubenswrapper[7484]: I0312 21:06:28.139198 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-62t2f" event={"ID":"fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6","Type":"ContainerDied","Data":"72fca1fe5edaa514a27832ab602fe41af2b798cb5366c953a186e585a0605c57"} Mar 12 21:06:28.140161 master-0 kubenswrapper[7484]: I0312 21:06:28.139766 7484 scope.go:117] "RemoveContainer" containerID="72fca1fe5edaa514a27832ab602fe41af2b798cb5366c953a186e585a0605c57" Mar 12 21:06:28.141554 master-0 kubenswrapper[7484]: I0312 21:06:28.141495 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-69rp9_981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9/cluster-node-tuning-operator/1.log" Mar 12 21:06:28.142373 master-0 kubenswrapper[7484]: I0312 21:06:28.142325 7484 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-69rp9_981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9/cluster-node-tuning-operator/0.log" Mar 12 21:06:28.142520 master-0 kubenswrapper[7484]: I0312 21:06:28.142407 7484 generic.go:334] "Generic (PLEG): container finished" podID="981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9" containerID="1152dcaad32a43ba9e378941f51d853a2e7fc508d86ad05335f3c348f68fdd30" exitCode=1 Mar 12 21:06:28.142620 master-0 kubenswrapper[7484]: I0312 21:06:28.142570 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" event={"ID":"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9","Type":"ContainerDied","Data":"1152dcaad32a43ba9e378941f51d853a2e7fc508d86ad05335f3c348f68fdd30"} Mar 12 21:06:28.143347 master-0 kubenswrapper[7484]: I0312 21:06:28.143299 7484 scope.go:117] "RemoveContainer" containerID="1152dcaad32a43ba9e378941f51d853a2e7fc508d86ad05335f3c348f68fdd30" Mar 12 21:06:28.149215 master-0 kubenswrapper[7484]: I0312 21:06:28.149084 7484 generic.go:334] "Generic (PLEG): container finished" podID="2604b035-853c-42b7-a562-07d46178868a" containerID="4c1c1c1b8851a87caaa47906af218c648432043d5537dde4d7c6aa9df599a39a" exitCode=0 Mar 12 21:06:28.149215 master-0 kubenswrapper[7484]: I0312 21:06:28.149150 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-kf949" event={"ID":"2604b035-853c-42b7-a562-07d46178868a","Type":"ContainerDied","Data":"4c1c1c1b8851a87caaa47906af218c648432043d5537dde4d7c6aa9df599a39a"} Mar 12 21:06:28.150139 master-0 kubenswrapper[7484]: I0312 21:06:28.149767 7484 scope.go:117] "RemoveContainer" containerID="4c1c1c1b8851a87caaa47906af218c648432043d5537dde4d7c6aa9df599a39a" Mar 12 21:06:28.150716 master-0 kubenswrapper[7484]: I0312 21:06:28.150516 7484 scope.go:117] "RemoveContainer" 
containerID="d9fa8a123cfb8c14404c75a08b2365da17bc3d4b0cf2e193ac612689b8a4fc37" Mar 12 21:06:28.152331 master-0 kubenswrapper[7484]: I0312 21:06:28.152076 7484 generic.go:334] "Generic (PLEG): container finished" podID="7f3afe47-c537-420c-b5be-1cad612e119d" containerID="36e67678697aff60b4f84c6384733c369857b33eb259f71b1dbb059fc06204fb" exitCode=0 Mar 12 21:06:28.152331 master-0 kubenswrapper[7484]: I0312 21:06:28.152156 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-ftxzs" event={"ID":"7f3afe47-c537-420c-b5be-1cad612e119d","Type":"ContainerDied","Data":"36e67678697aff60b4f84c6384733c369857b33eb259f71b1dbb059fc06204fb"} Mar 12 21:06:28.152783 master-0 kubenswrapper[7484]: I0312 21:06:28.152735 7484 scope.go:117] "RemoveContainer" containerID="36e67678697aff60b4f84c6384733c369857b33eb259f71b1dbb059fc06204fb" Mar 12 21:06:28.157703 master-0 kubenswrapper[7484]: I0312 21:06:28.157672 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-69b6fc6b88-f62j6_a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d/service-ca-operator/1.log" Mar 12 21:06:28.157787 master-0 kubenswrapper[7484]: I0312 21:06:28.157741 7484 generic.go:334] "Generic (PLEG): container finished" podID="a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d" containerID="083e8e2171f84572bdd5f30426ffba317f16817f3ae58d7c00019c197700b69d" exitCode=0 Mar 12 21:06:28.157883 master-0 kubenswrapper[7484]: I0312 21:06:28.157802 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-f62j6" event={"ID":"a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d","Type":"ContainerDied","Data":"083e8e2171f84572bdd5f30426ffba317f16817f3ae58d7c00019c197700b69d"} Mar 12 21:06:28.159106 master-0 kubenswrapper[7484]: I0312 21:06:28.159063 7484 scope.go:117] "RemoveContainer" containerID="083e8e2171f84572bdd5f30426ffba317f16817f3ae58d7c00019c197700b69d" Mar 12 
21:06:28.161304 master-0 kubenswrapper[7484]: I0312 21:06:28.161255 7484 generic.go:334] "Generic (PLEG): container finished" podID="508cb83e-6f25-4235-8c56-b25b762ebcad" containerID="b9da34034a4775625020d205d9436694d65b54d0723190096309ce81aab32e93" exitCode=0 Mar 12 21:06:28.161394 master-0 kubenswrapper[7484]: I0312 21:06:28.161360 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-7p8w8" event={"ID":"508cb83e-6f25-4235-8c56-b25b762ebcad","Type":"ContainerDied","Data":"b9da34034a4775625020d205d9436694d65b54d0723190096309ce81aab32e93"} Mar 12 21:06:28.162410 master-0 kubenswrapper[7484]: I0312 21:06:28.162365 7484 scope.go:117] "RemoveContainer" containerID="b9da34034a4775625020d205d9436694d65b54d0723190096309ce81aab32e93" Mar 12 21:06:28.166384 master-0 kubenswrapper[7484]: I0312 21:06:28.166338 7484 generic.go:334] "Generic (PLEG): container finished" podID="135ec6f3-fbc0-4840-a4b1-c1124c705161" containerID="46ded837719c01c62e0a027c72064dacb46bd2417ff8fe1a0f12a339ce0c296a" exitCode=0 Mar 12 21:06:28.166579 master-0 kubenswrapper[7484]: I0312 21:06:28.166433 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-4zjqp" event={"ID":"135ec6f3-fbc0-4840-a4b1-c1124c705161","Type":"ContainerDied","Data":"46ded837719c01c62e0a027c72064dacb46bd2417ff8fe1a0f12a339ce0c296a"} Mar 12 21:06:28.167493 master-0 kubenswrapper[7484]: I0312 21:06:28.167451 7484 scope.go:117] "RemoveContainer" containerID="46ded837719c01c62e0a027c72064dacb46bd2417ff8fe1a0f12a339ce0c296a" Mar 12 21:06:28.171669 master-0 kubenswrapper[7484]: I0312 21:06:28.171621 7484 generic.go:334] "Generic (PLEG): container finished" podID="226cb3a1-984f-4410-96e6-c007131dc074" containerID="eb233dad973c14b986649aa9671fed2fa87adb0d7e06e94ac63133ff5838cbbe" exitCode=0 Mar 12 21:06:28.171789 master-0 kubenswrapper[7484]: I0312 21:06:28.171757 7484 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh" event={"ID":"226cb3a1-984f-4410-96e6-c007131dc074","Type":"ContainerDied","Data":"eb233dad973c14b986649aa9671fed2fa87adb0d7e06e94ac63133ff5838cbbe"} Mar 12 21:06:28.172459 master-0 kubenswrapper[7484]: I0312 21:06:28.172427 7484 scope.go:117] "RemoveContainer" containerID="eb233dad973c14b986649aa9671fed2fa87adb0d7e06e94ac63133ff5838cbbe" Mar 12 21:06:28.174533 master-0 kubenswrapper[7484]: I0312 21:06:28.174472 7484 generic.go:334] "Generic (PLEG): container finished" podID="90f0e4da-71d4-4c4e-a2fc-9ef588daaf72" containerID="abe372f4a5201ee9f2be20bd5b5a3dc0db95881ce3285f6e1c8475b0ef9714a6" exitCode=0 Mar 12 21:06:28.174628 master-0 kubenswrapper[7484]: I0312 21:06:28.174563 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-c7jz8" event={"ID":"90f0e4da-71d4-4c4e-a2fc-9ef588daaf72","Type":"ContainerDied","Data":"abe372f4a5201ee9f2be20bd5b5a3dc0db95881ce3285f6e1c8475b0ef9714a6"} Mar 12 21:06:28.174992 master-0 kubenswrapper[7484]: I0312 21:06:28.174963 7484 scope.go:117] "RemoveContainer" containerID="abe372f4a5201ee9f2be20bd5b5a3dc0db95881ce3285f6e1c8475b0ef9714a6" Mar 12 21:06:28.179005 master-0 kubenswrapper[7484]: I0312 21:06:28.178961 7484 generic.go:334] "Generic (PLEG): container finished" podID="980191fe-c62c-4b9e-879c-38fa8ce0a58b" containerID="812a4d4164b66d6dc3ca8693d14eb3fcdb3c84deb2faed8cede318f4eacda9e5" exitCode=0 Mar 12 21:06:28.179072 master-0 kubenswrapper[7484]: I0312 21:06:28.179035 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" event={"ID":"980191fe-c62c-4b9e-879c-38fa8ce0a58b","Type":"ContainerDied","Data":"812a4d4164b66d6dc3ca8693d14eb3fcdb3c84deb2faed8cede318f4eacda9e5"} Mar 12 21:06:28.179505 master-0 kubenswrapper[7484]: I0312 21:06:28.179465 7484 scope.go:117] "RemoveContainer" 
containerID="812a4d4164b66d6dc3ca8693d14eb3fcdb3c84deb2faed8cede318f4eacda9e5" Mar 12 21:06:28.192365 master-0 kubenswrapper[7484]: I0312 21:06:28.190057 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-vp2hs_7623a5c6-47a9-4b75-bda8-c0a2d7c67272/openshift-controller-manager-operator/1.log" Mar 12 21:06:28.192365 master-0 kubenswrapper[7484]: I0312 21:06:28.190133 7484 generic.go:334] "Generic (PLEG): container finished" podID="7623a5c6-47a9-4b75-bda8-c0a2d7c67272" containerID="d768bc84b40192023bb465579879b2b58033844ecac405b3a22bcb789eb76d17" exitCode=0 Mar 12 21:06:28.192365 master-0 kubenswrapper[7484]: I0312 21:06:28.190175 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-vp2hs" event={"ID":"7623a5c6-47a9-4b75-bda8-c0a2d7c67272","Type":"ContainerDied","Data":"d768bc84b40192023bb465579879b2b58033844ecac405b3a22bcb789eb76d17"} Mar 12 21:06:28.192365 master-0 kubenswrapper[7484]: I0312 21:06:28.190991 7484 scope.go:117] "RemoveContainer" containerID="d768bc84b40192023bb465579879b2b58033844ecac405b3a22bcb789eb76d17" Mar 12 21:06:28.244195 master-0 kubenswrapper[7484]: I0312 21:06:28.244155 7484 scope.go:117] "RemoveContainer" containerID="ab35500d408324bc8f259a25814698a0950deafc4c75bcf972576200d718f280" Mar 12 21:06:28.339692 master-0 kubenswrapper[7484]: I0312 21:06:28.339651 7484 scope.go:117] "RemoveContainer" containerID="6afc544c34ddbc5e6039dbdbeff607333e002100669f75e0bf5ff219b035f729" Mar 12 21:06:28.427475 master-0 kubenswrapper[7484]: I0312 21:06:28.421578 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:06:28.427475 master-0 kubenswrapper[7484]: 
[-]has-synced failed: reason withheld Mar 12 21:06:28.427475 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:06:28.427475 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:06:28.427475 master-0 kubenswrapper[7484]: I0312 21:06:28.421642 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:06:28.580940 master-0 kubenswrapper[7484]: I0312 21:06:28.580021 7484 scope.go:117] "RemoveContainer" containerID="47c0e0d21aabebc91fcbee939e9b068c6a5287ab73aa0a38e830a0c4a7aa5051" Mar 12 21:06:28.662381 master-0 kubenswrapper[7484]: I0312 21:06:28.662142 7484 scope.go:117] "RemoveContainer" containerID="15d0d26804c9c80b6799cf88166882aaa90b3995069ea002665cca02980190e3" Mar 12 21:06:28.693731 master-0 kubenswrapper[7484]: I0312 21:06:28.693690 7484 scope.go:117] "RemoveContainer" containerID="bd647ed768dc3b1c577a2e60500ea1b4e6063ec0776cd15c9345ee26565e55c6" Mar 12 21:06:28.761276 master-0 kubenswrapper[7484]: I0312 21:06:28.761219 7484 scope.go:117] "RemoveContainer" containerID="9fe9854a1e57408e0f50e0954b9dd49841bab1b9d1e76d61252c031948eff8b1" Mar 12 21:06:28.825780 master-0 kubenswrapper[7484]: I0312 21:06:28.825649 7484 scope.go:117] "RemoveContainer" containerID="1726ad62deed5adf886b68145fe6223edb7fe9f83fb593561c0b8bdb5aef13cf" Mar 12 21:06:29.197149 master-0 kubenswrapper[7484]: I0312 21:06:29.196917 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-c7jz8" event={"ID":"90f0e4da-71d4-4c4e-a2fc-9ef588daaf72","Type":"ContainerStarted","Data":"d47935624d8ed8421bdc4675671917703554808ea74294339cbb649dac992f35"} Mar 12 21:06:29.199461 master-0 kubenswrapper[7484]: I0312 21:06:29.199433 7484 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-r6rcq_b71376ea-e248-48fc-b2c4-1de7236ddd31/cluster-autoscaler-operator/0.log" Mar 12 21:06:29.199757 master-0 kubenswrapper[7484]: I0312 21:06:29.199723 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-r6rcq" event={"ID":"b71376ea-e248-48fc-b2c4-1de7236ddd31","Type":"ContainerStarted","Data":"03339eff8ba321135d5ac05983c34838509850908e8f1f2338f0479b2160441b"} Mar 12 21:06:29.201442 master-0 kubenswrapper[7484]: I0312 21:06:29.201388 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-4zjqp" event={"ID":"135ec6f3-fbc0-4840-a4b1-c1124c705161","Type":"ContainerStarted","Data":"ad11a30f77a638c88343b8ba2f0ffe40e338c9fa424ecb4d4a928fab78e6bfa8"} Mar 12 21:06:29.203359 master-0 kubenswrapper[7484]: I0312 21:06:29.203332 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-69rp9_981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9/cluster-node-tuning-operator/1.log" Mar 12 21:06:29.203429 master-0 kubenswrapper[7484]: I0312 21:06:29.203382 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" event={"ID":"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9","Type":"ContainerStarted","Data":"d2aa68c155ceb89fce45527ca963689512aab84cbbc2ad0ca6a35210b7d8a217"} Mar 12 21:06:29.205304 master-0 kubenswrapper[7484]: I0312 21:06:29.205243 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-kf949" event={"ID":"2604b035-853c-42b7-a562-07d46178868a","Type":"ContainerStarted","Data":"1b59613d755edee98ed40adf1de50ead9fa59acde022f648e0234e657edab491"} Mar 12 21:06:29.206917 master-0 kubenswrapper[7484]: I0312 21:06:29.206866 7484 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5" event={"ID":"900228dd-2d21-4759-87da-b027b0134ad8","Type":"ContainerStarted","Data":"8c54ae92246758bc65eb4a5167d6e60ecae90eba00a74bd4d443c7c63c856d57"} Mar 12 21:06:29.208322 master-0 kubenswrapper[7484]: I0312 21:06:29.208290 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-vp2hs" event={"ID":"7623a5c6-47a9-4b75-bda8-c0a2d7c67272","Type":"ContainerStarted","Data":"1c16ca3652a7649cd9d913c3200390d79ca39fb2347275e518ed0819e88c512e"} Mar 12 21:06:29.210413 master-0 kubenswrapper[7484]: I0312 21:06:29.210378 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh" event={"ID":"226cb3a1-984f-4410-96e6-c007131dc074","Type":"ContainerStarted","Data":"6264b6e4a58a33b13513599310dd3c795f74679379bc1ba248c7473638b7822e"} Mar 12 21:06:29.212137 master-0 kubenswrapper[7484]: I0312 21:06:29.212101 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-7p8w8" event={"ID":"508cb83e-6f25-4235-8c56-b25b762ebcad","Type":"ContainerStarted","Data":"839eddcb31f783e2d90ccbd81282ffae82d5ea4a144a4572a85305d31e434ce1"} Mar 12 21:06:29.213512 master-0 kubenswrapper[7484]: I0312 21:06:29.213484 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-62t2f" event={"ID":"fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6","Type":"ContainerStarted","Data":"c599ddb0abed3c90568a952c5ea476704b28d4333d61469b30ac9b6154e2a72c"} Mar 12 21:06:29.214977 master-0 kubenswrapper[7484]: I0312 21:06:29.214947 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-sh67s_67e68ff0-f54d-4973-bbe7-ed43ce542bc0/machine-api-operator/0.log" Mar 12 21:06:29.215469 master-0 
kubenswrapper[7484]: I0312 21:06:29.215438 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-sh67s" event={"ID":"67e68ff0-f54d-4973-bbe7-ed43ce542bc0","Type":"ContainerStarted","Data":"6fcc96f728be31083bbcb91ab16d68944a362af6ea58861b68aaf15558965211"} Mar 12 21:06:29.217038 master-0 kubenswrapper[7484]: I0312 21:06:29.217010 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" event={"ID":"980191fe-c62c-4b9e-879c-38fa8ce0a58b","Type":"ContainerStarted","Data":"4da15c605fa1dc10da47b476c92fc8b171ea2b078eac80583a7b2a812d4d6d26"} Mar 12 21:06:29.217243 master-0 kubenswrapper[7484]: I0312 21:06:29.217217 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" Mar 12 21:06:29.219343 master-0 kubenswrapper[7484]: I0312 21:06:29.219081 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-ftxzs" event={"ID":"7f3afe47-c537-420c-b5be-1cad612e119d","Type":"ContainerStarted","Data":"3e741695a46ad3b9a5374020ab836a070f35f694af3dc465a71413a403bd6da5"} Mar 12 21:06:29.221640 master-0 kubenswrapper[7484]: I0312 21:06:29.221599 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-f62j6" event={"ID":"a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d","Type":"ContainerStarted","Data":"b73401025760d03f74ea648c7694a40e5d3f30be761b04777b7aedc811ae35bb"} Mar 12 21:06:29.414659 master-0 kubenswrapper[7484]: I0312 21:06:29.414612 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:06:29.414659 master-0 kubenswrapper[7484]: 
[-]has-synced failed: reason withheld Mar 12 21:06:29.414659 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:06:29.414659 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:06:29.414659 master-0 kubenswrapper[7484]: I0312 21:06:29.414663 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:06:29.799772 master-0 kubenswrapper[7484]: I0312 21:06:29.799693 7484 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:06:29.800463 master-0 kubenswrapper[7484]: I0312 21:06:29.800438 7484 scope.go:117] "RemoveContainer" containerID="a2a7bc20f4e9a2a1af8d7434056bb400f5ad20ab8f0474397aa76de25d0db770" Mar 12 21:06:29.800748 master-0 kubenswrapper[7484]: E0312 21:06:29.800700 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7678a2e61b792fe3be55b1c6f67b2aa2)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" Mar 12 21:06:30.414956 master-0 kubenswrapper[7484]: I0312 21:06:30.414802 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:06:30.414956 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:06:30.414956 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:06:30.414956 master-0 kubenswrapper[7484]: healthz 
check failed Mar 12 21:06:30.415404 master-0 kubenswrapper[7484]: I0312 21:06:30.414957 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:06:30.734531 master-0 kubenswrapper[7484]: I0312 21:06:30.734384 7484 scope.go:117] "RemoveContainer" containerID="b4eac54179aa0f6fee4bb1e73d72504459ad2137a7bd3a9e3938754da7f51c6d" Mar 12 21:06:30.734861 master-0 kubenswrapper[7484]: E0312 21:06:30.734776 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-8fk8w_openshift-cluster-storage-operator(d4a162d4-8086-4bcf-854d-7e6cd37fd4c7)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w" podUID="d4a162d4-8086-4bcf-854d-7e6cd37fd4c7" Mar 12 21:06:31.414310 master-0 kubenswrapper[7484]: I0312 21:06:31.414209 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:06:31.414310 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:06:31.414310 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:06:31.414310 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:06:31.414882 master-0 kubenswrapper[7484]: I0312 21:06:31.414323 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:06:32.415396 master-0 kubenswrapper[7484]: 
I0312 21:06:32.415306 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:06:32.415396 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:06:32.415396 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:06:32.415396 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:06:32.416367 master-0 kubenswrapper[7484]: I0312 21:06:32.415422 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:06:32.973742 master-0 kubenswrapper[7484]: I0312 21:06:32.973650 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" Mar 12 21:06:33.415436 master-0 kubenswrapper[7484]: I0312 21:06:33.415308 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:06:33.415436 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:06:33.415436 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:06:33.415436 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:06:33.415436 master-0 kubenswrapper[7484]: I0312 21:06:33.415413 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:06:34.413915 master-0 
kubenswrapper[7484]: I0312 21:06:34.413801 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:06:34.413915 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:06:34.413915 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:06:34.413915 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:06:34.414407 master-0 kubenswrapper[7484]: I0312 21:06:34.413950 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:06:35.414388 master-0 kubenswrapper[7484]: I0312 21:06:35.414301 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:06:35.414388 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:06:35.414388 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:06:35.414388 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:06:35.415357 master-0 kubenswrapper[7484]: I0312 21:06:35.414417 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:06:36.415019 master-0 kubenswrapper[7484]: I0312 21:06:36.414900 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:06:36.415019 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:06:36.415019 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:06:36.415019 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:06:36.415019 master-0 kubenswrapper[7484]: I0312 21:06:36.415007 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:06:37.414236 master-0 kubenswrapper[7484]: I0312 21:06:37.414174 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:06:37.414236 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:06:37.414236 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:06:37.414236 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:06:37.414769 master-0 kubenswrapper[7484]: I0312 21:06:37.414726 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:06:38.413714 master-0 kubenswrapper[7484]: I0312 21:06:38.413649 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:06:38.413714 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:06:38.413714 master-0 kubenswrapper[7484]: 
[+]process-running ok Mar 12 21:06:38.413714 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:06:38.414305 master-0 kubenswrapper[7484]: I0312 21:06:38.413732 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:06:38.732885 master-0 kubenswrapper[7484]: I0312 21:06:38.732752 7484 scope.go:117] "RemoveContainer" containerID="4c4d56e2fde6c2410a3aa723a3533a20727be585533619aed7037adf0a4a8960" Mar 12 21:06:38.733082 master-0 kubenswrapper[7484]: E0312 21:06:38.733000 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-qpf68_openshift-ingress-operator(2b71f537-1cc2-4645-8e50-23941635457c)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" podUID="2b71f537-1cc2-4645-8e50-23941635457c" Mar 12 21:06:39.413671 master-0 kubenswrapper[7484]: I0312 21:06:39.413620 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:06:39.413671 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:06:39.413671 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:06:39.413671 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:06:39.414244 master-0 kubenswrapper[7484]: I0312 21:06:39.413696 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 
12 21:06:40.414797 master-0 kubenswrapper[7484]: I0312 21:06:40.414712 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:06:40.414797 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:06:40.414797 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:06:40.414797 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:06:40.416056 master-0 kubenswrapper[7484]: I0312 21:06:40.414836 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:06:41.414798 master-0 kubenswrapper[7484]: I0312 21:06:41.414711 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:06:41.414798 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:06:41.414798 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:06:41.414798 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:06:41.414798 master-0 kubenswrapper[7484]: I0312 21:06:41.414834 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:06:41.733700 master-0 kubenswrapper[7484]: I0312 21:06:41.733466 7484 scope.go:117] "RemoveContainer" containerID="a2a7bc20f4e9a2a1af8d7434056bb400f5ad20ab8f0474397aa76de25d0db770" Mar 12 21:06:41.734011 master-0 
kubenswrapper[7484]: E0312 21:06:41.733973 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7678a2e61b792fe3be55b1c6f67b2aa2)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" Mar 12 21:06:42.414594 master-0 kubenswrapper[7484]: I0312 21:06:42.414528 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:06:42.414594 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:06:42.414594 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:06:42.414594 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:06:42.415599 master-0 kubenswrapper[7484]: I0312 21:06:42.414617 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:06:43.415019 master-0 kubenswrapper[7484]: I0312 21:06:43.414906 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:06:43.415019 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:06:43.415019 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:06:43.415019 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:06:43.416017 master-0 kubenswrapper[7484]: I0312 
21:06:43.415032 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:06:43.733730 master-0 kubenswrapper[7484]: I0312 21:06:43.733566 7484 scope.go:117] "RemoveContainer" containerID="b4eac54179aa0f6fee4bb1e73d72504459ad2137a7bd3a9e3938754da7f51c6d" Mar 12 21:06:43.733997 master-0 kubenswrapper[7484]: E0312 21:06:43.733932 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-8fk8w_openshift-cluster-storage-operator(d4a162d4-8086-4bcf-854d-7e6cd37fd4c7)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w" podUID="d4a162d4-8086-4bcf-854d-7e6cd37fd4c7" Mar 12 21:06:44.415018 master-0 kubenswrapper[7484]: I0312 21:06:44.414901 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:06:44.415018 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:06:44.415018 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:06:44.415018 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:06:44.416128 master-0 kubenswrapper[7484]: I0312 21:06:44.415026 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:06:45.415561 master-0 kubenswrapper[7484]: I0312 21:06:45.415443 7484 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:06:45.415561 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:06:45.415561 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:06:45.415561 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:06:45.416511 master-0 kubenswrapper[7484]: I0312 21:06:45.415640 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:06:46.414200 master-0 kubenswrapper[7484]: I0312 21:06:46.414091 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:06:46.414200 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:06:46.414200 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:06:46.414200 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:06:46.415004 master-0 kubenswrapper[7484]: I0312 21:06:46.414219 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:06:47.413711 master-0 kubenswrapper[7484]: I0312 21:06:47.413593 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 
21:06:47.413711 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:06:47.413711 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:06:47.413711 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:06:47.413711 master-0 kubenswrapper[7484]: I0312 21:06:47.413704 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:06:48.414282 master-0 kubenswrapper[7484]: I0312 21:06:48.414179 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:06:48.414282 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:06:48.414282 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:06:48.414282 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:06:48.415171 master-0 kubenswrapper[7484]: I0312 21:06:48.414304 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:06:49.413413 master-0 kubenswrapper[7484]: I0312 21:06:49.413294 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:06:49.413413 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:06:49.413413 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:06:49.413413 master-0 kubenswrapper[7484]: healthz 
check failed Mar 12 21:06:49.413841 master-0 kubenswrapper[7484]: I0312 21:06:49.413433 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:06:49.734232 master-0 kubenswrapper[7484]: I0312 21:06:49.734124 7484 scope.go:117] "RemoveContainer" containerID="4c4d56e2fde6c2410a3aa723a3533a20727be585533619aed7037adf0a4a8960" Mar 12 21:06:49.734678 master-0 kubenswrapper[7484]: E0312 21:06:49.734551 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-qpf68_openshift-ingress-operator(2b71f537-1cc2-4645-8e50-23941635457c)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" podUID="2b71f537-1cc2-4645-8e50-23941635457c" Mar 12 21:06:50.414938 master-0 kubenswrapper[7484]: I0312 21:06:50.414866 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:06:50.414938 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:06:50.414938 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:06:50.414938 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:06:50.415236 master-0 kubenswrapper[7484]: I0312 21:06:50.414949 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:06:51.413927 master-0 kubenswrapper[7484]: I0312 21:06:51.413829 7484 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:06:51.413927 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:06:51.413927 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:06:51.413927 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:06:51.414710 master-0 kubenswrapper[7484]: I0312 21:06:51.413952 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:06:52.415070 master-0 kubenswrapper[7484]: I0312 21:06:52.414962 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:06:52.415070 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:06:52.415070 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:06:52.415070 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:06:52.416409 master-0 kubenswrapper[7484]: I0312 21:06:52.415072 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:06:52.734123 master-0 kubenswrapper[7484]: I0312 21:06:52.733946 7484 scope.go:117] "RemoveContainer" containerID="a2a7bc20f4e9a2a1af8d7434056bb400f5ad20ab8f0474397aa76de25d0db770" Mar 12 21:06:52.734576 master-0 kubenswrapper[7484]: E0312 21:06:52.734515 7484 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7678a2e61b792fe3be55b1c6f67b2aa2)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" Mar 12 21:06:53.414965 master-0 kubenswrapper[7484]: I0312 21:06:53.414887 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:06:53.414965 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:06:53.414965 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:06:53.414965 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:06:53.415969 master-0 kubenswrapper[7484]: I0312 21:06:53.415006 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:06:54.414887 master-0 kubenswrapper[7484]: I0312 21:06:54.414784 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:06:54.414887 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:06:54.414887 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:06:54.414887 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:06:54.414887 master-0 kubenswrapper[7484]: I0312 21:06:54.414873 7484 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:06:55.415300 master-0 kubenswrapper[7484]: I0312 21:06:55.415224 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:06:55.415300 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:06:55.415300 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:06:55.415300 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:06:55.416052 master-0 kubenswrapper[7484]: I0312 21:06:55.415309 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:06:56.415563 master-0 kubenswrapper[7484]: I0312 21:06:56.415492 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:06:56.415563 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:06:56.415563 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:06:56.415563 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:06:56.416960 master-0 kubenswrapper[7484]: I0312 21:06:56.416901 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:06:57.414861 
master-0 kubenswrapper[7484]: I0312 21:06:57.414775 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:06:57.414861 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:06:57.414861 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:06:57.414861 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:06:57.415498 master-0 kubenswrapper[7484]: I0312 21:06:57.415411 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:06:57.740367 master-0 kubenswrapper[7484]: I0312 21:06:57.740183 7484 scope.go:117] "RemoveContainer" containerID="b4eac54179aa0f6fee4bb1e73d72504459ad2137a7bd3a9e3938754da7f51c6d"
Mar 12 21:06:57.742315 master-0 kubenswrapper[7484]: E0312 21:06:57.742222 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-8fk8w_openshift-cluster-storage-operator(d4a162d4-8086-4bcf-854d-7e6cd37fd4c7)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w" podUID="d4a162d4-8086-4bcf-854d-7e6cd37fd4c7"
Mar 12 21:06:58.414396 master-0 kubenswrapper[7484]: I0312 21:06:58.414309 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:06:58.414396 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:06:58.414396 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:06:58.414396 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:06:58.415115 master-0 kubenswrapper[7484]: I0312 21:06:58.414411 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:06:59.413966 master-0 kubenswrapper[7484]: I0312 21:06:59.413882 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:06:59.413966 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:06:59.413966 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:06:59.413966 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:06:59.415110 master-0 kubenswrapper[7484]: I0312 21:06:59.413980 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:07:00.414794 master-0 kubenswrapper[7484]: I0312 21:07:00.414674 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:07:00.414794 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:07:00.414794 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:07:00.414794 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:07:00.414794 master-0 kubenswrapper[7484]: I0312 21:07:00.414771 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:07:01.414680 master-0 kubenswrapper[7484]: I0312 21:07:01.414583 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:07:01.414680 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:07:01.414680 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:07:01.414680 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:07:01.418186 master-0 kubenswrapper[7484]: I0312 21:07:01.414689 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:07:02.414025 master-0 kubenswrapper[7484]: I0312 21:07:02.413907 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:07:02.414025 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:07:02.414025 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:07:02.414025 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:07:02.414025 master-0 kubenswrapper[7484]: I0312 21:07:02.414016 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:07:02.734283 master-0 kubenswrapper[7484]: I0312 21:07:02.734056 7484 scope.go:117] "RemoveContainer" containerID="4c4d56e2fde6c2410a3aa723a3533a20727be585533619aed7037adf0a4a8960"
Mar 12 21:07:02.735192 master-0 kubenswrapper[7484]: E0312 21:07:02.734531 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-qpf68_openshift-ingress-operator(2b71f537-1cc2-4645-8e50-23941635457c)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" podUID="2b71f537-1cc2-4645-8e50-23941635457c"
Mar 12 21:07:03.413608 master-0 kubenswrapper[7484]: I0312 21:07:03.413550 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:07:03.413608 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:07:03.413608 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:07:03.413608 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:07:03.413916 master-0 kubenswrapper[7484]: I0312 21:07:03.413642 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:07:04.414060 master-0 kubenswrapper[7484]: I0312 21:07:04.414001 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:07:04.414060 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:07:04.414060 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:07:04.414060 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:07:04.415233 master-0 kubenswrapper[7484]: I0312 21:07:04.414971 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:07:05.415123 master-0 kubenswrapper[7484]: I0312 21:07:05.415023 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:07:05.415123 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:07:05.415123 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:07:05.415123 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:07:05.416165 master-0 kubenswrapper[7484]: I0312 21:07:05.415133 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:07:05.734924 master-0 kubenswrapper[7484]: I0312 21:07:05.734731 7484 scope.go:117] "RemoveContainer" containerID="a2a7bc20f4e9a2a1af8d7434056bb400f5ad20ab8f0474397aa76de25d0db770"
Mar 12 21:07:05.735128 master-0 kubenswrapper[7484]: E0312 21:07:05.734986 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7678a2e61b792fe3be55b1c6f67b2aa2)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2"
Mar 12 21:07:06.414346 master-0 kubenswrapper[7484]: I0312 21:07:06.414290 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:07:06.414346 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:07:06.414346 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:07:06.414346 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:07:06.414794 master-0 kubenswrapper[7484]: I0312 21:07:06.414352 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:07:07.413930 master-0 kubenswrapper[7484]: I0312 21:07:07.413862 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:07:07.413930 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:07:07.413930 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:07:07.413930 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:07:07.414478 master-0 kubenswrapper[7484]: I0312 21:07:07.413961 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:07:08.413932 master-0 kubenswrapper[7484]: I0312 21:07:08.413789 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:07:08.413932 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:07:08.413932 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:07:08.413932 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:07:08.413932 master-0 kubenswrapper[7484]: I0312 21:07:08.413931 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:07:09.414078 master-0 kubenswrapper[7484]: I0312 21:07:09.413998 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:07:09.414078 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:07:09.414078 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:07:09.414078 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:07:09.415776 master-0 kubenswrapper[7484]: I0312 21:07:09.414100 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:07:10.414284 master-0 kubenswrapper[7484]: I0312 21:07:10.414219 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:07:10.414284 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:07:10.414284 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:07:10.414284 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:07:10.415801 master-0 kubenswrapper[7484]: I0312 21:07:10.414960 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:07:10.734311 master-0 kubenswrapper[7484]: I0312 21:07:10.734141 7484 scope.go:117] "RemoveContainer" containerID="b4eac54179aa0f6fee4bb1e73d72504459ad2137a7bd3a9e3938754da7f51c6d"
Mar 12 21:07:10.734575 master-0 kubenswrapper[7484]: E0312 21:07:10.734530 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-8fk8w_openshift-cluster-storage-operator(d4a162d4-8086-4bcf-854d-7e6cd37fd4c7)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w" podUID="d4a162d4-8086-4bcf-854d-7e6cd37fd4c7"
Mar 12 21:07:11.413625 master-0 kubenswrapper[7484]: I0312 21:07:11.413561 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:07:11.413625 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:07:11.413625 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:07:11.413625 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:07:11.413987 master-0 kubenswrapper[7484]: I0312 21:07:11.413645 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:07:12.414617 master-0 kubenswrapper[7484]: I0312 21:07:12.414525 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:07:12.414617 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:07:12.414617 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:07:12.414617 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:07:12.415631 master-0 kubenswrapper[7484]: I0312 21:07:12.414641 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:07:13.419677 master-0 kubenswrapper[7484]: I0312 21:07:13.419579 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:07:13.419677 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:07:13.419677 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:07:13.419677 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:07:13.420922 master-0 kubenswrapper[7484]: I0312 21:07:13.419715 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:07:14.414544 master-0 kubenswrapper[7484]: I0312 21:07:14.414456 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:07:14.414544 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:07:14.414544 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:07:14.414544 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:07:14.415048 master-0 kubenswrapper[7484]: I0312 21:07:14.414544 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:07:14.733710 master-0 kubenswrapper[7484]: I0312 21:07:14.733565 7484 scope.go:117] "RemoveContainer" containerID="4c4d56e2fde6c2410a3aa723a3533a20727be585533619aed7037adf0a4a8960"
Mar 12 21:07:15.414238 master-0 kubenswrapper[7484]: I0312 21:07:15.414138 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:07:15.414238 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:07:15.414238 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:07:15.414238 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:07:15.414562 master-0 kubenswrapper[7484]: I0312 21:07:15.414270 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:07:15.581720 master-0 kubenswrapper[7484]: I0312 21:07:15.581671 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-qpf68_2b71f537-1cc2-4645-8e50-23941635457c/ingress-operator/4.log"
Mar 12 21:07:15.582182 master-0 kubenswrapper[7484]: I0312 21:07:15.582141 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" event={"ID":"2b71f537-1cc2-4645-8e50-23941635457c","Type":"ContainerStarted","Data":"0945d83b7b4ec1d53379a04e921a00dfba9574dfef02f9870694a86c12d28e6c"}
Mar 12 21:07:16.414605 master-0 kubenswrapper[7484]: I0312 21:07:16.414496 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:07:16.414605 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:07:16.414605 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:07:16.414605 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:07:16.415878 master-0 kubenswrapper[7484]: I0312 21:07:16.414604 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:07:17.413760 master-0 kubenswrapper[7484]: I0312 21:07:17.413670 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:07:17.413760 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:07:17.413760 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:07:17.413760 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:07:17.414241 master-0 kubenswrapper[7484]: I0312 21:07:17.413786 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:07:18.413673 master-0 kubenswrapper[7484]: I0312 21:07:18.413604 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:07:18.413673 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:07:18.413673 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:07:18.413673 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:07:18.414279 master-0 kubenswrapper[7484]: I0312 21:07:18.413700 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:07:18.745929 master-0 kubenswrapper[7484]: I0312 21:07:18.745492 7484 scope.go:117] "RemoveContainer" containerID="a2a7bc20f4e9a2a1af8d7434056bb400f5ad20ab8f0474397aa76de25d0db770"
Mar 12 21:07:18.746166 master-0 kubenswrapper[7484]: E0312 21:07:18.745982 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7678a2e61b792fe3be55b1c6f67b2aa2)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2"
Mar 12 21:07:19.414441 master-0 kubenswrapper[7484]: I0312 21:07:19.414348 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:07:19.414441 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:07:19.414441 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:07:19.414441 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:07:19.415499 master-0 kubenswrapper[7484]: I0312 21:07:19.414442 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:07:20.414921 master-0 kubenswrapper[7484]: I0312 21:07:20.414847 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:07:20.414921 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:07:20.414921 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:07:20.414921 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:07:20.415599 master-0 kubenswrapper[7484]: I0312 21:07:20.414939 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:07:21.414962 master-0 kubenswrapper[7484]: I0312 21:07:21.414879 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:07:21.414962 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:07:21.414962 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:07:21.414962 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:07:21.415988 master-0 kubenswrapper[7484]: I0312 21:07:21.414989 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:07:22.414376 master-0 kubenswrapper[7484]: I0312 21:07:22.414266 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:07:22.414376 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:07:22.414376 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:07:22.414376 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:07:22.415937 master-0 kubenswrapper[7484]: I0312 21:07:22.414403 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:07:23.413703 master-0 kubenswrapper[7484]: I0312 21:07:23.413633 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:07:23.413703 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:07:23.413703 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:07:23.413703 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:07:23.414496 master-0 kubenswrapper[7484]: I0312 21:07:23.414447 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:07:24.413628 master-0 kubenswrapper[7484]: I0312 21:07:24.413535 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:07:24.413628 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:07:24.413628 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:07:24.413628 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:07:24.414254 master-0 kubenswrapper[7484]: I0312 21:07:24.413671 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:07:25.415504 master-0 kubenswrapper[7484]: I0312 21:07:25.415390 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:07:25.415504 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:07:25.415504 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:07:25.415504 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:07:25.416468 master-0 kubenswrapper[7484]: I0312 21:07:25.415557 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:07:25.734490 master-0 kubenswrapper[7484]: I0312 21:07:25.734334 7484 scope.go:117] "RemoveContainer" containerID="b4eac54179aa0f6fee4bb1e73d72504459ad2137a7bd3a9e3938754da7f51c6d"
Mar 12 21:07:26.414782 master-0 kubenswrapper[7484]: I0312 21:07:26.414681 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:07:26.414782 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:07:26.414782 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:07:26.414782 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:07:26.415300 master-0 kubenswrapper[7484]: I0312 21:07:26.414800 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:07:26.673374 master-0 kubenswrapper[7484]: I0312 21:07:26.673219 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-8fk8w_d4a162d4-8086-4bcf-854d-7e6cd37fd4c7/snapshot-controller/4.log"
Mar 12 21:07:26.673374 master-0 kubenswrapper[7484]: I0312 21:07:26.673288 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w" event={"ID":"d4a162d4-8086-4bcf-854d-7e6cd37fd4c7","Type":"ContainerStarted","Data":"87c73145f292a7af3c504057b34ef560bad95cf33899eb0355e0216d8eec9fe3"}
Mar 12 21:07:27.414339 master-0 kubenswrapper[7484]: I0312 21:07:27.414241 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:07:27.414339 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:07:27.414339 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:07:27.414339 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:07:27.414339 master-0 kubenswrapper[7484]: I0312 21:07:27.414333 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:07:28.414051 master-0 kubenswrapper[7484]: I0312 21:07:28.413972 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:07:28.414051 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:07:28.414051 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:07:28.414051 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:07:28.415212 master-0 kubenswrapper[7484]: I0312 21:07:28.414069 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:07:29.414691 master-0 kubenswrapper[7484]: I0312 21:07:29.414579 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:07:29.414691 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:07:29.414691 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:07:29.414691 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:07:29.415957 master-0 kubenswrapper[7484]: I0312 21:07:29.414699 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:07:30.414244 master-0 kubenswrapper[7484]: I0312 21:07:30.414154 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:07:30.414244 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:07:30.414244 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:07:30.414244 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:07:30.414722 master-0 kubenswrapper[7484]: I0312 21:07:30.414253 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:07:31.414457 master-0 kubenswrapper[7484]: I0312 21:07:31.414356 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:07:31.414457 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:07:31.414457 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:07:31.414457 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:07:31.414457 master-0 kubenswrapper[7484]: I0312 21:07:31.414444 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:07:32.414051 master-0 kubenswrapper[7484]: I0312 21:07:32.413942 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:07:32.414051 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld
Mar 12 21:07:32.414051 master-0 kubenswrapper[7484]: [+]process-running ok
Mar 12 21:07:32.414051 master-0 kubenswrapper[7484]: healthz check failed
Mar 12 21:07:32.414051 master-0 kubenswrapper[7484]: I0312 21:07:32.414035 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:07:32.414560 master-0 kubenswrapper[7484]: I0312 21:07:32.414116 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57"
Mar 12 21:07:32.415249 master-0 kubenswrapper[7484]: I0312 21:07:32.414974 7484 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"e2916ee608198e843f503ac1b99774e97d332ea70158688e35693b97b4ee8573"} pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" containerMessage="Container router failed startup probe, will be restarted"
Mar 12 21:07:32.415249 master-0 kubenswrapper[7484]: I0312 21:07:32.415037 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" containerID="cri-o://e2916ee608198e843f503ac1b99774e97d332ea70158688e35693b97b4ee8573" gracePeriod=3600
Mar 12 21:07:33.734977 master-0 kubenswrapper[7484]: I0312 21:07:33.734914 7484 scope.go:117] "RemoveContainer" containerID="a2a7bc20f4e9a2a1af8d7434056bb400f5ad20ab8f0474397aa76de25d0db770"
Mar 12 21:07:33.735953 master-0 kubenswrapper[7484]: E0312 21:07:33.735247 7484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7678a2e61b792fe3be55b1c6f67b2aa2)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2"
Mar 12 21:07:36.020396 master-0 kubenswrapper[7484]: I0312 21:07:36.020339 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Mar 12 21:07:36.020888 master-0 kubenswrapper[7484]: E0312 21:07:36.020681 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237e5a97-fb81-4609-8538-c55a8e2db411" containerName="installer"
Mar 12 21:07:36.020888 master-0 kubenswrapper[7484]: I0312 21:07:36.020697 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="237e5a97-fb81-4609-8538-c55a8e2db411" containerName="installer"
Mar 12 21:07:36.020888 master-0 kubenswrapper[7484]: I0312 21:07:36.020866 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="237e5a97-fb81-4609-8538-c55a8e2db411" containerName="installer"
Mar 12 21:07:36.026052 master-0 kubenswrapper[7484]: I0312 21:07:36.025987 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 12 21:07:36.028728 master-0 kubenswrapper[7484]: I0312 21:07:36.028691 7484 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 12 21:07:36.035414 master-0 kubenswrapper[7484]: I0312 21:07:36.035369 7484 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-v74cb"
Mar 12 21:07:36.040712 master-0 kubenswrapper[7484]: I0312 21:07:36.040675 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Mar 12 21:07:36.170304 master-0 kubenswrapper[7484]: I0312 21:07:36.170186 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4b03064-f24f-4c9f-94c4-9c9511cc5bb3-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"f4b03064-f24f-4c9f-94c4-9c9511cc5bb3\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 12 21:07:36.170521 master-0 kubenswrapper[7484]: I0312 21:07:36.170414 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f4b03064-f24f-4c9f-94c4-9c9511cc5bb3-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"f4b03064-f24f-4c9f-94c4-9c9511cc5bb3\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 12 21:07:36.170668 master-0 kubenswrapper[7484]: I0312 21:07:36.170613 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4b03064-f24f-4c9f-94c4-9c9511cc5bb3-kubelet-dir\") pod \"installer-1-retry-1-master-0\"
(UID: \"f4b03064-f24f-4c9f-94c4-9c9511cc5bb3\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 12 21:07:36.271615 master-0 kubenswrapper[7484]: I0312 21:07:36.271459 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4b03064-f24f-4c9f-94c4-9c9511cc5bb3-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"f4b03064-f24f-4c9f-94c4-9c9511cc5bb3\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 12 21:07:36.271615 master-0 kubenswrapper[7484]: I0312 21:07:36.271533 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f4b03064-f24f-4c9f-94c4-9c9511cc5bb3-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"f4b03064-f24f-4c9f-94c4-9c9511cc5bb3\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 12 21:07:36.271615 master-0 kubenswrapper[7484]: I0312 21:07:36.271579 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4b03064-f24f-4c9f-94c4-9c9511cc5bb3-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"f4b03064-f24f-4c9f-94c4-9c9511cc5bb3\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 12 21:07:36.271990 master-0 kubenswrapper[7484]: I0312 21:07:36.271673 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f4b03064-f24f-4c9f-94c4-9c9511cc5bb3-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"f4b03064-f24f-4c9f-94c4-9c9511cc5bb3\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 12 21:07:36.271990 master-0 kubenswrapper[7484]: I0312 21:07:36.271718 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4b03064-f24f-4c9f-94c4-9c9511cc5bb3-kubelet-dir\") pod 
\"installer-1-retry-1-master-0\" (UID: \"f4b03064-f24f-4c9f-94c4-9c9511cc5bb3\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 12 21:07:36.294926 master-0 kubenswrapper[7484]: I0312 21:07:36.294863 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4b03064-f24f-4c9f-94c4-9c9511cc5bb3-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"f4b03064-f24f-4c9f-94c4-9c9511cc5bb3\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 12 21:07:36.394716 master-0 kubenswrapper[7484]: I0312 21:07:36.394607 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 12 21:07:36.942264 master-0 kubenswrapper[7484]: I0312 21:07:36.942193 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 12 21:07:36.950231 master-0 kubenswrapper[7484]: W0312 21:07:36.950133 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podf4b03064_f24f_4c9f_94c4_9c9511cc5bb3.slice/crio-2bda4bdbafeda5c16522bec1d5e271b94c699e773fb97f90f5231952198aba02 WatchSource:0}: Error finding container 2bda4bdbafeda5c16522bec1d5e271b94c699e773fb97f90f5231952198aba02: Status 404 returned error can't find the container with id 2bda4bdbafeda5c16522bec1d5e271b94c699e773fb97f90f5231952198aba02 Mar 12 21:07:37.761547 master-0 kubenswrapper[7484]: I0312 21:07:37.761484 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"f4b03064-f24f-4c9f-94c4-9c9511cc5bb3","Type":"ContainerStarted","Data":"ae108b50148c68609153f6feca0144eb4b98ae1e4db38a6ba5c90e164073f594"} Mar 12 21:07:37.761547 master-0 kubenswrapper[7484]: I0312 21:07:37.761545 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"f4b03064-f24f-4c9f-94c4-9c9511cc5bb3","Type":"ContainerStarted","Data":"2bda4bdbafeda5c16522bec1d5e271b94c699e773fb97f90f5231952198aba02"} Mar 12 21:07:37.809896 master-0 kubenswrapper[7484]: I0312 21:07:37.809747 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podStartSLOduration=1.80972225 podStartE2EDuration="1.80972225s" podCreationTimestamp="2026-03-12 21:07:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:07:37.801379339 +0000 UTC m=+1070.286648161" watchObservedRunningTime="2026-03-12 21:07:37.80972225 +0000 UTC m=+1070.294991062" Mar 12 21:07:42.227107 master-0 kubenswrapper[7484]: I0312 21:07:42.227063 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 12 21:07:42.227994 master-0 kubenswrapper[7484]: I0312 21:07:42.227963 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podUID="f4b03064-f24f-4c9f-94c4-9c9511cc5bb3" containerName="installer" containerID="cri-o://ae108b50148c68609153f6feca0144eb4b98ae1e4db38a6ba5c90e164073f594" gracePeriod=30 Mar 12 21:07:45.742729 master-0 kubenswrapper[7484]: I0312 21:07:45.734072 7484 scope.go:117] "RemoveContainer" containerID="a2a7bc20f4e9a2a1af8d7434056bb400f5ad20ab8f0474397aa76de25d0db770" Mar 12 21:07:46.811094 master-0 kubenswrapper[7484]: I0312 21:07:46.811034 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/cluster-policy-controller/5.log" Mar 12 21:07:46.814839 master-0 kubenswrapper[7484]: I0312 21:07:46.812328 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7678a2e61b792fe3be55b1c6f67b2aa2","Type":"ContainerStarted","Data":"b626b2974550fdcabce6b08a32cc3b1da47078dee2fd1671f52a14cd3557b052"} Mar 12 21:07:46.826406 master-0 kubenswrapper[7484]: I0312 21:07:46.825716 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 12 21:07:46.827115 master-0 kubenswrapper[7484]: I0312 21:07:46.827077 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 12 21:07:46.842504 master-0 kubenswrapper[7484]: I0312 21:07:46.842427 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 12 21:07:47.018437 master-0 kubenswrapper[7484]: I0312 21:07:47.018340 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3efb85a2-ccc4-4ea5-825f-77c87b159570-var-lock\") pod \"installer-2-master-0\" (UID: \"3efb85a2-ccc4-4ea5-825f-77c87b159570\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 12 21:07:47.018437 master-0 kubenswrapper[7484]: I0312 21:07:47.018426 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3efb85a2-ccc4-4ea5-825f-77c87b159570-kube-api-access\") pod \"installer-2-master-0\" (UID: \"3efb85a2-ccc4-4ea5-825f-77c87b159570\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 12 21:07:47.018885 master-0 kubenswrapper[7484]: I0312 21:07:47.018464 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3efb85a2-ccc4-4ea5-825f-77c87b159570-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"3efb85a2-ccc4-4ea5-825f-77c87b159570\") " 
pod="openshift-kube-apiserver/installer-2-master-0" Mar 12 21:07:47.120393 master-0 kubenswrapper[7484]: I0312 21:07:47.120289 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3efb85a2-ccc4-4ea5-825f-77c87b159570-var-lock\") pod \"installer-2-master-0\" (UID: \"3efb85a2-ccc4-4ea5-825f-77c87b159570\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 12 21:07:47.120653 master-0 kubenswrapper[7484]: I0312 21:07:47.120422 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3efb85a2-ccc4-4ea5-825f-77c87b159570-var-lock\") pod \"installer-2-master-0\" (UID: \"3efb85a2-ccc4-4ea5-825f-77c87b159570\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 12 21:07:47.120653 master-0 kubenswrapper[7484]: I0312 21:07:47.120494 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3efb85a2-ccc4-4ea5-825f-77c87b159570-kube-api-access\") pod \"installer-2-master-0\" (UID: \"3efb85a2-ccc4-4ea5-825f-77c87b159570\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 12 21:07:47.120831 master-0 kubenswrapper[7484]: I0312 21:07:47.120688 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3efb85a2-ccc4-4ea5-825f-77c87b159570-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"3efb85a2-ccc4-4ea5-825f-77c87b159570\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 12 21:07:47.120976 master-0 kubenswrapper[7484]: I0312 21:07:47.120901 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3efb85a2-ccc4-4ea5-825f-77c87b159570-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"3efb85a2-ccc4-4ea5-825f-77c87b159570\") " 
pod="openshift-kube-apiserver/installer-2-master-0" Mar 12 21:07:47.141276 master-0 kubenswrapper[7484]: I0312 21:07:47.141196 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3efb85a2-ccc4-4ea5-825f-77c87b159570-kube-api-access\") pod \"installer-2-master-0\" (UID: \"3efb85a2-ccc4-4ea5-825f-77c87b159570\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 12 21:07:47.150294 master-0 kubenswrapper[7484]: I0312 21:07:47.150169 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 12 21:07:47.664072 master-0 kubenswrapper[7484]: I0312 21:07:47.664015 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 12 21:07:47.821891 master-0 kubenswrapper[7484]: I0312 21:07:47.821740 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"3efb85a2-ccc4-4ea5-825f-77c87b159570","Type":"ContainerStarted","Data":"ef7c840ed085e9e893dbf47bd5566d4be5a4188fd734337f1faa37ede7e21449"} Mar 12 21:07:48.830676 master-0 kubenswrapper[7484]: I0312 21:07:48.830582 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"3efb85a2-ccc4-4ea5-825f-77c87b159570","Type":"ContainerStarted","Data":"d10b4d33d8806216f2bc9d1e4e1c43e227650ef29b7d4be29ada86ef65327f60"} Mar 12 21:07:48.859447 master-0 kubenswrapper[7484]: I0312 21:07:48.859325 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-2-master-0" podStartSLOduration=2.859294294 podStartE2EDuration="2.859294294s" podCreationTimestamp="2026-03-12 21:07:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:07:48.853562016 +0000 UTC 
m=+1081.338830838" watchObservedRunningTime="2026-03-12 21:07:48.859294294 +0000 UTC m=+1081.344563136" Mar 12 21:07:49.771303 master-0 kubenswrapper[7484]: I0312 21:07:49.771220 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:07:49.771303 master-0 kubenswrapper[7484]: I0312 21:07:49.771279 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:07:49.781636 master-0 kubenswrapper[7484]: I0312 21:07:49.781594 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:07:59.782336 master-0 kubenswrapper[7484]: I0312 21:07:59.782248 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:08:01.823741 master-0 kubenswrapper[7484]: I0312 21:08:01.823681 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 12 21:08:01.824776 master-0 kubenswrapper[7484]: I0312 21:08:01.824713 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-2-master-0" podUID="3efb85a2-ccc4-4ea5-825f-77c87b159570" containerName="installer" containerID="cri-o://d10b4d33d8806216f2bc9d1e4e1c43e227650ef29b7d4be29ada86ef65327f60" gracePeriod=30 Mar 12 21:08:02.316860 master-0 kubenswrapper[7484]: I0312 21:08:02.316746 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_3efb85a2-ccc4-4ea5-825f-77c87b159570/installer/0.log" Mar 12 21:08:02.317115 master-0 kubenswrapper[7484]: I0312 21:08:02.316888 7484 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 12 21:08:02.486000 master-0 kubenswrapper[7484]: I0312 21:08:02.485867 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3efb85a2-ccc4-4ea5-825f-77c87b159570-kube-api-access\") pod \"3efb85a2-ccc4-4ea5-825f-77c87b159570\" (UID: \"3efb85a2-ccc4-4ea5-825f-77c87b159570\") " Mar 12 21:08:02.486432 master-0 kubenswrapper[7484]: I0312 21:08:02.486405 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3efb85a2-ccc4-4ea5-825f-77c87b159570-var-lock\") pod \"3efb85a2-ccc4-4ea5-825f-77c87b159570\" (UID: \"3efb85a2-ccc4-4ea5-825f-77c87b159570\") " Mar 12 21:08:02.486620 master-0 kubenswrapper[7484]: I0312 21:08:02.486511 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3efb85a2-ccc4-4ea5-825f-77c87b159570-var-lock" (OuterVolumeSpecName: "var-lock") pod "3efb85a2-ccc4-4ea5-825f-77c87b159570" (UID: "3efb85a2-ccc4-4ea5-825f-77c87b159570"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:08:02.486771 master-0 kubenswrapper[7484]: I0312 21:08:02.486746 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3efb85a2-ccc4-4ea5-825f-77c87b159570-kubelet-dir\") pod \"3efb85a2-ccc4-4ea5-825f-77c87b159570\" (UID: \"3efb85a2-ccc4-4ea5-825f-77c87b159570\") " Mar 12 21:08:02.487105 master-0 kubenswrapper[7484]: I0312 21:08:02.486750 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3efb85a2-ccc4-4ea5-825f-77c87b159570-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3efb85a2-ccc4-4ea5-825f-77c87b159570" (UID: "3efb85a2-ccc4-4ea5-825f-77c87b159570"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:08:02.487436 master-0 kubenswrapper[7484]: I0312 21:08:02.487408 7484 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3efb85a2-ccc4-4ea5-825f-77c87b159570-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 21:08:02.487575 master-0 kubenswrapper[7484]: I0312 21:08:02.487552 7484 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3efb85a2-ccc4-4ea5-825f-77c87b159570-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 21:08:02.490364 master-0 kubenswrapper[7484]: I0312 21:08:02.490310 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3efb85a2-ccc4-4ea5-825f-77c87b159570-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3efb85a2-ccc4-4ea5-825f-77c87b159570" (UID: "3efb85a2-ccc4-4ea5-825f-77c87b159570"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:08:02.588656 master-0 kubenswrapper[7484]: I0312 21:08:02.588616 7484 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3efb85a2-ccc4-4ea5-825f-77c87b159570-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 21:08:02.947077 master-0 kubenswrapper[7484]: I0312 21:08:02.947012 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_3efb85a2-ccc4-4ea5-825f-77c87b159570/installer/0.log" Mar 12 21:08:02.947888 master-0 kubenswrapper[7484]: I0312 21:08:02.947092 7484 generic.go:334] "Generic (PLEG): container finished" podID="3efb85a2-ccc4-4ea5-825f-77c87b159570" containerID="d10b4d33d8806216f2bc9d1e4e1c43e227650ef29b7d4be29ada86ef65327f60" exitCode=1 Mar 12 21:08:02.947888 master-0 kubenswrapper[7484]: I0312 21:08:02.947145 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"3efb85a2-ccc4-4ea5-825f-77c87b159570","Type":"ContainerDied","Data":"d10b4d33d8806216f2bc9d1e4e1c43e227650ef29b7d4be29ada86ef65327f60"} Mar 12 21:08:02.947888 master-0 kubenswrapper[7484]: I0312 21:08:02.947212 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"3efb85a2-ccc4-4ea5-825f-77c87b159570","Type":"ContainerDied","Data":"ef7c840ed085e9e893dbf47bd5566d4be5a4188fd734337f1faa37ede7e21449"} Mar 12 21:08:02.947888 master-0 kubenswrapper[7484]: I0312 21:08:02.947255 7484 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 12 21:08:02.947888 master-0 kubenswrapper[7484]: I0312 21:08:02.947268 7484 scope.go:117] "RemoveContainer" containerID="d10b4d33d8806216f2bc9d1e4e1c43e227650ef29b7d4be29ada86ef65327f60" Mar 12 21:08:02.976025 master-0 kubenswrapper[7484]: I0312 21:08:02.975967 7484 scope.go:117] "RemoveContainer" containerID="d10b4d33d8806216f2bc9d1e4e1c43e227650ef29b7d4be29ada86ef65327f60" Mar 12 21:08:02.976782 master-0 kubenswrapper[7484]: E0312 21:08:02.976714 7484 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d10b4d33d8806216f2bc9d1e4e1c43e227650ef29b7d4be29ada86ef65327f60\": container with ID starting with d10b4d33d8806216f2bc9d1e4e1c43e227650ef29b7d4be29ada86ef65327f60 not found: ID does not exist" containerID="d10b4d33d8806216f2bc9d1e4e1c43e227650ef29b7d4be29ada86ef65327f60" Mar 12 21:08:02.977048 master-0 kubenswrapper[7484]: I0312 21:08:02.976792 7484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d10b4d33d8806216f2bc9d1e4e1c43e227650ef29b7d4be29ada86ef65327f60"} err="failed to get container status \"d10b4d33d8806216f2bc9d1e4e1c43e227650ef29b7d4be29ada86ef65327f60\": rpc error: code = NotFound desc = could not find container \"d10b4d33d8806216f2bc9d1e4e1c43e227650ef29b7d4be29ada86ef65327f60\": container with ID starting with d10b4d33d8806216f2bc9d1e4e1c43e227650ef29b7d4be29ada86ef65327f60 not found: ID does not exist" Mar 12 21:08:03.008851 master-0 kubenswrapper[7484]: I0312 21:08:03.005931 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 12 21:08:03.012573 master-0 kubenswrapper[7484]: I0312 21:08:03.012507 7484 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 12 21:08:03.752741 master-0 kubenswrapper[7484]: I0312 21:08:03.752652 7484 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3efb85a2-ccc4-4ea5-825f-77c87b159570" path="/var/lib/kubelet/pods/3efb85a2-ccc4-4ea5-825f-77c87b159570/volumes" Mar 12 21:08:06.031663 master-0 kubenswrapper[7484]: I0312 21:08:06.028365 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 12 21:08:06.031663 master-0 kubenswrapper[7484]: E0312 21:08:06.028711 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3efb85a2-ccc4-4ea5-825f-77c87b159570" containerName="installer" Mar 12 21:08:06.031663 master-0 kubenswrapper[7484]: I0312 21:08:06.028727 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="3efb85a2-ccc4-4ea5-825f-77c87b159570" containerName="installer" Mar 12 21:08:06.031663 master-0 kubenswrapper[7484]: I0312 21:08:06.028942 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="3efb85a2-ccc4-4ea5-825f-77c87b159570" containerName="installer" Mar 12 21:08:06.036520 master-0 kubenswrapper[7484]: I0312 21:08:06.036445 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 12 21:08:06.038147 master-0 kubenswrapper[7484]: I0312 21:08:06.038104 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 21:08:06.044915 master-0 kubenswrapper[7484]: I0312 21:08:06.042428 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access\") pod \"installer-3-master-0\" (UID: \"222b53b1-7e5c-49d5-9795-fec4d0547398\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 21:08:06.044915 master-0 kubenswrapper[7484]: I0312 21:08:06.042488 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/222b53b1-7e5c-49d5-9795-fec4d0547398-var-lock\") pod \"installer-3-master-0\" (UID: \"222b53b1-7e5c-49d5-9795-fec4d0547398\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 21:08:06.044915 master-0 kubenswrapper[7484]: I0312 21:08:06.042601 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/222b53b1-7e5c-49d5-9795-fec4d0547398-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"222b53b1-7e5c-49d5-9795-fec4d0547398\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 21:08:06.143962 master-0 kubenswrapper[7484]: I0312 21:08:06.143860 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/222b53b1-7e5c-49d5-9795-fec4d0547398-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"222b53b1-7e5c-49d5-9795-fec4d0547398\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 21:08:06.144276 master-0 kubenswrapper[7484]: I0312 21:08:06.144006 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access\") pod 
\"installer-3-master-0\" (UID: \"222b53b1-7e5c-49d5-9795-fec4d0547398\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 21:08:06.144276 master-0 kubenswrapper[7484]: I0312 21:08:06.144055 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/222b53b1-7e5c-49d5-9795-fec4d0547398-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"222b53b1-7e5c-49d5-9795-fec4d0547398\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 21:08:06.144276 master-0 kubenswrapper[7484]: I0312 21:08:06.144082 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/222b53b1-7e5c-49d5-9795-fec4d0547398-var-lock\") pod \"installer-3-master-0\" (UID: \"222b53b1-7e5c-49d5-9795-fec4d0547398\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 21:08:06.144276 master-0 kubenswrapper[7484]: I0312 21:08:06.144161 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/222b53b1-7e5c-49d5-9795-fec4d0547398-var-lock\") pod \"installer-3-master-0\" (UID: \"222b53b1-7e5c-49d5-9795-fec4d0547398\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 21:08:06.177878 master-0 kubenswrapper[7484]: I0312 21:08:06.174115 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access\") pod \"installer-3-master-0\" (UID: \"222b53b1-7e5c-49d5-9795-fec4d0547398\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 21:08:06.364112 master-0 kubenswrapper[7484]: I0312 21:08:06.364040 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 21:08:06.852158 master-0 kubenswrapper[7484]: I0312 21:08:06.852052 7484 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 12 21:08:06.989323 master-0 kubenswrapper[7484]: I0312 21:08:06.989222 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"222b53b1-7e5c-49d5-9795-fec4d0547398","Type":"ContainerStarted","Data":"3cd4ab457c36b4a666cc4b9eccf84f6ef45f43cd01a0b7df77a1a58dcfa9aeee"} Mar 12 21:08:08.000343 master-0 kubenswrapper[7484]: I0312 21:08:08.000242 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"222b53b1-7e5c-49d5-9795-fec4d0547398","Type":"ContainerStarted","Data":"ab2ac0f8617112ac113b7f1e35ea96fef230316545e82d9bf694d881d7b9d213"} Mar 12 21:08:08.038170 master-0 kubenswrapper[7484]: I0312 21:08:08.038022 7484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-3-master-0" podStartSLOduration=2.037990719 podStartE2EDuration="2.037990719s" podCreationTimestamp="2026-03-12 21:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:08:08.026178583 +0000 UTC m=+1100.511447475" watchObservedRunningTime="2026-03-12 21:08:08.037990719 +0000 UTC m=+1100.523259561" Mar 12 21:08:08.643241 master-0 kubenswrapper[7484]: E0312 21:08:08.643187 7484 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podf4b03064_f24f_4c9f_94c4_9c9511cc5bb3.slice/crio-ae108b50148c68609153f6feca0144eb4b98ae1e4db38a6ba5c90e164073f594.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-podf4b03064_f24f_4c9f_94c4_9c9511cc5bb3.slice/crio-conmon-ae108b50148c68609153f6feca0144eb4b98ae1e4db38a6ba5c90e164073f594.scope\": RecentStats: unable to find data in memory cache]" Mar 12 21:08:08.929694 master-0 kubenswrapper[7484]: I0312 21:08:08.929508 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_f4b03064-f24f-4c9f-94c4-9c9511cc5bb3/installer/0.log" Mar 12 21:08:08.929879 master-0 kubenswrapper[7484]: I0312 21:08:08.929757 7484 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 12 21:08:09.021310 master-0 kubenswrapper[7484]: I0312 21:08:09.021267 7484 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_f4b03064-f24f-4c9f-94c4-9c9511cc5bb3/installer/0.log" Mar 12 21:08:09.021877 master-0 kubenswrapper[7484]: I0312 21:08:09.021330 7484 generic.go:334] "Generic (PLEG): container finished" podID="f4b03064-f24f-4c9f-94c4-9c9511cc5bb3" containerID="ae108b50148c68609153f6feca0144eb4b98ae1e4db38a6ba5c90e164073f594" exitCode=1 Mar 12 21:08:09.021877 master-0 kubenswrapper[7484]: I0312 21:08:09.021399 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"f4b03064-f24f-4c9f-94c4-9c9511cc5bb3","Type":"ContainerDied","Data":"ae108b50148c68609153f6feca0144eb4b98ae1e4db38a6ba5c90e164073f594"} Mar 12 21:08:09.021877 master-0 kubenswrapper[7484]: I0312 21:08:09.021435 7484 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 12 21:08:09.021877 master-0 kubenswrapper[7484]: I0312 21:08:09.021463 7484 scope.go:117] "RemoveContainer" containerID="ae108b50148c68609153f6feca0144eb4b98ae1e4db38a6ba5c90e164073f594" Mar 12 21:08:09.021877 master-0 kubenswrapper[7484]: I0312 21:08:09.021450 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"f4b03064-f24f-4c9f-94c4-9c9511cc5bb3","Type":"ContainerDied","Data":"2bda4bdbafeda5c16522bec1d5e271b94c699e773fb97f90f5231952198aba02"} Mar 12 21:08:09.039990 master-0 kubenswrapper[7484]: I0312 21:08:09.037528 7484 scope.go:117] "RemoveContainer" containerID="ae108b50148c68609153f6feca0144eb4b98ae1e4db38a6ba5c90e164073f594" Mar 12 21:08:09.039990 master-0 kubenswrapper[7484]: E0312 21:08:09.037975 7484 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae108b50148c68609153f6feca0144eb4b98ae1e4db38a6ba5c90e164073f594\": container with ID starting with ae108b50148c68609153f6feca0144eb4b98ae1e4db38a6ba5c90e164073f594 not found: ID does not exist" containerID="ae108b50148c68609153f6feca0144eb4b98ae1e4db38a6ba5c90e164073f594" Mar 12 21:08:09.039990 master-0 kubenswrapper[7484]: I0312 21:08:09.038010 7484 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae108b50148c68609153f6feca0144eb4b98ae1e4db38a6ba5c90e164073f594"} err="failed to get container status \"ae108b50148c68609153f6feca0144eb4b98ae1e4db38a6ba5c90e164073f594\": rpc error: code = NotFound desc = could not find container \"ae108b50148c68609153f6feca0144eb4b98ae1e4db38a6ba5c90e164073f594\": container with ID starting with ae108b50148c68609153f6feca0144eb4b98ae1e4db38a6ba5c90e164073f594 not found: ID does not exist" Mar 12 21:08:09.095317 master-0 kubenswrapper[7484]: I0312 21:08:09.095260 7484 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4b03064-f24f-4c9f-94c4-9c9511cc5bb3-kube-api-access\") pod \"f4b03064-f24f-4c9f-94c4-9c9511cc5bb3\" (UID: \"f4b03064-f24f-4c9f-94c4-9c9511cc5bb3\") " Mar 12 21:08:09.095507 master-0 kubenswrapper[7484]: I0312 21:08:09.095373 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4b03064-f24f-4c9f-94c4-9c9511cc5bb3-kubelet-dir\") pod \"f4b03064-f24f-4c9f-94c4-9c9511cc5bb3\" (UID: \"f4b03064-f24f-4c9f-94c4-9c9511cc5bb3\") " Mar 12 21:08:09.095507 master-0 kubenswrapper[7484]: I0312 21:08:09.095439 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f4b03064-f24f-4c9f-94c4-9c9511cc5bb3-var-lock\") pod \"f4b03064-f24f-4c9f-94c4-9c9511cc5bb3\" (UID: \"f4b03064-f24f-4c9f-94c4-9c9511cc5bb3\") " Mar 12 21:08:09.095603 master-0 kubenswrapper[7484]: I0312 21:08:09.095525 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b03064-f24f-4c9f-94c4-9c9511cc5bb3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f4b03064-f24f-4c9f-94c4-9c9511cc5bb3" (UID: "f4b03064-f24f-4c9f-94c4-9c9511cc5bb3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:08:09.095696 master-0 kubenswrapper[7484]: I0312 21:08:09.095646 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b03064-f24f-4c9f-94c4-9c9511cc5bb3-var-lock" (OuterVolumeSpecName: "var-lock") pod "f4b03064-f24f-4c9f-94c4-9c9511cc5bb3" (UID: "f4b03064-f24f-4c9f-94c4-9c9511cc5bb3"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:08:09.096117 master-0 kubenswrapper[7484]: I0312 21:08:09.096080 7484 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4b03064-f24f-4c9f-94c4-9c9511cc5bb3-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 21:08:09.096117 master-0 kubenswrapper[7484]: I0312 21:08:09.096106 7484 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f4b03064-f24f-4c9f-94c4-9c9511cc5bb3-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 21:08:09.099496 master-0 kubenswrapper[7484]: I0312 21:08:09.099460 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4b03064-f24f-4c9f-94c4-9c9511cc5bb3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f4b03064-f24f-4c9f-94c4-9c9511cc5bb3" (UID: "f4b03064-f24f-4c9f-94c4-9c9511cc5bb3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:08:09.197114 master-0 kubenswrapper[7484]: I0312 21:08:09.197050 7484 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4b03064-f24f-4c9f-94c4-9c9511cc5bb3-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 21:08:09.384766 master-0 kubenswrapper[7484]: I0312 21:08:09.384600 7484 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 12 21:08:09.391134 master-0 kubenswrapper[7484]: I0312 21:08:09.391051 7484 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 12 21:08:09.748105 master-0 kubenswrapper[7484]: I0312 21:08:09.747946 7484 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b03064-f24f-4c9f-94c4-9c9511cc5bb3" path="/var/lib/kubelet/pods/f4b03064-f24f-4c9f-94c4-9c9511cc5bb3/volumes" Mar 12 21:08:19.123942 master-0 kubenswrapper[7484]: I0312 21:08:19.123668 7484 generic.go:334] "Generic (PLEG): container finished" podID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerID="e2916ee608198e843f503ac1b99774e97d332ea70158688e35693b97b4ee8573" exitCode=0 Mar 12 21:08:19.124787 master-0 kubenswrapper[7484]: I0312 21:08:19.123780 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" event={"ID":"a3828a1d-8180-4c7b-b423-4488f7fc0b76","Type":"ContainerDied","Data":"e2916ee608198e843f503ac1b99774e97d332ea70158688e35693b97b4ee8573"} Mar 12 21:08:19.124787 master-0 kubenswrapper[7484]: I0312 21:08:19.124120 7484 scope.go:117] "RemoveContainer" containerID="91d2028136276069b3430f01cdedfd621a7ff241728670fbdc4cdf16424e1832" Mar 12 21:08:19.124787 master-0 kubenswrapper[7484]: I0312 21:08:19.124136 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" 
event={"ID":"a3828a1d-8180-4c7b-b423-4488f7fc0b76","Type":"ContainerStarted","Data":"ae5bc211a2f167ebe68b7da6282898aa121ad62c39b8b8eeb6d5eb9a37b80910"} Mar 12 21:08:19.411955 master-0 kubenswrapper[7484]: I0312 21:08:19.411748 7484 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" Mar 12 21:08:19.415066 master-0 kubenswrapper[7484]: I0312 21:08:19.415017 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:19.415066 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:19.415066 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:19.415066 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:19.415487 master-0 kubenswrapper[7484]: I0312 21:08:19.415442 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:20.411442 master-0 kubenswrapper[7484]: I0312 21:08:20.411334 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" Mar 12 21:08:20.414389 master-0 kubenswrapper[7484]: I0312 21:08:20.414310 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:20.414389 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:20.414389 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:20.414389 master-0 kubenswrapper[7484]: 
healthz check failed Mar 12 21:08:20.414786 master-0 kubenswrapper[7484]: I0312 21:08:20.414458 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:21.414401 master-0 kubenswrapper[7484]: I0312 21:08:21.414303 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:21.414401 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:21.414401 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:21.414401 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:21.415392 master-0 kubenswrapper[7484]: I0312 21:08:21.414411 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:22.414375 master-0 kubenswrapper[7484]: I0312 21:08:22.414281 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:22.414375 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:22.414375 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:22.414375 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:22.415443 master-0 kubenswrapper[7484]: I0312 21:08:22.414384 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" 
podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:23.414595 master-0 kubenswrapper[7484]: I0312 21:08:23.414498 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:23.414595 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:23.414595 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:23.414595 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:23.415591 master-0 kubenswrapper[7484]: I0312 21:08:23.414624 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:24.414426 master-0 kubenswrapper[7484]: I0312 21:08:24.414324 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:24.414426 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:24.414426 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:24.414426 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:24.415622 master-0 kubenswrapper[7484]: I0312 21:08:24.414438 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:25.414955 master-0 kubenswrapper[7484]: I0312 21:08:25.414881 7484 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:25.414955 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:25.414955 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:25.414955 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:25.415950 master-0 kubenswrapper[7484]: I0312 21:08:25.414979 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:26.415101 master-0 kubenswrapper[7484]: I0312 21:08:26.415000 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:26.415101 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:26.415101 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:26.415101 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:26.416159 master-0 kubenswrapper[7484]: I0312 21:08:26.415115 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:27.414039 master-0 kubenswrapper[7484]: I0312 21:08:27.413919 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:27.414039 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:27.414039 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:27.414039 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:27.414571 master-0 kubenswrapper[7484]: I0312 21:08:27.414090 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:28.414317 master-0 kubenswrapper[7484]: I0312 21:08:28.414203 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:28.414317 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:28.414317 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:28.414317 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:28.414317 master-0 kubenswrapper[7484]: I0312 21:08:28.414306 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:29.414187 master-0 kubenswrapper[7484]: I0312 21:08:29.414086 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:29.414187 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:29.414187 master-0 kubenswrapper[7484]: [+]process-running ok 
Mar 12 21:08:29.414187 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:29.415257 master-0 kubenswrapper[7484]: I0312 21:08:29.414214 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:30.414571 master-0 kubenswrapper[7484]: I0312 21:08:30.414476 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:30.414571 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:30.414571 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:30.414571 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:30.415782 master-0 kubenswrapper[7484]: I0312 21:08:30.414573 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:31.414648 master-0 kubenswrapper[7484]: I0312 21:08:31.414525 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:31.414648 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:31.414648 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:31.414648 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:31.415635 master-0 kubenswrapper[7484]: I0312 21:08:31.414649 7484 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:32.414351 master-0 kubenswrapper[7484]: I0312 21:08:32.414244 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:32.414351 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:32.414351 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:32.414351 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:32.415578 master-0 kubenswrapper[7484]: I0312 21:08:32.414343 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:33.414623 master-0 kubenswrapper[7484]: I0312 21:08:33.414508 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:33.414623 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:33.414623 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:33.414623 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:33.415614 master-0 kubenswrapper[7484]: I0312 21:08:33.414616 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:34.414770 
master-0 kubenswrapper[7484]: I0312 21:08:34.414659 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:34.414770 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:34.414770 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:34.414770 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:34.414770 master-0 kubenswrapper[7484]: I0312 21:08:34.414742 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:35.414989 master-0 kubenswrapper[7484]: I0312 21:08:35.414892 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:35.414989 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:35.414989 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:35.414989 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:35.416166 master-0 kubenswrapper[7484]: I0312 21:08:35.414991 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:36.414766 master-0 kubenswrapper[7484]: I0312 21:08:36.414691 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:36.414766 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:36.414766 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:36.414766 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:36.416233 master-0 kubenswrapper[7484]: I0312 21:08:36.414785 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:37.414325 master-0 kubenswrapper[7484]: I0312 21:08:37.414218 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:37.414325 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:37.414325 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:37.414325 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:37.414924 master-0 kubenswrapper[7484]: I0312 21:08:37.414332 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:38.414227 master-0 kubenswrapper[7484]: I0312 21:08:38.414138 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:38.414227 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:38.414227 master-0 
kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:38.414227 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:38.415246 master-0 kubenswrapper[7484]: I0312 21:08:38.414237 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:39.414544 master-0 kubenswrapper[7484]: I0312 21:08:39.414453 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:39.414544 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:39.414544 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:39.414544 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:39.415581 master-0 kubenswrapper[7484]: I0312 21:08:39.414562 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:40.414482 master-0 kubenswrapper[7484]: I0312 21:08:40.414379 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:40.414482 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:40.414482 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:40.414482 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:40.415457 master-0 kubenswrapper[7484]: I0312 21:08:40.414491 7484 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:41.414642 master-0 kubenswrapper[7484]: I0312 21:08:41.414528 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:41.414642 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:41.414642 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:41.414642 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:41.415701 master-0 kubenswrapper[7484]: I0312 21:08:41.414648 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:42.415592 master-0 kubenswrapper[7484]: I0312 21:08:42.415466 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:42.415592 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:42.415592 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:42.415592 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:42.416789 master-0 kubenswrapper[7484]: I0312 21:08:42.415615 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 12 21:08:43.414603 master-0 kubenswrapper[7484]: I0312 21:08:43.414479 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:43.414603 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:43.414603 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:43.414603 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:43.414603 master-0 kubenswrapper[7484]: I0312 21:08:43.414584 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:44.415137 master-0 kubenswrapper[7484]: I0312 21:08:44.415012 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:44.415137 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:44.415137 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:44.415137 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:44.415137 master-0 kubenswrapper[7484]: I0312 21:08:44.415124 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:45.415313 master-0 kubenswrapper[7484]: I0312 21:08:45.415134 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:45.415313 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:45.415313 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:45.415313 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:45.415313 master-0 kubenswrapper[7484]: I0312 21:08:45.415283 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:46.414946 master-0 kubenswrapper[7484]: I0312 21:08:46.414884 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:46.414946 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:46.414946 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:46.414946 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:46.415362 master-0 kubenswrapper[7484]: I0312 21:08:46.414970 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:47.414952 master-0 kubenswrapper[7484]: I0312 21:08:47.414849 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:47.414952 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 
21:08:47.414952 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:47.414952 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:47.416133 master-0 kubenswrapper[7484]: I0312 21:08:47.414990 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:48.414424 master-0 kubenswrapper[7484]: I0312 21:08:48.414340 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:48.414424 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:48.414424 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:48.414424 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:48.414994 master-0 kubenswrapper[7484]: I0312 21:08:48.414432 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:49.415173 master-0 kubenswrapper[7484]: I0312 21:08:49.415064 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:49.415173 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:49.415173 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:49.415173 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:49.416232 master-0 kubenswrapper[7484]: I0312 21:08:49.415195 
7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:50.414635 master-0 kubenswrapper[7484]: I0312 21:08:50.414536 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:50.414635 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:50.414635 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:50.414635 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:50.415274 master-0 kubenswrapper[7484]: I0312 21:08:50.414635 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:51.414490 master-0 kubenswrapper[7484]: I0312 21:08:51.414358 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:51.414490 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:51.414490 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:51.414490 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:51.415041 master-0 kubenswrapper[7484]: I0312 21:08:51.414490 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 12 21:08:52.414475 master-0 kubenswrapper[7484]: I0312 21:08:52.414372 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:52.414475 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:52.414475 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:52.414475 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:52.415502 master-0 kubenswrapper[7484]: I0312 21:08:52.414474 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:53.414452 master-0 kubenswrapper[7484]: I0312 21:08:53.414333 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:53.414452 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:53.414452 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:53.414452 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:53.415684 master-0 kubenswrapper[7484]: I0312 21:08:53.414450 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:54.416470 master-0 kubenswrapper[7484]: I0312 21:08:54.416282 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:54.416470 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:54.416470 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:54.416470 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:54.417616 master-0 kubenswrapper[7484]: I0312 21:08:54.416533 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:55.336006 master-0 kubenswrapper[7484]: I0312 21:08:55.335933 7484 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 12 21:08:55.336274 master-0 kubenswrapper[7484]: E0312 21:08:55.336245 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b03064-f24f-4c9f-94c4-9c9511cc5bb3" containerName="installer" Mar 12 21:08:55.336316 master-0 kubenswrapper[7484]: I0312 21:08:55.336272 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b03064-f24f-4c9f-94c4-9c9511cc5bb3" containerName="installer" Mar 12 21:08:55.336475 master-0 kubenswrapper[7484]: I0312 21:08:55.336451 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b03064-f24f-4c9f-94c4-9c9511cc5bb3" containerName="installer" Mar 12 21:08:55.336965 master-0 kubenswrapper[7484]: I0312 21:08:55.336932 7484 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 12 21:08:55.337157 master-0 kubenswrapper[7484]: I0312 21:08:55.337097 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:08:55.337318 master-0 kubenswrapper[7484]: I0312 21:08:55.337263 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" containerID="cri-o://0c4f41c6272feddd07ae16e6e9ba5929d190e5949f49ce16a888e464f3277bb3" gracePeriod=15 Mar 12 21:08:55.337373 master-0 kubenswrapper[7484]: I0312 21:08:55.337307 7484 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://293b592a6aebbbbed58da86d9dee8f9df9bbf7c626aca82c95e65d3a571789d2" gracePeriod=15 Mar 12 21:08:55.338628 master-0 kubenswrapper[7484]: I0312 21:08:55.338568 7484 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 12 21:08:55.339001 master-0 kubenswrapper[7484]: E0312 21:08:55.338966 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup" Mar 12 21:08:55.339001 master-0 kubenswrapper[7484]: I0312 21:08:55.338996 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup" Mar 12 21:08:55.339082 master-0 kubenswrapper[7484]: E0312 21:08:55.339029 7484 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" Mar 12 21:08:55.339082 master-0 kubenswrapper[7484]: I0312 21:08:55.339043 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" Mar 12 21:08:55.339176 master-0 kubenswrapper[7484]: E0312 21:08:55.339087 7484 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" Mar 12 21:08:55.339176 master-0 kubenswrapper[7484]: I0312 21:08:55.339103 7484 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" Mar 12 21:08:55.339325 master-0 kubenswrapper[7484]: I0312 21:08:55.339299 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" Mar 12 21:08:55.339368 master-0 kubenswrapper[7484]: I0312 21:08:55.339352 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup" Mar 12 21:08:55.339426 master-0 kubenswrapper[7484]: I0312 21:08:55.339406 7484 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" Mar 12 21:08:55.344931 master-0 kubenswrapper[7484]: I0312 21:08:55.344879 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:08:55.394066 master-0 kubenswrapper[7484]: I0312 21:08:55.393699 7484 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 12 21:08:55.413485 master-0 kubenswrapper[7484]: E0312 21:08:55.413266 7484 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:08:55.415864 master-0 kubenswrapper[7484]: I0312 21:08:55.414591 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:55.415864 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:55.415864 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:55.415864 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:55.415864 master-0 kubenswrapper[7484]: I0312 21:08:55.414645 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:55.458008 master-0 kubenswrapper[7484]: I0312 21:08:55.457960 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:08:55.458008 master-0 kubenswrapper[7484]: I0312 21:08:55.458005 
7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:08:55.458472 master-0 kubenswrapper[7484]: I0312 21:08:55.458107 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:08:55.458472 master-0 kubenswrapper[7484]: I0312 21:08:55.458213 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:08:55.458472 master-0 kubenswrapper[7484]: I0312 21:08:55.458256 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:08:55.458472 master-0 kubenswrapper[7484]: I0312 21:08:55.458339 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: 
\"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:08:55.458472 master-0 kubenswrapper[7484]: I0312 21:08:55.458362 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:08:55.458472 master-0 kubenswrapper[7484]: I0312 21:08:55.458389 7484 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:08:55.559563 master-0 kubenswrapper[7484]: I0312 21:08:55.559489 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:08:55.559752 master-0 kubenswrapper[7484]: I0312 21:08:55.559576 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:08:55.559752 master-0 kubenswrapper[7484]: I0312 21:08:55.559599 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:08:55.559752 master-0 kubenswrapper[7484]: I0312 21:08:55.559620 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:08:55.559752 master-0 kubenswrapper[7484]: I0312 21:08:55.559646 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:08:55.559752 master-0 kubenswrapper[7484]: I0312 21:08:55.559663 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:08:55.559752 master-0 kubenswrapper[7484]: I0312 21:08:55.559697 7484 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:08:55.559752 master-0 kubenswrapper[7484]: I0312 21:08:55.559718 7484 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:08:55.560254 master-0 kubenswrapper[7484]: I0312 21:08:55.559784 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:08:55.560254 master-0 kubenswrapper[7484]: I0312 21:08:55.559839 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:08:55.560254 master-0 kubenswrapper[7484]: I0312 21:08:55.559864 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:08:55.560254 master-0 kubenswrapper[7484]: I0312 21:08:55.559886 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:08:55.560254 master-0 kubenswrapper[7484]: I0312 21:08:55.559905 7484 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:08:55.560254 master-0 kubenswrapper[7484]: I0312 21:08:55.559927 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:08:55.560254 master-0 kubenswrapper[7484]: I0312 21:08:55.559945 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:08:55.560254 master-0 kubenswrapper[7484]: I0312 21:08:55.559966 7484 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:08:55.688741 master-0 kubenswrapper[7484]: I0312 21:08:55.688550 7484 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:08:55.714054 master-0 kubenswrapper[7484]: I0312 21:08:55.714002 7484 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:08:55.716870 master-0 kubenswrapper[7484]: W0312 21:08:55.716838 7484 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod899242a15b2bdf3b4a04fb323647ca94.slice/crio-873fdfa9ac893a2fcdda2a0631dc6e4eee04d1b74ee51efc77199a0762ee41f6 WatchSource:0}: Error finding container 873fdfa9ac893a2fcdda2a0631dc6e4eee04d1b74ee51efc77199a0762ee41f6: Status 404 returned error can't find the container with id 873fdfa9ac893a2fcdda2a0631dc6e4eee04d1b74ee51efc77199a0762ee41f6 Mar 12 21:08:55.731851 master-0 kubenswrapper[7484]: E0312 21:08:55.731725 7484 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189c343064bd951d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:899242a15b2bdf3b4a04fb323647ca94,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 21:08:55.730353437 +0000 UTC m=+1148.215622239,LastTimestamp:2026-03-12 21:08:55.730353437 +0000 UTC m=+1148.215622239,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 21:08:56.414230 master-0 kubenswrapper[7484]: I0312 21:08:56.414165 7484 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:56.414230 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:56.414230 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:56.414230 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:56.414641 master-0 kubenswrapper[7484]: I0312 21:08:56.414248 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:56.437840 master-0 kubenswrapper[7484]: I0312 21:08:56.437734 7484 generic.go:334] "Generic (PLEG): container finished" podID="222b53b1-7e5c-49d5-9795-fec4d0547398" containerID="ab2ac0f8617112ac113b7f1e35ea96fef230316545e82d9bf694d881d7b9d213" exitCode=0 Mar 12 21:08:56.438091 master-0 kubenswrapper[7484]: I0312 21:08:56.437879 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"222b53b1-7e5c-49d5-9795-fec4d0547398","Type":"ContainerDied","Data":"ab2ac0f8617112ac113b7f1e35ea96fef230316545e82d9bf694d881d7b9d213"} Mar 12 21:08:56.439358 master-0 kubenswrapper[7484]: I0312 21:08:56.439260 7484 status_manager.go:851] "Failed to get status for pod" podUID="222b53b1-7e5c-49d5-9795-fec4d0547398" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:08:56.440259 master-0 kubenswrapper[7484]: I0312 21:08:56.440198 7484 status_manager.go:851] "Failed to get status for pod" podUID="899242a15b2bdf3b4a04fb323647ca94" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:08:56.442424 master-0 kubenswrapper[7484]: I0312 21:08:56.442350 7484 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="293b592a6aebbbbed58da86d9dee8f9df9bbf7c626aca82c95e65d3a571789d2" exitCode=0 Mar 12 21:08:56.445665 master-0 kubenswrapper[7484]: I0312 21:08:56.445592 7484 generic.go:334] "Generic (PLEG): container finished" podID="077dd10388b9e3e48a07382126e86621" containerID="52f8cc40b0daf7f102ea6364b20a287ac9f811651bcaf6ef7554a793bf5238c2" exitCode=0 Mar 12 21:08:56.445869 master-0 kubenswrapper[7484]: I0312 21:08:56.445716 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerDied","Data":"52f8cc40b0daf7f102ea6364b20a287ac9f811651bcaf6ef7554a793bf5238c2"} Mar 12 21:08:56.445869 master-0 kubenswrapper[7484]: I0312 21:08:56.445838 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"305e45867f0f5c512d8dca3c39de15088c17eab90b2969aafd739643c4b112ce"} Mar 12 21:08:56.447146 master-0 kubenswrapper[7484]: E0312 21:08:56.447061 7484 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:08:56.447146 master-0 kubenswrapper[7484]: I0312 21:08:56.447114 7484 status_manager.go:851] "Failed to get status for pod" podUID="899242a15b2bdf3b4a04fb323647ca94" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:08:56.447913 master-0 kubenswrapper[7484]: I0312 21:08:56.447852 7484 status_manager.go:851] "Failed to get status for pod" podUID="222b53b1-7e5c-49d5-9795-fec4d0547398" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:08:56.450070 master-0 kubenswrapper[7484]: I0312 21:08:56.450026 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"899242a15b2bdf3b4a04fb323647ca94","Type":"ContainerStarted","Data":"2856d5840548c1bc6c65248c16a64600f315dc0e994bef020e791573a50dc5ec"} Mar 12 21:08:56.450070 master-0 kubenswrapper[7484]: I0312 21:08:56.450060 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"899242a15b2bdf3b4a04fb323647ca94","Type":"ContainerStarted","Data":"873fdfa9ac893a2fcdda2a0631dc6e4eee04d1b74ee51efc77199a0762ee41f6"} Mar 12 21:08:56.452068 master-0 kubenswrapper[7484]: I0312 21:08:56.451984 7484 status_manager.go:851] "Failed to get status for pod" podUID="222b53b1-7e5c-49d5-9795-fec4d0547398" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:08:56.452961 master-0 kubenswrapper[7484]: I0312 21:08:56.452905 7484 status_manager.go:851] "Failed to get status for pod" podUID="899242a15b2bdf3b4a04fb323647ca94" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:08:57.447101 master-0 kubenswrapper[7484]: I0312 21:08:57.414728 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:57.447101 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:57.447101 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:57.447101 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:57.447101 master-0 kubenswrapper[7484]: I0312 21:08:57.414800 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:57.533023 master-0 kubenswrapper[7484]: I0312 21:08:57.532988 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"04597b2715ae95f58af55df14000ea14c61393b1e3b42149a8be2f89e6b9f26e"} Mar 12 21:08:57.533229 master-0 kubenswrapper[7484]: I0312 21:08:57.533215 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"78d6b166dcab5df7019e2a3ab78a2ffecd20c5ee5d9fbeedec93a5d8114e7e50"} Mar 12 21:08:57.900504 master-0 kubenswrapper[7484]: I0312 21:08:57.900463 7484 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 21:08:58.095245 master-0 kubenswrapper[7484]: I0312 21:08:58.094830 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access\") pod \"222b53b1-7e5c-49d5-9795-fec4d0547398\" (UID: \"222b53b1-7e5c-49d5-9795-fec4d0547398\") " Mar 12 21:08:58.095245 master-0 kubenswrapper[7484]: I0312 21:08:58.094959 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/222b53b1-7e5c-49d5-9795-fec4d0547398-kubelet-dir\") pod \"222b53b1-7e5c-49d5-9795-fec4d0547398\" (UID: \"222b53b1-7e5c-49d5-9795-fec4d0547398\") " Mar 12 21:08:58.095245 master-0 kubenswrapper[7484]: I0312 21:08:58.094986 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/222b53b1-7e5c-49d5-9795-fec4d0547398-var-lock\") pod \"222b53b1-7e5c-49d5-9795-fec4d0547398\" (UID: \"222b53b1-7e5c-49d5-9795-fec4d0547398\") " Mar 12 21:08:58.095245 master-0 kubenswrapper[7484]: I0312 21:08:58.095233 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/222b53b1-7e5c-49d5-9795-fec4d0547398-var-lock" (OuterVolumeSpecName: "var-lock") pod "222b53b1-7e5c-49d5-9795-fec4d0547398" (UID: "222b53b1-7e5c-49d5-9795-fec4d0547398"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:08:58.095506 master-0 kubenswrapper[7484]: I0312 21:08:58.095268 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/222b53b1-7e5c-49d5-9795-fec4d0547398-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "222b53b1-7e5c-49d5-9795-fec4d0547398" (UID: "222b53b1-7e5c-49d5-9795-fec4d0547398"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:08:58.100292 master-0 kubenswrapper[7484]: I0312 21:08:58.098109 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "222b53b1-7e5c-49d5-9795-fec4d0547398" (UID: "222b53b1-7e5c-49d5-9795-fec4d0547398"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:08:58.179771 master-0 kubenswrapper[7484]: I0312 21:08:58.179732 7484 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 21:08:58.196379 master-0 kubenswrapper[7484]: I0312 21:08:58.196161 7484 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 21:08:58.196379 master-0 kubenswrapper[7484]: I0312 21:08:58.196185 7484 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/222b53b1-7e5c-49d5-9795-fec4d0547398-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 21:08:58.196379 master-0 kubenswrapper[7484]: I0312 21:08:58.196196 7484 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/222b53b1-7e5c-49d5-9795-fec4d0547398-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 21:08:58.297047 master-0 kubenswrapper[7484]: I0312 21:08:58.296972 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " Mar 12 21:08:58.297288 master-0 kubenswrapper[7484]: I0312 
21:08:58.297061 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " Mar 12 21:08:58.297288 master-0 kubenswrapper[7484]: I0312 21:08:58.297108 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " Mar 12 21:08:58.297288 master-0 kubenswrapper[7484]: I0312 21:08:58.297141 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " Mar 12 21:08:58.297288 master-0 kubenswrapper[7484]: I0312 21:08:58.297189 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " Mar 12 21:08:58.297288 master-0 kubenswrapper[7484]: I0312 21:08:58.297227 7484 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " Mar 12 21:08:58.297645 master-0 kubenswrapper[7484]: I0312 21:08:58.297521 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets" (OuterVolumeSpecName: "secrets") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). 
InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:08:58.297645 master-0 kubenswrapper[7484]: I0312 21:08:58.297563 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config" (OuterVolumeSpecName: "config") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:08:58.297645 master-0 kubenswrapper[7484]: I0312 21:08:58.297584 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs" (OuterVolumeSpecName: "logs") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:08:58.297645 master-0 kubenswrapper[7484]: I0312 21:08:58.297578 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:08:58.298005 master-0 kubenswrapper[7484]: I0312 21:08:58.297656 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "etc-kubernetes-cloud". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:08:58.298005 master-0 kubenswrapper[7484]: I0312 21:08:58.297758 7484 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:08:58.409909 master-0 kubenswrapper[7484]: I0312 21:08:58.401160 7484 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 21:08:58.409909 master-0 kubenswrapper[7484]: I0312 21:08:58.401235 7484 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") on node \"master-0\" DevicePath \"\"" Mar 12 21:08:58.409909 master-0 kubenswrapper[7484]: I0312 21:08:58.401256 7484 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\"" Mar 12 21:08:58.409909 master-0 kubenswrapper[7484]: I0312 21:08:58.401278 7484 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") on node \"master-0\" DevicePath \"\"" Mar 12 21:08:58.409909 master-0 kubenswrapper[7484]: I0312 21:08:58.401295 7484 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") on node \"master-0\" DevicePath \"\"" Mar 12 21:08:58.409909 master-0 kubenswrapper[7484]: I0312 21:08:58.401313 7484 reconciler_common.go:293] "Volume detached 
for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") on node \"master-0\" DevicePath \"\"" Mar 12 21:08:58.421851 master-0 kubenswrapper[7484]: I0312 21:08:58.420134 7484 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:08:58.421851 master-0 kubenswrapper[7484]: [-]has-synced failed: reason withheld Mar 12 21:08:58.421851 master-0 kubenswrapper[7484]: [+]process-running ok Mar 12 21:08:58.421851 master-0 kubenswrapper[7484]: healthz check failed Mar 12 21:08:58.421851 master-0 kubenswrapper[7484]: I0312 21:08:58.420190 7484 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:08:58.602612 master-0 kubenswrapper[7484]: I0312 21:08:58.602573 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"222b53b1-7e5c-49d5-9795-fec4d0547398","Type":"ContainerDied","Data":"3cd4ab457c36b4a666cc4b9eccf84f6ef45f43cd01a0b7df77a1a58dcfa9aeee"} Mar 12 21:08:58.607781 master-0 kubenswrapper[7484]: I0312 21:08:58.607749 7484 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cd4ab457c36b4a666cc4b9eccf84f6ef45f43cd01a0b7df77a1a58dcfa9aeee" Mar 12 21:08:58.607973 master-0 kubenswrapper[7484]: I0312 21:08:58.603736 7484 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 21:08:58.637138 master-0 kubenswrapper[7484]: I0312 21:08:58.637083 7484 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="0c4f41c6272feddd07ae16e6e9ba5929d190e5949f49ce16a888e464f3277bb3" exitCode=0 Mar 12 21:08:58.637313 master-0 kubenswrapper[7484]: I0312 21:08:58.637195 7484 scope.go:117] "RemoveContainer" containerID="293b592a6aebbbbed58da86d9dee8f9df9bbf7c626aca82c95e65d3a571789d2" Mar 12 21:08:58.637349 master-0 kubenswrapper[7484]: I0312 21:08:58.637335 7484 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 12 21:08:58.673294 master-0 kubenswrapper[7484]: I0312 21:08:58.673025 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"1867cbd1eea641a204f5d8db13d19bc48d06f54cf7a7cbc0d8d91fbb925b3a69"} Mar 12 21:08:58.673294 master-0 kubenswrapper[7484]: I0312 21:08:58.673073 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"ddc570d95acec84b08471105156342249118106b435695f1badc9f7a2232d339"} Mar 12 21:08:58.673294 master-0 kubenswrapper[7484]: I0312 21:08:58.673083 7484 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"0845e7aef44f13460897c051d69b9fc344426906701d1496cc6673dd26243447"} Mar 12 21:08:58.673510 master-0 kubenswrapper[7484]: I0312 21:08:58.673490 7484 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:08:58.698269 master-0 
kubenswrapper[7484]: I0312 21:08:58.696589 7484 scope.go:117] "RemoveContainer" containerID="0c4f41c6272feddd07ae16e6e9ba5929d190e5949f49ce16a888e464f3277bb3" Mar 12 21:08:58.746017 master-0 kubenswrapper[7484]: I0312 21:08:58.745928 7484 scope.go:117] "RemoveContainer" containerID="30bcb0d2fdcb56e224f2a443567cf3f56d89a253adb3d5c2682e4fce2aac1458" Mar 12 21:08:58.789245 master-0 systemd[1]: Stopping Kubernetes Kubelet... Mar 12 21:08:58.840244 master-0 systemd[1]: kubelet.service: Deactivated successfully. Mar 12 21:08:58.840494 master-0 systemd[1]: Stopped Kubernetes Kubelet. Mar 12 21:08:58.841416 master-0 systemd[1]: kubelet.service: Consumed 2min 57.451s CPU time. Mar 12 21:08:58.853445 master-0 systemd[1]: Starting Kubernetes Kubelet... Mar 12 21:08:59.026943 master-0 kubenswrapper[31456]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 21:08:59.026943 master-0 kubenswrapper[31456]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 12 21:08:59.026943 master-0 kubenswrapper[31456]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 21:08:59.026943 master-0 kubenswrapper[31456]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 21:08:59.026943 master-0 kubenswrapper[31456]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Mar 12 21:08:59.026943 master-0 kubenswrapper[31456]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 21:08:59.027492 master-0 kubenswrapper[31456]: I0312 21:08:59.027038 31456 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 12 21:08:59.032095 master-0 kubenswrapper[31456]: W0312 21:08:59.032059 31456 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 12 21:08:59.032095 master-0 kubenswrapper[31456]: W0312 21:08:59.032091 31456 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 12 21:08:59.032095 master-0 kubenswrapper[31456]: W0312 21:08:59.032098 31456 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 12 21:08:59.032212 master-0 kubenswrapper[31456]: W0312 21:08:59.032104 31456 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 12 21:08:59.032212 master-0 kubenswrapper[31456]: W0312 21:08:59.032110 31456 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 12 21:08:59.032212 master-0 kubenswrapper[31456]: W0312 21:08:59.032115 31456 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 12 21:08:59.032212 master-0 kubenswrapper[31456]: W0312 21:08:59.032120 31456 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 12 21:08:59.032212 master-0 kubenswrapper[31456]: W0312 21:08:59.032125 31456 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 12 21:08:59.032212 master-0 kubenswrapper[31456]: W0312 21:08:59.032129 31456 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 12 
21:08:59.032212 master-0 kubenswrapper[31456]: W0312 21:08:59.032134 31456 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 12 21:08:59.032212 master-0 kubenswrapper[31456]: W0312 21:08:59.032140 31456 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 12 21:08:59.032212 master-0 kubenswrapper[31456]: W0312 21:08:59.032146 31456 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 12 21:08:59.032212 master-0 kubenswrapper[31456]: W0312 21:08:59.032151 31456 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 12 21:08:59.032212 master-0 kubenswrapper[31456]: W0312 21:08:59.032155 31456 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 12 21:08:59.032212 master-0 kubenswrapper[31456]: W0312 21:08:59.032160 31456 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 12 21:08:59.032212 master-0 kubenswrapper[31456]: W0312 21:08:59.032167 31456 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 12 21:08:59.032212 master-0 kubenswrapper[31456]: W0312 21:08:59.032175 31456 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 12 21:08:59.032212 master-0 kubenswrapper[31456]: W0312 21:08:59.032180 31456 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 12 21:08:59.032212 master-0 kubenswrapper[31456]: W0312 21:08:59.032185 31456 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 12 21:08:59.032212 master-0 kubenswrapper[31456]: W0312 21:08:59.032190 31456 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 12 21:08:59.032212 master-0 kubenswrapper[31456]: W0312 21:08:59.032195 31456 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 12 21:08:59.032212 master-0 kubenswrapper[31456]: W0312 21:08:59.032199 31456 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 12 21:08:59.032760 master-0 kubenswrapper[31456]: W0312 21:08:59.032203 31456 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 12 21:08:59.032760 master-0 kubenswrapper[31456]: W0312 21:08:59.032208 31456 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 12 21:08:59.032760 master-0 kubenswrapper[31456]: W0312 21:08:59.032213 31456 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 12 21:08:59.032760 master-0 kubenswrapper[31456]: W0312 21:08:59.032220 31456 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 12 21:08:59.032760 master-0 kubenswrapper[31456]: W0312 21:08:59.032226 31456 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 12 21:08:59.032760 master-0 kubenswrapper[31456]: W0312 21:08:59.032231 31456 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 12 21:08:59.032760 master-0 kubenswrapper[31456]: W0312 21:08:59.032236 31456 feature_gate.go:330] unrecognized feature gate: Example Mar 12 21:08:59.032760 master-0 kubenswrapper[31456]: W0312 
21:08:59.032241 31456 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 12 21:08:59.032760 master-0 kubenswrapper[31456]: W0312 21:08:59.032244 31456 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 12 21:08:59.032760 master-0 kubenswrapper[31456]: W0312 21:08:59.032249 31456 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 12 21:08:59.032760 master-0 kubenswrapper[31456]: W0312 21:08:59.032253 31456 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 12 21:08:59.032760 master-0 kubenswrapper[31456]: W0312 21:08:59.032259 31456 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 12 21:08:59.032760 master-0 kubenswrapper[31456]: W0312 21:08:59.032263 31456 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 12 21:08:59.032760 master-0 kubenswrapper[31456]: W0312 21:08:59.032268 31456 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 12 21:08:59.032760 master-0 kubenswrapper[31456]: W0312 21:08:59.032272 31456 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 12 21:08:59.032760 master-0 kubenswrapper[31456]: W0312 21:08:59.032277 31456 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 12 21:08:59.032760 master-0 kubenswrapper[31456]: W0312 21:08:59.032280 31456 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 12 21:08:59.032760 master-0 kubenswrapper[31456]: W0312 21:08:59.032284 31456 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 12 21:08:59.032760 master-0 kubenswrapper[31456]: W0312 21:08:59.032288 31456 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 12 21:08:59.032760 master-0 kubenswrapper[31456]: W0312 21:08:59.032292 31456 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 12 21:08:59.033474 master-0 
kubenswrapper[31456]: W0312 21:08:59.032296 31456 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 12 21:08:59.033474 master-0 kubenswrapper[31456]: W0312 21:08:59.032299 31456 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 12 21:08:59.033474 master-0 kubenswrapper[31456]: W0312 21:08:59.032303 31456 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 12 21:08:59.033474 master-0 kubenswrapper[31456]: W0312 21:08:59.032307 31456 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 12 21:08:59.033474 master-0 kubenswrapper[31456]: W0312 21:08:59.032310 31456 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 12 21:08:59.033474 master-0 kubenswrapper[31456]: W0312 21:08:59.032314 31456 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 12 21:08:59.033474 master-0 kubenswrapper[31456]: W0312 21:08:59.032318 31456 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 12 21:08:59.033474 master-0 kubenswrapper[31456]: W0312 21:08:59.032321 31456 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 12 21:08:59.033474 master-0 kubenswrapper[31456]: W0312 21:08:59.032325 31456 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 12 21:08:59.033474 master-0 kubenswrapper[31456]: W0312 21:08:59.032331 31456 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 12 21:08:59.033474 master-0 kubenswrapper[31456]: W0312 21:08:59.032336 31456 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 12 21:08:59.033474 master-0 kubenswrapper[31456]: W0312 21:08:59.032340 31456 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 12 21:08:59.033474 master-0 kubenswrapper[31456]: W0312 21:08:59.032344 31456 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 12 21:08:59.033474 master-0 kubenswrapper[31456]: W0312 21:08:59.032349 31456 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 12 21:08:59.033474 master-0 kubenswrapper[31456]: W0312 21:08:59.032353 31456 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 12 21:08:59.033474 master-0 kubenswrapper[31456]: W0312 21:08:59.032357 31456 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 12 21:08:59.033474 master-0 kubenswrapper[31456]: W0312 21:08:59.032360 31456 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 12 21:08:59.033474 master-0 kubenswrapper[31456]: W0312 21:08:59.032364 31456 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 12 21:08:59.033474 master-0 kubenswrapper[31456]: W0312 21:08:59.032367 31456 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 12 21:08:59.033474 master-0 kubenswrapper[31456]: W0312 21:08:59.032371 31456 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 12 21:08:59.033988 master-0 kubenswrapper[31456]: W0312 21:08:59.032374 31456 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 12 21:08:59.033988 master-0 kubenswrapper[31456]: W0312 21:08:59.032378 31456 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 12 21:08:59.033988 master-0 kubenswrapper[31456]: W0312 21:08:59.032381 31456 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 12 
21:08:59.033988 master-0 kubenswrapper[31456]: W0312 21:08:59.032385 31456 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 12 21:08:59.033988 master-0 kubenswrapper[31456]: W0312 21:08:59.032389 31456 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 12 21:08:59.033988 master-0 kubenswrapper[31456]: W0312 21:08:59.032393 31456 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 12 21:08:59.033988 master-0 kubenswrapper[31456]: W0312 21:08:59.032396 31456 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 12 21:08:59.033988 master-0 kubenswrapper[31456]: W0312 21:08:59.032400 31456 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 12 21:08:59.033988 master-0 kubenswrapper[31456]: W0312 21:08:59.032404 31456 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 12 21:08:59.033988 master-0 kubenswrapper[31456]: W0312 21:08:59.032407 31456 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 12 21:08:59.033988 master-0 kubenswrapper[31456]: I0312 21:08:59.032502 31456 flags.go:64] FLAG: --address="0.0.0.0" Mar 12 21:08:59.033988 master-0 kubenswrapper[31456]: I0312 21:08:59.032512 31456 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Mar 12 21:08:59.033988 master-0 kubenswrapper[31456]: I0312 21:08:59.032522 31456 flags.go:64] FLAG: --anonymous-auth="true" Mar 12 21:08:59.033988 master-0 kubenswrapper[31456]: I0312 21:08:59.032528 31456 flags.go:64] FLAG: --application-metrics-count-limit="100" Mar 12 21:08:59.033988 master-0 kubenswrapper[31456]: I0312 21:08:59.032534 31456 flags.go:64] FLAG: --authentication-token-webhook="false" Mar 12 21:08:59.033988 master-0 kubenswrapper[31456]: I0312 21:08:59.032539 31456 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Mar 12 21:08:59.033988 master-0 kubenswrapper[31456]: I0312 21:08:59.032545 31456 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Mar 
12 21:08:59.033988 master-0 kubenswrapper[31456]: I0312 21:08:59.032551 31456 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 12 21:08:59.033988 master-0 kubenswrapper[31456]: I0312 21:08:59.032556 31456 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 12 21:08:59.033988 master-0 kubenswrapper[31456]: I0312 21:08:59.032561 31456 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 12 21:08:59.033988 master-0 kubenswrapper[31456]: I0312 21:08:59.032565 31456 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 12 21:08:59.033988 master-0 kubenswrapper[31456]: I0312 21:08:59.032570 31456 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 12 21:08:59.034632 master-0 kubenswrapper[31456]: I0312 21:08:59.032575 31456 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 12 21:08:59.034632 master-0 kubenswrapper[31456]: I0312 21:08:59.032579 31456 flags.go:64] FLAG: --cgroup-root=""
Mar 12 21:08:59.034632 master-0 kubenswrapper[31456]: I0312 21:08:59.032583 31456 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 12 21:08:59.034632 master-0 kubenswrapper[31456]: I0312 21:08:59.032588 31456 flags.go:64] FLAG: --client-ca-file=""
Mar 12 21:08:59.034632 master-0 kubenswrapper[31456]: I0312 21:08:59.032592 31456 flags.go:64] FLAG: --cloud-config=""
Mar 12 21:08:59.034632 master-0 kubenswrapper[31456]: I0312 21:08:59.032596 31456 flags.go:64] FLAG: --cloud-provider=""
Mar 12 21:08:59.034632 master-0 kubenswrapper[31456]: I0312 21:08:59.032600 31456 flags.go:64] FLAG: --cluster-dns="[]"
Mar 12 21:08:59.034632 master-0 kubenswrapper[31456]: I0312 21:08:59.032606 31456 flags.go:64] FLAG: --cluster-domain=""
Mar 12 21:08:59.034632 master-0 kubenswrapper[31456]: I0312 21:08:59.032611 31456 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 12 21:08:59.034632 master-0 kubenswrapper[31456]: I0312 21:08:59.032638 31456 flags.go:64] FLAG: --config-dir=""
Mar 12 21:08:59.034632 master-0 kubenswrapper[31456]: I0312 21:08:59.032644 31456 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 12 21:08:59.034632 master-0 kubenswrapper[31456]: I0312 21:08:59.032650 31456 flags.go:64] FLAG: --container-log-max-files="5"
Mar 12 21:08:59.034632 master-0 kubenswrapper[31456]: I0312 21:08:59.032655 31456 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 12 21:08:59.034632 master-0 kubenswrapper[31456]: I0312 21:08:59.032660 31456 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 12 21:08:59.034632 master-0 kubenswrapper[31456]: I0312 21:08:59.032664 31456 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 12 21:08:59.034632 master-0 kubenswrapper[31456]: I0312 21:08:59.032668 31456 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 12 21:08:59.034632 master-0 kubenswrapper[31456]: I0312 21:08:59.032673 31456 flags.go:64] FLAG: --contention-profiling="false"
Mar 12 21:08:59.034632 master-0 kubenswrapper[31456]: I0312 21:08:59.032678 31456 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 12 21:08:59.034632 master-0 kubenswrapper[31456]: I0312 21:08:59.032683 31456 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 12 21:08:59.034632 master-0 kubenswrapper[31456]: I0312 21:08:59.032688 31456 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 12 21:08:59.034632 master-0 kubenswrapper[31456]: I0312 21:08:59.032692 31456 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 12 21:08:59.034632 master-0 kubenswrapper[31456]: I0312 21:08:59.032697 31456 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 12 21:08:59.034632 master-0 kubenswrapper[31456]: I0312 21:08:59.032701 31456 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 12 21:08:59.034632 master-0 kubenswrapper[31456]: I0312 21:08:59.032706 31456 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 12 21:08:59.034632 master-0 kubenswrapper[31456]: I0312 21:08:59.032713 31456 flags.go:64] FLAG: --enable-load-reader="false"
Mar 12 21:08:59.035268 master-0 kubenswrapper[31456]: I0312 21:08:59.032719 31456 flags.go:64] FLAG: --enable-server="true"
Mar 12 21:08:59.035268 master-0 kubenswrapper[31456]: I0312 21:08:59.032723 31456 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 12 21:08:59.035268 master-0 kubenswrapper[31456]: I0312 21:08:59.032729 31456 flags.go:64] FLAG: --event-burst="100"
Mar 12 21:08:59.035268 master-0 kubenswrapper[31456]: I0312 21:08:59.032734 31456 flags.go:64] FLAG: --event-qps="50"
Mar 12 21:08:59.035268 master-0 kubenswrapper[31456]: I0312 21:08:59.032738 31456 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 12 21:08:59.035268 master-0 kubenswrapper[31456]: I0312 21:08:59.032742 31456 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 12 21:08:59.035268 master-0 kubenswrapper[31456]: I0312 21:08:59.032746 31456 flags.go:64] FLAG: --eviction-hard=""
Mar 12 21:08:59.035268 master-0 kubenswrapper[31456]: I0312 21:08:59.032752 31456 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 12 21:08:59.035268 master-0 kubenswrapper[31456]: I0312 21:08:59.032756 31456 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 12 21:08:59.035268 master-0 kubenswrapper[31456]: I0312 21:08:59.032760 31456 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 12 21:08:59.035268 master-0 kubenswrapper[31456]: I0312 21:08:59.032765 31456 flags.go:64] FLAG: --eviction-soft=""
Mar 12 21:08:59.035268 master-0 kubenswrapper[31456]: I0312 21:08:59.032769 31456 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 12 21:08:59.035268 master-0 kubenswrapper[31456]: I0312 21:08:59.032774 31456 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 12 21:08:59.035268 master-0 kubenswrapper[31456]: I0312 21:08:59.032779 31456 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 12 21:08:59.035268 master-0 kubenswrapper[31456]: I0312 21:08:59.032784 31456 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 12 21:08:59.035268 master-0 kubenswrapper[31456]: I0312 21:08:59.032788 31456 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 12 21:08:59.035268 master-0 kubenswrapper[31456]: I0312 21:08:59.032793 31456 flags.go:64] FLAG: --fail-swap-on="true"
Mar 12 21:08:59.035268 master-0 kubenswrapper[31456]: I0312 21:08:59.032797 31456 flags.go:64] FLAG: --feature-gates=""
Mar 12 21:08:59.035268 master-0 kubenswrapper[31456]: I0312 21:08:59.032818 31456 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 12 21:08:59.035268 master-0 kubenswrapper[31456]: I0312 21:08:59.032822 31456 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 12 21:08:59.035268 master-0 kubenswrapper[31456]: I0312 21:08:59.032827 31456 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 12 21:08:59.035268 master-0 kubenswrapper[31456]: I0312 21:08:59.032832 31456 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 12 21:08:59.035268 master-0 kubenswrapper[31456]: I0312 21:08:59.032837 31456 flags.go:64] FLAG: --healthz-port="10248"
Mar 12 21:08:59.035268 master-0 kubenswrapper[31456]: I0312 21:08:59.032842 31456 flags.go:64] FLAG: --help="false"
Mar 12 21:08:59.035268 master-0 kubenswrapper[31456]: I0312 21:08:59.032846 31456 flags.go:64] FLAG: --hostname-override=""
Mar 12 21:08:59.035268 master-0 kubenswrapper[31456]: I0312 21:08:59.032855 31456 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 12 21:08:59.035972 master-0 kubenswrapper[31456]: I0312 21:08:59.032860 31456 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 12 21:08:59.035972 master-0 kubenswrapper[31456]: I0312 21:08:59.032865 31456 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 12 21:08:59.035972 master-0 kubenswrapper[31456]: I0312 21:08:59.032869 31456 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 12 21:08:59.035972 master-0 kubenswrapper[31456]: I0312 21:08:59.032873 31456 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 12 21:08:59.035972 master-0 kubenswrapper[31456]: I0312 21:08:59.032877 31456 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 12 21:08:59.035972 master-0 kubenswrapper[31456]: I0312 21:08:59.032893 31456 flags.go:64] FLAG: --image-service-endpoint=""
Mar 12 21:08:59.035972 master-0 kubenswrapper[31456]: I0312 21:08:59.032898 31456 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 12 21:08:59.035972 master-0 kubenswrapper[31456]: I0312 21:08:59.032902 31456 flags.go:64] FLAG: --kube-api-burst="100"
Mar 12 21:08:59.035972 master-0 kubenswrapper[31456]: I0312 21:08:59.032907 31456 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 12 21:08:59.035972 master-0 kubenswrapper[31456]: I0312 21:08:59.032914 31456 flags.go:64] FLAG: --kube-api-qps="50"
Mar 12 21:08:59.035972 master-0 kubenswrapper[31456]: I0312 21:08:59.032918 31456 flags.go:64] FLAG: --kube-reserved=""
Mar 12 21:08:59.035972 master-0 kubenswrapper[31456]: I0312 21:08:59.032922 31456 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 12 21:08:59.035972 master-0 kubenswrapper[31456]: I0312 21:08:59.032926 31456 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 12 21:08:59.035972 master-0 kubenswrapper[31456]: I0312 21:08:59.032931 31456 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 12 21:08:59.035972 master-0 kubenswrapper[31456]: I0312 21:08:59.032935 31456 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 12 21:08:59.035972 master-0 kubenswrapper[31456]: I0312 21:08:59.032939 31456 flags.go:64] FLAG: --lock-file=""
Mar 12 21:08:59.035972 master-0 kubenswrapper[31456]: I0312 21:08:59.032943 31456 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 12 21:08:59.035972 master-0 kubenswrapper[31456]: I0312 21:08:59.032947 31456 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 12 21:08:59.035972 master-0 kubenswrapper[31456]: I0312 21:08:59.032952 31456 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 12 21:08:59.035972 master-0 kubenswrapper[31456]: I0312 21:08:59.032959 31456 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 12 21:08:59.035972 master-0 kubenswrapper[31456]: I0312 21:08:59.032963 31456 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 12 21:08:59.035972 master-0 kubenswrapper[31456]: I0312 21:08:59.032967 31456 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 12 21:08:59.035972 master-0 kubenswrapper[31456]: I0312 21:08:59.032972 31456 flags.go:64] FLAG: --logging-format="text"
Mar 12 21:08:59.035972 master-0 kubenswrapper[31456]: I0312 21:08:59.032976 31456 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 12 21:08:59.035972 master-0 kubenswrapper[31456]: I0312 21:08:59.032981 31456 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 12 21:08:59.037037 master-0 kubenswrapper[31456]: I0312 21:08:59.032986 31456 flags.go:64] FLAG: --manifest-url=""
Mar 12 21:08:59.037037 master-0 kubenswrapper[31456]: I0312 21:08:59.032990 31456 flags.go:64] FLAG: --manifest-url-header=""
Mar 12 21:08:59.037037 master-0 kubenswrapper[31456]: I0312 21:08:59.032995 31456 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 12 21:08:59.037037 master-0 kubenswrapper[31456]: I0312 21:08:59.033000 31456 flags.go:64] FLAG: --max-open-files="1000000"
Mar 12 21:08:59.037037 master-0 kubenswrapper[31456]: I0312 21:08:59.033007 31456 flags.go:64] FLAG: --max-pods="110"
Mar 12 21:08:59.037037 master-0 kubenswrapper[31456]: I0312 21:08:59.033011 31456 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 12 21:08:59.037037 master-0 kubenswrapper[31456]: I0312 21:08:59.033018 31456 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 12 21:08:59.037037 master-0 kubenswrapper[31456]: I0312 21:08:59.033022 31456 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 12 21:08:59.037037 master-0 kubenswrapper[31456]: I0312 21:08:59.033027 31456 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 12 21:08:59.037037 master-0 kubenswrapper[31456]: I0312 21:08:59.033033 31456 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 12 21:08:59.037037 master-0 kubenswrapper[31456]: I0312 21:08:59.033038 31456 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 12 21:08:59.037037 master-0 kubenswrapper[31456]: I0312 21:08:59.033043 31456 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 12 21:08:59.037037 master-0 kubenswrapper[31456]: I0312 21:08:59.033059 31456 flags.go:64] FLAG: --node-status-max-images="50"
Mar 12 21:08:59.037037 master-0 kubenswrapper[31456]: I0312 21:08:59.033064 31456 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 12 21:08:59.037037 master-0 kubenswrapper[31456]: I0312 21:08:59.033069 31456 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 12 21:08:59.037037 master-0 kubenswrapper[31456]: I0312 21:08:59.033074 31456 flags.go:64] FLAG: --pod-cidr=""
Mar 12 21:08:59.037037 master-0 kubenswrapper[31456]: I0312 21:08:59.033078 31456 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3"
Mar 12 21:08:59.037037 master-0 kubenswrapper[31456]: I0312 21:08:59.033086 31456 flags.go:64] FLAG: --pod-manifest-path=""
Mar 12 21:08:59.037037 master-0 kubenswrapper[31456]: I0312 21:08:59.033091 31456 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 12 21:08:59.037037 master-0 kubenswrapper[31456]: I0312 21:08:59.033095 31456 flags.go:64] FLAG: --pods-per-core="0"
Mar 12 21:08:59.037037 master-0 kubenswrapper[31456]: I0312 21:08:59.033102 31456 flags.go:64] FLAG: --port="10250"
Mar 12 21:08:59.037037 master-0 kubenswrapper[31456]: I0312 21:08:59.033106 31456 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 12 21:08:59.037037 master-0 kubenswrapper[31456]: I0312 21:08:59.033110 31456 flags.go:64] FLAG: --provider-id=""
Mar 12 21:08:59.037037 master-0 kubenswrapper[31456]: I0312 21:08:59.033114 31456 flags.go:64] FLAG: --qos-reserved=""
Mar 12 21:08:59.037600 master-0 kubenswrapper[31456]: I0312 21:08:59.033119 31456 flags.go:64] FLAG: --read-only-port="10255"
Mar 12 21:08:59.037600 master-0 kubenswrapper[31456]: I0312 21:08:59.033123 31456 flags.go:64] FLAG: --register-node="true"
Mar 12 21:08:59.037600 master-0 kubenswrapper[31456]: I0312 21:08:59.033128 31456 flags.go:64] FLAG: --register-schedulable="true"
Mar 12 21:08:59.037600 master-0 kubenswrapper[31456]: I0312 21:08:59.033132 31456 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 12 21:08:59.037600 master-0 kubenswrapper[31456]: I0312 21:08:59.033140 31456 flags.go:64] FLAG: --registry-burst="10"
Mar 12 21:08:59.037600 master-0 kubenswrapper[31456]: I0312 21:08:59.033144 31456 flags.go:64] FLAG: --registry-qps="5"
Mar 12 21:08:59.037600 master-0 kubenswrapper[31456]: I0312 21:08:59.033149 31456 flags.go:64] FLAG: --reserved-cpus=""
Mar 12 21:08:59.037600 master-0 kubenswrapper[31456]: I0312 21:08:59.033153 31456 flags.go:64] FLAG: --reserved-memory=""
Mar 12 21:08:59.037600 master-0 kubenswrapper[31456]: I0312 21:08:59.033158 31456 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 12 21:08:59.037600 master-0 kubenswrapper[31456]: I0312 21:08:59.033162 31456 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 12 21:08:59.037600 master-0 kubenswrapper[31456]: I0312 21:08:59.033166 31456 flags.go:64] FLAG: --rotate-certificates="false"
Mar 12 21:08:59.037600 master-0 kubenswrapper[31456]: I0312 21:08:59.033171 31456 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 12 21:08:59.037600 master-0 kubenswrapper[31456]: I0312 21:08:59.033176 31456 flags.go:64] FLAG: --runonce="false"
Mar 12 21:08:59.037600 master-0 kubenswrapper[31456]: I0312 21:08:59.033181 31456 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 12 21:08:59.037600 master-0 kubenswrapper[31456]: I0312 21:08:59.033189 31456 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 12 21:08:59.037600 master-0 kubenswrapper[31456]: I0312 21:08:59.033194 31456 flags.go:64] FLAG: --seccomp-default="false"
Mar 12 21:08:59.037600 master-0 kubenswrapper[31456]: I0312 21:08:59.033199 31456 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 12 21:08:59.037600 master-0 kubenswrapper[31456]: I0312 21:08:59.033203 31456 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 12 21:08:59.037600 master-0 kubenswrapper[31456]: I0312 21:08:59.033208 31456 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 12 21:08:59.037600 master-0 kubenswrapper[31456]: I0312 21:08:59.033213 31456 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 12 21:08:59.037600 master-0 kubenswrapper[31456]: I0312 21:08:59.033217 31456 flags.go:64] FLAG: --storage-driver-password="root"
Mar 12 21:08:59.037600 master-0 kubenswrapper[31456]: I0312 21:08:59.033222 31456 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 12 21:08:59.037600 master-0 kubenswrapper[31456]: I0312 21:08:59.033226 31456 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 12 21:08:59.037600 master-0 kubenswrapper[31456]: I0312 21:08:59.033230 31456 flags.go:64] FLAG: --storage-driver-user="root"
Mar 12 21:08:59.037600 master-0 kubenswrapper[31456]: I0312 21:08:59.033235 31456 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 12 21:08:59.038577 master-0 kubenswrapper[31456]: I0312 21:08:59.033239 31456 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 12 21:08:59.038577 master-0 kubenswrapper[31456]: I0312 21:08:59.033243 31456 flags.go:64] FLAG: --system-cgroups=""
Mar 12 21:08:59.038577 master-0 kubenswrapper[31456]: I0312 21:08:59.033247 31456 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 12 21:08:59.038577 master-0 kubenswrapper[31456]: I0312 21:08:59.033255 31456 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 12 21:08:59.038577 master-0 kubenswrapper[31456]: I0312 21:08:59.033259 31456 flags.go:64] FLAG: --tls-cert-file=""
Mar 12 21:08:59.038577 master-0 kubenswrapper[31456]: I0312 21:08:59.033263 31456 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 12 21:08:59.038577 master-0 kubenswrapper[31456]: I0312 21:08:59.033270 31456 flags.go:64] FLAG: --tls-min-version=""
Mar 12 21:08:59.038577 master-0 kubenswrapper[31456]: I0312 21:08:59.033275 31456 flags.go:64] FLAG: --tls-private-key-file=""
Mar 12 21:08:59.038577 master-0 kubenswrapper[31456]: I0312 21:08:59.033280 31456 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 12 21:08:59.038577 master-0 kubenswrapper[31456]: I0312 21:08:59.033284 31456 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 12 21:08:59.038577 master-0 kubenswrapper[31456]: I0312 21:08:59.033288 31456 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 12 21:08:59.038577 master-0 kubenswrapper[31456]: I0312 21:08:59.033292 31456 flags.go:64] FLAG: --v="2"
Mar 12 21:08:59.038577 master-0 kubenswrapper[31456]: I0312 21:08:59.033298 31456 flags.go:64] FLAG: --version="false"
Mar 12 21:08:59.038577 master-0 kubenswrapper[31456]: I0312 21:08:59.033304 31456 flags.go:64] FLAG: --vmodule=""
Mar 12 21:08:59.038577 master-0 kubenswrapper[31456]: I0312 21:08:59.033309 31456 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 12 21:08:59.038577 master-0 kubenswrapper[31456]: I0312 21:08:59.033313 31456 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 12 21:08:59.038577 master-0 kubenswrapper[31456]: W0312 21:08:59.033421 31456 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 12 21:08:59.038577 master-0 kubenswrapper[31456]: W0312 21:08:59.033427 31456 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 12 21:08:59.038577 master-0 kubenswrapper[31456]: W0312 21:08:59.033431 31456 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 12 21:08:59.038577 master-0 kubenswrapper[31456]: W0312 21:08:59.033435 31456 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 12 21:08:59.038577 master-0 kubenswrapper[31456]: W0312 21:08:59.033439 31456 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 12 21:08:59.038577 master-0 kubenswrapper[31456]: W0312 21:08:59.033444 31456 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 12 21:08:59.038577 master-0 kubenswrapper[31456]: W0312 21:08:59.033449 31456 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 12 21:08:59.038577 master-0 kubenswrapper[31456]: W0312 21:08:59.033454 31456 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 12 21:08:59.039228 master-0 kubenswrapper[31456]: W0312 21:08:59.033460 31456 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 12 21:08:59.039228 master-0 kubenswrapper[31456]: W0312 21:08:59.033467 31456 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 12 21:08:59.039228 master-0 kubenswrapper[31456]: W0312 21:08:59.033472 31456 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 12 21:08:59.039228 master-0 kubenswrapper[31456]: W0312 21:08:59.033478 31456 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 12 21:08:59.039228 master-0 kubenswrapper[31456]: W0312 21:08:59.033485 31456 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 12 21:08:59.039228 master-0 kubenswrapper[31456]: W0312 21:08:59.033490 31456 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 12 21:08:59.039228 master-0 kubenswrapper[31456]: W0312 21:08:59.033495 31456 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 12 21:08:59.039228 master-0 kubenswrapper[31456]: W0312 21:08:59.033499 31456 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 12 21:08:59.039228 master-0 kubenswrapper[31456]: W0312 21:08:59.033503 31456 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 12 21:08:59.039228 master-0 kubenswrapper[31456]: W0312 21:08:59.033507 31456 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 12 21:08:59.039228 master-0 kubenswrapper[31456]: W0312 21:08:59.033511 31456 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 12 21:08:59.039228 master-0 kubenswrapper[31456]: W0312 21:08:59.033514 31456 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 12 21:08:59.039228 master-0 kubenswrapper[31456]: W0312 21:08:59.033518 31456 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 12 21:08:59.039228 master-0 kubenswrapper[31456]: W0312 21:08:59.033522 31456 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 12 21:08:59.039228 master-0 kubenswrapper[31456]: W0312 21:08:59.033525 31456 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 12 21:08:59.039228 master-0 kubenswrapper[31456]: W0312 21:08:59.033530 31456 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 12 21:08:59.039228 master-0 kubenswrapper[31456]: W0312 21:08:59.033534 31456 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 12 21:08:59.039228 master-0 kubenswrapper[31456]: W0312 21:08:59.033538 31456 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 12 21:08:59.039228 master-0 kubenswrapper[31456]: W0312 21:08:59.033542 31456 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 12 21:08:59.039708 master-0 kubenswrapper[31456]: W0312 21:08:59.033548 31456 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 12 21:08:59.039708 master-0 kubenswrapper[31456]: W0312 21:08:59.033553 31456 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 12 21:08:59.039708 master-0 kubenswrapper[31456]: W0312 21:08:59.033557 31456 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 12 21:08:59.039708 master-0 kubenswrapper[31456]: W0312 21:08:59.033560 31456 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 12 21:08:59.039708 master-0 kubenswrapper[31456]: W0312 21:08:59.033565 31456 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 12 21:08:59.039708 master-0 kubenswrapper[31456]: W0312 21:08:59.033569 31456 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 12 21:08:59.039708 master-0 kubenswrapper[31456]: W0312 21:08:59.033573 31456 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 12 21:08:59.039708 master-0 kubenswrapper[31456]: W0312 21:08:59.033577 31456 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 12 21:08:59.039708 master-0 kubenswrapper[31456]: W0312 21:08:59.033581 31456 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 12 21:08:59.039708 master-0 kubenswrapper[31456]: W0312 21:08:59.033584 31456 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 12 21:08:59.039708 master-0 kubenswrapper[31456]: W0312 21:08:59.033590 31456 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 12 21:08:59.039708 master-0 kubenswrapper[31456]: W0312 21:08:59.033595 31456 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 12 21:08:59.039708 master-0 kubenswrapper[31456]: W0312 21:08:59.033599 31456 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 12 21:08:59.039708 master-0 kubenswrapper[31456]: W0312 21:08:59.033603 31456 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 12 21:08:59.039708 master-0 kubenswrapper[31456]: W0312 21:08:59.033607 31456 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 12 21:08:59.039708 master-0 kubenswrapper[31456]: W0312 21:08:59.033611 31456 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 12 21:08:59.039708 master-0 kubenswrapper[31456]: W0312 21:08:59.033615 31456 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 12 21:08:59.039708 master-0 kubenswrapper[31456]: W0312 21:08:59.033620 31456 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 12 21:08:59.039708 master-0 kubenswrapper[31456]: W0312 21:08:59.033625 31456 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 12 21:08:59.039708 master-0 kubenswrapper[31456]: W0312 21:08:59.033629 31456 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 12 21:08:59.040311 master-0 kubenswrapper[31456]: W0312 21:08:59.033634 31456 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 12 21:08:59.040311 master-0 kubenswrapper[31456]: W0312 21:08:59.033638 31456 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 12 21:08:59.040311 master-0 kubenswrapper[31456]: W0312 21:08:59.033642 31456 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 12 21:08:59.040311 master-0 kubenswrapper[31456]: W0312 21:08:59.033646 31456 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 12 21:08:59.040311 master-0 kubenswrapper[31456]: W0312 21:08:59.033651 31456 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 12 21:08:59.040311 master-0 kubenswrapper[31456]: W0312 21:08:59.033656 31456 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 12 21:08:59.040311 master-0 kubenswrapper[31456]: W0312 21:08:59.033661 31456 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 12 21:08:59.040311 master-0 kubenswrapper[31456]: W0312 21:08:59.033665 31456 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 12 21:08:59.040311 master-0 kubenswrapper[31456]: W0312 21:08:59.033669 31456 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 12 21:08:59.040311 master-0 kubenswrapper[31456]: W0312 21:08:59.033673 31456 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 12 21:08:59.040311 master-0 kubenswrapper[31456]: W0312 21:08:59.033676 31456 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 12 21:08:59.040311 master-0 kubenswrapper[31456]: W0312 21:08:59.033680 31456 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 12 21:08:59.040311 master-0 kubenswrapper[31456]: W0312 21:08:59.033684 31456 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 12 21:08:59.040311 master-0 kubenswrapper[31456]: W0312 21:08:59.033688 31456 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 12 21:08:59.040311 master-0 kubenswrapper[31456]: W0312 21:08:59.033691 31456 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 12 21:08:59.040311 master-0 kubenswrapper[31456]: W0312 21:08:59.033695 31456 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 12 21:08:59.040311 master-0 kubenswrapper[31456]: W0312 21:08:59.033701 31456 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 12 21:08:59.040311 master-0 kubenswrapper[31456]: W0312 21:08:59.033705 31456 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 12 21:08:59.040311 master-0 kubenswrapper[31456]: W0312 21:08:59.033709 31456 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 12 21:08:59.040823 master-0 kubenswrapper[31456]: W0312 21:08:59.033715 31456 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 12 21:08:59.040823 master-0 kubenswrapper[31456]: W0312 21:08:59.033719 31456 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 12 21:08:59.040823 master-0 kubenswrapper[31456]: W0312 21:08:59.033723 31456 feature_gate.go:330] unrecognized feature gate: Example
Mar 12 21:08:59.040823 master-0 kubenswrapper[31456]: W0312 21:08:59.033729 31456 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 12 21:08:59.040823 master-0 kubenswrapper[31456]: W0312 21:08:59.033733 31456 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 12 21:08:59.040823 master-0 kubenswrapper[31456]: W0312 21:08:59.033736 31456 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 12 21:08:59.040823 master-0 kubenswrapper[31456]: I0312 21:08:59.033743 31456 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 12 21:08:59.040823 master-0 kubenswrapper[31456]: I0312 21:08:59.039936 31456 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 12 21:08:59.040823 master-0 kubenswrapper[31456]: I0312 21:08:59.039985 31456 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 12 21:08:59.040823 master-0 kubenswrapper[31456]: W0312 21:08:59.040056 31456 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 12 21:08:59.040823 master-0 kubenswrapper[31456]: W0312 21:08:59.040063 31456 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 12 21:08:59.040823 master-0 kubenswrapper[31456]: W0312 21:08:59.040067 31456 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 12 21:08:59.040823 master-0 kubenswrapper[31456]: W0312 21:08:59.040072 31456 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 12 21:08:59.040823 master-0 kubenswrapper[31456]: W0312 21:08:59.040079 31456 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 12 21:08:59.040823 master-0 kubenswrapper[31456]: W0312 21:08:59.040084 31456 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 12 21:08:59.040823 master-0 kubenswrapper[31456]: W0312 21:08:59.040087 31456 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 12 21:08:59.041219 master-0 kubenswrapper[31456]: W0312 21:08:59.040092 31456 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 12 21:08:59.041219 master-0 kubenswrapper[31456]: W0312 21:08:59.040096 31456 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 12 21:08:59.041219 master-0 kubenswrapper[31456]: W0312 21:08:59.040100 31456 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 12 21:08:59.041219 master-0 kubenswrapper[31456]: W0312 21:08:59.040103 31456 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 12 21:08:59.041219 master-0 kubenswrapper[31456]: W0312 21:08:59.040107 31456 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 12 21:08:59.041219 master-0 kubenswrapper[31456]: W0312 21:08:59.040111 31456 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 12 21:08:59.041219 master-0 kubenswrapper[31456]: W0312 21:08:59.040116 31456 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 12 21:08:59.041219 master-0 kubenswrapper[31456]: W0312 21:08:59.040121 31456 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 12 21:08:59.041219 master-0 kubenswrapper[31456]: W0312 21:08:59.040125 31456 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 12 21:08:59.041219 master-0 kubenswrapper[31456]: W0312 21:08:59.040129 31456 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 12 21:08:59.041219 master-0 kubenswrapper[31456]: W0312 21:08:59.040133 31456 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 12 21:08:59.041219 master-0 kubenswrapper[31456]: W0312 21:08:59.040137 31456 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 12 21:08:59.041219 master-0 kubenswrapper[31456]: W0312 21:08:59.040142 31456 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 12 21:08:59.041219 master-0 kubenswrapper[31456]: W0312 21:08:59.040147 31456 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 12 21:08:59.041219 master-0 kubenswrapper[31456]: W0312 21:08:59.040151 31456 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 12 21:08:59.041219 master-0 kubenswrapper[31456]: W0312 21:08:59.040155 31456 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 12 21:08:59.041219 master-0 kubenswrapper[31456]: W0312 21:08:59.040159 31456 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 12 21:08:59.041219 master-0 kubenswrapper[31456]: W0312 21:08:59.040163 31456 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 12 21:08:59.041219 master-0 kubenswrapper[31456]: W0312 21:08:59.040167 31456 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 12 21:08:59.042030 master-0 kubenswrapper[31456]: W0312 21:08:59.040171 31456 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 12 21:08:59.042030 master-0 kubenswrapper[31456]: W0312 21:08:59.040175 31456 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 12 21:08:59.042030 master-0 kubenswrapper[31456]: W0312 21:08:59.040178 31456 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 12 21:08:59.042030 master-0 kubenswrapper[31456]: W0312 21:08:59.040182 31456 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 12 21:08:59.042030 master-0 kubenswrapper[31456]: W0312 21:08:59.040185 31456 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 12 21:08:59.042030 master-0 kubenswrapper[31456]: W0312 21:08:59.040190 31456 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 12 21:08:59.042030 master-0 kubenswrapper[31456]: W0312 21:08:59.040194 31456 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 12 21:08:59.042030 master-0 kubenswrapper[31456]: W0312 21:08:59.040197 31456 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 12 21:08:59.042030 master-0 kubenswrapper[31456]: W0312 21:08:59.040201 31456 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 12 21:08:59.042030 master-0 kubenswrapper[31456]: W0312 21:08:59.040205 31456 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 12 21:08:59.042030 master-0 kubenswrapper[31456]: W0312 21:08:59.040209 31456 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 12 21:08:59.042030 master-0 kubenswrapper[31456]: W0312 21:08:59.040213 31456 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 12 21:08:59.042030 master-0 kubenswrapper[31456]: W0312 21:08:59.040216 31456 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 12 21:08:59.042030 master-0 kubenswrapper[31456]: W0312 21:08:59.040220 31456 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 12 21:08:59.042030 master-0 kubenswrapper[31456]: W0312 21:08:59.040223 31456 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 12 21:08:59.042030 master-0 kubenswrapper[31456]: W0312 21:08:59.040227 31456 feature_gate.go:330] unrecognized feature gate: Example
Mar 12 21:08:59.042030 master-0 kubenswrapper[31456]: W0312 21:08:59.040230 31456 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 12 21:08:59.042030 master-0 kubenswrapper[31456]: W0312 21:08:59.040234 31456 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 12 21:08:59.042030 master-0 kubenswrapper[31456]: W0312 21:08:59.040237 31456 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 12 21:08:59.042030 master-0 kubenswrapper[31456]: W0312 21:08:59.040273 31456 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 12 21:08:59.042654 master-0 kubenswrapper[31456]: W0312 21:08:59.040277 31456 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 12 21:08:59.042654 master-0 kubenswrapper[31456]: W0312 21:08:59.040281 31456 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 12 21:08:59.042654 master-0 kubenswrapper[31456]: W0312 21:08:59.040285 31456 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 12 21:08:59.042654 master-0 kubenswrapper[31456]: W0312 21:08:59.040289 31456 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 12 21:08:59.042654 master-0 kubenswrapper[31456]: W0312 21:08:59.040307 31456 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 12 21:08:59.042654 master-0 kubenswrapper[31456]: W0312 21:08:59.040313 31456 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 12 21:08:59.042654 master-0 kubenswrapper[31456]: W0312 21:08:59.040317 31456 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 12 21:08:59.042654 master-0 kubenswrapper[31456]: W0312 21:08:59.040321 31456 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 12 21:08:59.042654 master-0 kubenswrapper[31456]: W0312 21:08:59.040325 31456 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 12 21:08:59.042654 master-0 kubenswrapper[31456]: W0312 21:08:59.040330 31456 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 12 21:08:59.042654 master-0 kubenswrapper[31456]: W0312 21:08:59.040333 31456 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 12 21:08:59.042654 master-0 kubenswrapper[31456]: W0312 21:08:59.040337 31456 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 12 21:08:59.042654 master-0 kubenswrapper[31456]: W0312 21:08:59.040341 31456 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 12 21:08:59.042654 master-0
kubenswrapper[31456]: W0312 21:08:59.040345 31456 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 12 21:08:59.042654 master-0 kubenswrapper[31456]: W0312 21:08:59.040349 31456 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 12 21:08:59.042654 master-0 kubenswrapper[31456]: W0312 21:08:59.040352 31456 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 12 21:08:59.042654 master-0 kubenswrapper[31456]: W0312 21:08:59.040356 31456 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 12 21:08:59.042654 master-0 kubenswrapper[31456]: W0312 21:08:59.040360 31456 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 12 21:08:59.042654 master-0 kubenswrapper[31456]: W0312 21:08:59.040364 31456 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 12 21:08:59.042654 master-0 kubenswrapper[31456]: W0312 21:08:59.040367 31456 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 12 21:08:59.043164 master-0 kubenswrapper[31456]: W0312 21:08:59.040391 31456 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 12 21:08:59.043164 master-0 kubenswrapper[31456]: W0312 21:08:59.040396 31456 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 12 21:08:59.043164 master-0 kubenswrapper[31456]: W0312 21:08:59.040400 31456 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 12 21:08:59.043164 master-0 kubenswrapper[31456]: W0312 21:08:59.040404 31456 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 12 21:08:59.043164 master-0 kubenswrapper[31456]: W0312 21:08:59.040408 31456 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 12 21:08:59.043164 master-0 kubenswrapper[31456]: W0312 21:08:59.040412 31456 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 12 21:08:59.043164 master-0 kubenswrapper[31456]: I0312 21:08:59.040419 31456 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 12 21:08:59.043164 master-0 kubenswrapper[31456]: W0312 21:08:59.040688 31456 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 12 21:08:59.043164 master-0 kubenswrapper[31456]: W0312 21:08:59.040697 31456 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 12 21:08:59.043164 master-0 kubenswrapper[31456]: W0312 21:08:59.040701 31456 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 12 21:08:59.043164 master-0 kubenswrapper[31456]: W0312 21:08:59.040705 31456 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 12 21:08:59.043164 master-0 kubenswrapper[31456]: W0312 21:08:59.040709 31456 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 12 21:08:59.043164 master-0 kubenswrapper[31456]: W0312 21:08:59.040713 31456 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 12 21:08:59.043164 master-0 kubenswrapper[31456]: W0312 21:08:59.040717 31456 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 12 21:08:59.043164 master-0 kubenswrapper[31456]: W0312 21:08:59.040721 31456 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 12 21:08:59.043544 master-0 kubenswrapper[31456]: W0312 21:08:59.040724 31456 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 12 21:08:59.043544 master-0 kubenswrapper[31456]: W0312 21:08:59.040728 31456 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 12 21:08:59.043544 master-0 kubenswrapper[31456]: W0312 21:08:59.040797 31456 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 12 21:08:59.043544 master-0 kubenswrapper[31456]: W0312 21:08:59.040801 31456 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 12 21:08:59.043544 master-0 kubenswrapper[31456]: W0312 21:08:59.040830 31456 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 12 21:08:59.043544 master-0 kubenswrapper[31456]: W0312 21:08:59.040836 31456 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 12 21:08:59.043544 master-0 kubenswrapper[31456]: W0312 21:08:59.040842 31456 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 12 21:08:59.043544 master-0 kubenswrapper[31456]: W0312 21:08:59.040846 31456 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 12 21:08:59.043544 master-0 kubenswrapper[31456]: W0312 21:08:59.040851 31456 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 12 21:08:59.043544 master-0 kubenswrapper[31456]: W0312 21:08:59.040856 31456 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 12 21:08:59.043544 master-0 kubenswrapper[31456]: W0312 21:08:59.040888 31456 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 12 21:08:59.043544 master-0 kubenswrapper[31456]: W0312 21:08:59.040892 31456 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 12 21:08:59.043544 master-0 kubenswrapper[31456]: W0312 21:08:59.040896 31456 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 12 21:08:59.043544 master-0 kubenswrapper[31456]: W0312 21:08:59.040899 31456 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 12 21:08:59.043544 master-0 kubenswrapper[31456]: W0312 21:08:59.040903 31456 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 12 21:08:59.043544 master-0 kubenswrapper[31456]: W0312 21:08:59.040907 31456 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 12 21:08:59.043544 master-0 kubenswrapper[31456]: W0312 21:08:59.040910 31456 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 12 21:08:59.043544 master-0 kubenswrapper[31456]: W0312 21:08:59.040915 31456 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 12 21:08:59.043544 master-0 kubenswrapper[31456]: W0312 21:08:59.040919 31456 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 12 21:08:59.044114 master-0 kubenswrapper[31456]: W0312 21:08:59.040923 31456 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 12 21:08:59.044114 master-0 kubenswrapper[31456]: W0312 21:08:59.040927 31456 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 12 21:08:59.044114 master-0 kubenswrapper[31456]: W0312 21:08:59.040931 31456 feature_gate.go:330] unrecognized feature gate: Example
Mar 12 21:08:59.044114 master-0 kubenswrapper[31456]: W0312 21:08:59.040934 31456 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 12 21:08:59.044114 master-0 kubenswrapper[31456]: W0312 21:08:59.040964 31456 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 12 21:08:59.044114 master-0 kubenswrapper[31456]: W0312 21:08:59.040970 31456 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 12 21:08:59.044114 master-0 kubenswrapper[31456]: W0312 21:08:59.040975 31456 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 12 21:08:59.044114 master-0 kubenswrapper[31456]: W0312 21:08:59.040979 31456 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 12 21:08:59.044114 master-0 kubenswrapper[31456]: W0312 21:08:59.040983 31456 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 12 21:08:59.044114 master-0 kubenswrapper[31456]: W0312 21:08:59.040986 31456 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 12 21:08:59.044114 master-0 kubenswrapper[31456]: W0312 21:08:59.040991 31456 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 12 21:08:59.044114 master-0 kubenswrapper[31456]: W0312 21:08:59.040996 31456 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 12 21:08:59.044114 master-0 kubenswrapper[31456]: W0312 21:08:59.041000 31456 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 12 21:08:59.044114 master-0 kubenswrapper[31456]: W0312 21:08:59.041004 31456 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 12 21:08:59.044114 master-0 kubenswrapper[31456]: W0312 21:08:59.041008 31456 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 12 21:08:59.044114 master-0 kubenswrapper[31456]: W0312 21:08:59.041013 31456 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 12 21:08:59.044114 master-0 kubenswrapper[31456]: W0312 21:08:59.041018 31456 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 12 21:08:59.044114 master-0 kubenswrapper[31456]: W0312 21:08:59.041046 31456 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 12 21:08:59.044114 master-0 kubenswrapper[31456]: W0312 21:08:59.041051 31456 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 12 21:08:59.044571 master-0 kubenswrapper[31456]: W0312 21:08:59.041055 31456 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 12 21:08:59.044571 master-0 kubenswrapper[31456]: W0312 21:08:59.041060 31456 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 12 21:08:59.044571 master-0 kubenswrapper[31456]: W0312 21:08:59.041063 31456 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 12 21:08:59.044571 master-0 kubenswrapper[31456]: W0312 21:08:59.041067 31456 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 12 21:08:59.044571 master-0 kubenswrapper[31456]: W0312 21:08:59.041071 31456 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 12 21:08:59.044571 master-0 kubenswrapper[31456]: W0312 21:08:59.041075 31456 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 12 21:08:59.044571 master-0 kubenswrapper[31456]: W0312 21:08:59.041079 31456 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 12 21:08:59.044571 master-0 kubenswrapper[31456]: W0312 21:08:59.041083 31456 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 12 21:08:59.044571 master-0 kubenswrapper[31456]: W0312 21:08:59.041087 31456 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 12 21:08:59.044571 master-0 kubenswrapper[31456]: W0312 21:08:59.041090 31456 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 12 21:08:59.044571 master-0 kubenswrapper[31456]: W0312 21:08:59.041094 31456 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 12 21:08:59.044571 master-0 kubenswrapper[31456]: W0312 21:08:59.041098 31456 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 12 21:08:59.044571 master-0 kubenswrapper[31456]: W0312 21:08:59.041125 31456 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 12 21:08:59.044571 master-0 kubenswrapper[31456]: W0312 21:08:59.041129 31456 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 12 21:08:59.044571 master-0 kubenswrapper[31456]: W0312 21:08:59.041133 31456 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 12 21:08:59.044571 master-0 kubenswrapper[31456]: W0312 21:08:59.041137 31456 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 12 21:08:59.044571 master-0 kubenswrapper[31456]: W0312 21:08:59.041142 31456 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 12 21:08:59.044571 master-0 kubenswrapper[31456]: W0312 21:08:59.041145 31456 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 12 21:08:59.044571 master-0 kubenswrapper[31456]: W0312 21:08:59.041149 31456 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 12 21:08:59.044571 master-0 kubenswrapper[31456]: W0312 21:08:59.041152 31456 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 12 21:08:59.045155 master-0 kubenswrapper[31456]: W0312 21:08:59.041156 31456 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 12 21:08:59.045155 master-0 kubenswrapper[31456]: W0312 21:08:59.041160 31456 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 12 21:08:59.045155 master-0 kubenswrapper[31456]: W0312 21:08:59.041163 31456 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 12 21:08:59.045155 master-0 kubenswrapper[31456]: W0312 21:08:59.041168 31456 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 12 21:08:59.045155 master-0 kubenswrapper[31456]: W0312 21:08:59.041171 31456 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 12 21:08:59.045155 master-0 kubenswrapper[31456]: W0312 21:08:59.041175 31456 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 12 21:08:59.045155 master-0 kubenswrapper[31456]: I0312 21:08:59.041205 31456 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 12 21:08:59.045155 master-0 kubenswrapper[31456]: I0312 21:08:59.041449 31456 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 12 21:08:59.045155 master-0 kubenswrapper[31456]: I0312 21:08:59.043018 31456 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Mar 12 21:08:59.045155 master-0 kubenswrapper[31456]: I0312 21:08:59.043105 31456 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 12 21:08:59.045155 master-0 kubenswrapper[31456]: I0312 21:08:59.043328 31456 server.go:997] "Starting client certificate rotation"
Mar 12 21:08:59.045155 master-0 kubenswrapper[31456]: I0312 21:08:59.043338 31456 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 12 21:08:59.045155 master-0 kubenswrapper[31456]: I0312 21:08:59.043485 31456 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-13 20:40:02 +0000 UTC, rotation deadline is 2026-03-13 16:45:49.724201337 +0000 UTC
Mar 12 21:08:59.045483 master-0 kubenswrapper[31456]: I0312 21:08:59.043570 31456 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 19h36m50.680634069s for next certificate rotation
Mar 12 21:08:59.045483 master-0 kubenswrapper[31456]: I0312 21:08:59.044080 31456 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 12 21:08:59.045895 master-0 kubenswrapper[31456]: I0312 21:08:59.045760 31456 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 12 21:08:59.048505 master-0 kubenswrapper[31456]: I0312 21:08:59.048475 31456 log.go:25] "Validated CRI v1 runtime API"
Mar 12 21:08:59.052897 master-0 kubenswrapper[31456]: I0312 21:08:59.052863 31456 log.go:25] "Validated CRI v1 image API"
Mar 12 21:08:59.053948 master-0 kubenswrapper[31456]: I0312 21:08:59.053914 31456 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 12 21:08:59.063438 master-0 kubenswrapper[31456]: I0312 21:08:59.063380 31456 fs.go:135] Filesystem UUIDs: map[6486df99-a83a-4de4-8a94-6816f327ffeb:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4]
Mar 12 21:08:59.064432 master-0 kubenswrapper[31456]: I0312 21:08:59.063424 31456 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0f3550a8aec9a486ca0cee3183a0d557f3a6f7dd69b026fe601996e8ee871591/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0f3550a8aec9a486ca0cee3183a0d557f3a6f7dd69b026fe601996e8ee871591/userdata/shm major:0 minor:834 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/12893a728732446f94ca8814579a35744128ccd4319c3c765ac2be173f953384/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/12893a728732446f94ca8814579a35744128ccd4319c3c765ac2be173f953384/userdata/shm major:0 minor:773 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/12fa39eea6eac82ab52e3e2f0cc03926c83f1f0666197d18963fd6a4f403e0a3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/12fa39eea6eac82ab52e3e2f0cc03926c83f1f0666197d18963fd6a4f403e0a3/userdata/shm major:0 minor:898 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1390b30c39ad63783734786156383bb52543e66dbc0baed3a61e8662ecc9eb73/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1390b30c39ad63783734786156383bb52543e66dbc0baed3a61e8662ecc9eb73/userdata/shm major:0 minor:279 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/17a28fbbb10b9b7c1461bf619827eeb217a3aec9b00b20b1cfd3fdd960efb363/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/17a28fbbb10b9b7c1461bf619827eeb217a3aec9b00b20b1cfd3fdd960efb363/userdata/shm major:0 minor:757 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/201b5e76d89b86f520d80ea9c46f6a7725c7ca002a8f03f0377c76479fd51041/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/201b5e76d89b86f520d80ea9c46f6a7725c7ca002a8f03f0377c76479fd51041/userdata/shm major:0 minor:471 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2367b2036b6ee449144934121f0846ae9e3677f2ee334526852b810631391c36/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2367b2036b6ee449144934121f0846ae9e3677f2ee334526852b810631391c36/userdata/shm major:0 minor:620 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2ab45bc6351d4ec7baa95f91503a2501083a98d20ff063951989a4f266486d70/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2ab45bc6351d4ec7baa95f91503a2501083a98d20ff063951989a4f266486d70/userdata/shm major:0 minor:253 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2fe791136ae6341fcef221b6feb3d2b2b4ae3ce3632fb3ef2ce720ffd2630304/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2fe791136ae6341fcef221b6feb3d2b2b4ae3ce3632fb3ef2ce720ffd2630304/userdata/shm major:0 minor:420 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/305e45867f0f5c512d8dca3c39de15088c17eab90b2969aafd739643c4b112ce/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/305e45867f0f5c512d8dca3c39de15088c17eab90b2969aafd739643c4b112ce/userdata/shm major:0 minor:93 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/334e8afc68a931f6350a0d282fa03b4333bfc31875bef1101770c4d5b423d760/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/334e8afc68a931f6350a0d282fa03b4333bfc31875bef1101770c4d5b423d760/userdata/shm major:0 minor:373 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/35cbca359bb8cc6540d875e41fda798cb28c0b21e42a0439c798f577e385a0d1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/35cbca359bb8cc6540d875e41fda798cb28c0b21e42a0439c798f577e385a0d1/userdata/shm major:0 minor:765 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/369b6220e099e8fc73df11fb51225951b71880fdba54a4afd54d65d778f6257a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/369b6220e099e8fc73df11fb51225951b71880fdba54a4afd54d65d778f6257a/userdata/shm major:0 minor:443 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3f2fe9b256b0661c08a4a3ada19e5a95335c69cff21bdc38412e044b0f329672/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3f2fe9b256b0661c08a4a3ada19e5a95335c69cff21bdc38412e044b0f329672/userdata/shm major:0 minor:1020 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/40ee9bfc2fa73ad9bbc5b48cb8e7af6a3e5d2c39fc5036821437c7ea979f7a69/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/40ee9bfc2fa73ad9bbc5b48cb8e7af6a3e5d2c39fc5036821437c7ea979f7a69/userdata/shm major:0 minor:142 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/41cf73b537e290a684ef705b807efabb2227fb4edc604539b559ade7d235fcf5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/41cf73b537e290a684ef705b807efabb2227fb4edc604539b559ade7d235fcf5/userdata/shm major:0 minor:331 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/46d0cbedd7c9d9c9334e86f38207707e87d2d8302b543614490d2bc6b93e5df4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/46d0cbedd7c9d9c9334e86f38207707e87d2d8302b543614490d2bc6b93e5df4/userdata/shm major:0 minor:839 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/480ecceaa13fbfede6f31bb888fba0e4599aa0266514be4fa32d258ea85189de/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/480ecceaa13fbfede6f31bb888fba0e4599aa0266514be4fa32d258ea85189de/userdata/shm major:0 minor:242 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4c950507e89f9d50ecc81fde55a0e288bca97183fc18e65a4bf636fb9e195662/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4c950507e89f9d50ecc81fde55a0e288bca97183fc18e65a4bf636fb9e195662/userdata/shm major:0 minor:1196 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4f36004c9ae01a89eb15126614217e75dcc8e3c3bf6df3d63d91e6a8a9b96517/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4f36004c9ae01a89eb15126614217e75dcc8e3c3bf6df3d63d91e6a8a9b96517/userdata/shm major:0 minor:100 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/565b353628a1ea63b479d26fa571cd76b79a30c51d66ca013ff8e18be2cee52e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/565b353628a1ea63b479d26fa571cd76b79a30c51d66ca013ff8e18be2cee52e/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/58853bb7c55e4f38a99ccf6eb1718fea0482d914d13a64cd68997b04600a597d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/58853bb7c55e4f38a99ccf6eb1718fea0482d914d13a64cd68997b04600a597d/userdata/shm major:0 minor:245 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/5e4d5da2d0ad5dc2858d68d96b482697435e191e20036d664e457ef5572ac29e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5e4d5da2d0ad5dc2858d68d96b482697435e191e20036d664e457ef5572ac29e/userdata/shm major:0 minor:523 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/61b0f018a3d165e925dd9889884b291a368122b4453e40fac0dc068c3a518630/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/61b0f018a3d165e925dd9889884b291a368122b4453e40fac0dc068c3a518630/userdata/shm major:0 minor:382 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6353db57cf3b1f293a822286253318b9d39e924d2e8facf90ba120b1780e8395/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6353db57cf3b1f293a822286253318b9d39e924d2e8facf90ba120b1780e8395/userdata/shm major:0 minor:42 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/64bbce37fffa0363fa6b0cb6661a450dd4f178dfa993fa7e87ca9427175696e1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/64bbce37fffa0363fa6b0cb6661a450dd4f178dfa993fa7e87ca9427175696e1/userdata/shm major:0 minor:843 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6919d90a2e2669ba0985487b4cab45d215f7a919ba3e052db5e778a615204f87/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6919d90a2e2669ba0985487b4cab45d215f7a919ba3e052db5e778a615204f87/userdata/shm major:0 minor:424 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6b1f470bfc702853e69b48b7d0f79deb1d8d72a0d84adbdf6326a6040a96126e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6b1f470bfc702853e69b48b7d0f79deb1d8d72a0d84adbdf6326a6040a96126e/userdata/shm major:0 minor:633 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/6d3cc45d111f33e3f3fcc00ad24e6a827694e4469e606ceb048673100ef08c81/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6d3cc45d111f33e3f3fcc00ad24e6a827694e4469e606ceb048673100ef08c81/userdata/shm major:0 minor:383 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6f73967ae1577400fe9f88cbace8a06fad8c0f1241e87ba67ef6053882fba199/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6f73967ae1577400fe9f88cbace8a06fad8c0f1241e87ba67ef6053882fba199/userdata/shm major:0 minor:1090 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/82318439026f9141cf283c68c9e568172986f95b3ac1b221e6be4eb35afea5e2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/82318439026f9141cf283c68c9e568172986f95b3ac1b221e6be4eb35afea5e2/userdata/shm major:0 minor:465 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/823ddb02eb52a72270afe5bcbabb63c3bf31ccf8ea0e97a1b51cf8b0885ea699/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/823ddb02eb52a72270afe5bcbabb63c3bf31ccf8ea0e97a1b51cf8b0885ea699/userdata/shm major:0 minor:258 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/82c567fab92f73cc652671757659cec0bf4fd8aeb8e6762d7ba85dd0fa1eb67e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/82c567fab92f73cc652671757659cec0bf4fd8aeb8e6762d7ba85dd0fa1eb67e/userdata/shm major:0 minor:240 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8436e30f10a58f1975835cc423f1f4b55df282dbfa2eb60a4b2dbe459e6cb442/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8436e30f10a58f1975835cc423f1f4b55df282dbfa2eb60a4b2dbe459e6cb442/userdata/shm major:0 minor:612 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/85f9c6fdf5bd5b95a4e9ca273a39f24bdd11f231f86bdf7cf1f6b3ef19542031/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/85f9c6fdf5bd5b95a4e9ca273a39f24bdd11f231f86bdf7cf1f6b3ef19542031/userdata/shm major:0 minor:328 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/873fdfa9ac893a2fcdda2a0631dc6e4eee04d1b74ee51efc77199a0762ee41f6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/873fdfa9ac893a2fcdda2a0631dc6e4eee04d1b74ee51efc77199a0762ee41f6/userdata/shm major:0 minor:81 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8792e1c546b62b1a483dc750f90553c923da596394a484fb6a82db67b2323633/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8792e1c546b62b1a483dc750f90553c923da596394a484fb6a82db67b2323633/userdata/shm major:0 minor:581 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/898949022ca2ee68db161a1e164f2382a1563f2d65322832aa8c78dd1630a7b1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/898949022ca2ee68db161a1e164f2382a1563f2d65322832aa8c78dd1630a7b1/userdata/shm major:0 minor:792 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/97b35cbaeb5726da86bcc4b7893b21ef73fbc6ccdec24f0c3f1962ec85e18df4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/97b35cbaeb5726da86bcc4b7893b21ef73fbc6ccdec24f0c3f1962ec85e18df4/userdata/shm major:0 minor:288 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9c3da632c5f18897e9ef4fc639ad267aa15c88d97788e82ab67a1bdff6b3ccb6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9c3da632c5f18897e9ef4fc639ad267aa15c88d97788e82ab67a1bdff6b3ccb6/userdata/shm major:0 minor:539 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/9fe52a43f1e5ba1f28f24b6e5dc055fff1fcd846370585df5e4104b5c4279d2e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9fe52a43f1e5ba1f28f24b6e5dc055fff1fcd846370585df5e4104b5c4279d2e/userdata/shm major:0 minor:993 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a1961e84ee3c3ec3f1933eb0bcae9c2d6f72599a10fb64dc194d15bf1b838126/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a1961e84ee3c3ec3f1933eb0bcae9c2d6f72599a10fb64dc194d15bf1b838126/userdata/shm major:0 minor:614 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a2cd6729990b276c87e661d147e85e91d6d87584a9d3a473b3bb2dc19de5c406/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a2cd6729990b276c87e661d147e85e91d6d87584a9d3a473b3bb2dc19de5c406/userdata/shm major:0 minor:1014 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a5615eeaf32fd2c079e657b23ae7216d539735aa3d68b4892382d2e003032d83/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a5615eeaf32fd2c079e657b23ae7216d539735aa3d68b4892382d2e003032d83/userdata/shm major:0 minor:235 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a8a8fe5d5bb4822dd7daf58bc0b49057e47a6aa6fcd9e303e14168c98652cb42/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a8a8fe5d5bb4822dd7daf58bc0b49057e47a6aa6fcd9e303e14168c98652cb42/userdata/shm major:0 minor:841 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a9ba476328193f4cef8e964926dcec3d1d9ce3f4dd043deca9d859ee90a08d2e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a9ba476328193f4cef8e964926dcec3d1d9ce3f4dd043deca9d859ee90a08d2e/userdata/shm major:0 minor:617 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/aa41b0d7c32641cd054893d0403c77199788601eccf56bdc2a5e82822618fbea/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/aa41b0d7c32641cd054893d0403c77199788601eccf56bdc2a5e82822618fbea/userdata/shm major:0 minor:415 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ab3264a789b92ca41d23ea4b05704ed36eafff91e5d534902cad1c3bfa2f9b9e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ab3264a789b92ca41d23ea4b05704ed36eafff91e5d534902cad1c3bfa2f9b9e/userdata/shm major:0 minor:247 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/abeff81e503300fd28292fa3a775f0ca878a822311085f8ea3036c4d769c1e10/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/abeff81e503300fd28292fa3a775f0ca878a822311085f8ea3036c4d769c1e10/userdata/shm major:0 minor:618 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ad71740d3e827c48a8ba7f63410cca1f844bad16f5548efadd42e759d9c9b402/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ad71740d3e827c48a8ba7f63410cca1f844bad16f5548efadd42e759d9c9b402/userdata/shm major:0 minor:1085 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b6f3e501ba06ed994745a6acdc066748befa97da97704898903460cb6ea2f103/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b6f3e501ba06ed994745a6acdc066748befa97da97704898903460cb6ea2f103/userdata/shm major:0 minor:266 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b851c1c34b6e9c4cbd3df824f0b5a05e417c5cb1b92ad2b7f01061d2a5c5d6b3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b851c1c34b6e9c4cbd3df824f0b5a05e417c5cb1b92ad2b7f01061d2a5c5d6b3/userdata/shm major:0 minor:543 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/b9e3c21b0a8fb441272236b28d851d401b15830eadb4fa9c4634ebc7e46a4354/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b9e3c21b0a8fb441272236b28d851d401b15830eadb4fa9c4634ebc7e46a4354/userdata/shm major:0 minor:720 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bc2a01a11374dd8c2befdb90180bc8b98e8fb814dfdade15e6058739f337ecd2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bc2a01a11374dd8c2befdb90180bc8b98e8fb814dfdade15e6058739f337ecd2/userdata/shm major:0 minor:129 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bc595277804629f6ce8a44c0869ea22a63cd054ea4073256f850bdf1615f38cf/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bc595277804629f6ce8a44c0869ea22a63cd054ea4073256f850bdf1615f38cf/userdata/shm major:0 minor:1046 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bc93b3cd44963703c77eaa6364e36c15a950d185dbccf5b3377bd9dda6a701b9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bc93b3cd44963703c77eaa6364e36c15a950d185dbccf5b3377bd9dda6a701b9/userdata/shm major:0 minor:1058 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bf1fca480b54d4cfe929b5e83abff120bff7b90a008395758afbaeaea08fe4d6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bf1fca480b54d4cfe929b5e83abff120bff7b90a008395758afbaeaea08fe4d6/userdata/shm major:0 minor:239 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c3b62ea86d8f9e58d8904eae05a729e79a10c095aa97e46111824c4941e548aa/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c3b62ea86d8f9e58d8904eae05a729e79a10c095aa97e46111824c4941e548aa/userdata/shm major:0 minor:1140 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/c4103685c4d0722261aeabd4bc116d1842263bbc5e10dfb2b17ca8f9a32f7e85/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c4103685c4d0722261aeabd4bc116d1842263bbc5e10dfb2b17ca8f9a32f7e85/userdata/shm major:0 minor:287 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c5a1c27c4b2c6ff820b190b8052ccd7411bb25c93bd0787d8acd418bb486bfe0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c5a1c27c4b2c6ff820b190b8052ccd7411bb25c93bd0787d8acd418bb486bfe0/userdata/shm major:0 minor:119 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ce789d8b3134f292701ad6a9879595b336f1a9ddf70665a346e7b380d821900d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ce789d8b3134f292701ad6a9879595b336f1a9ddf70665a346e7b380d821900d/userdata/shm major:0 minor:619 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d35f6aa2489bfe5ece464bdc50b627c81cafeea69d0bf73d6d68ef8609126cf5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d35f6aa2489bfe5ece464bdc50b627c81cafeea69d0bf73d6d68ef8609126cf5/userdata/shm major:0 minor:587 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d50dfd713474f3f9326230f15b9aa86b517e198f4cbc3bcfca21ce09a517313c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d50dfd713474f3f9326230f15b9aa86b517e198f4cbc3bcfca21ce09a517313c/userdata/shm major:0 minor:1081 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d7af2bce33483a4223279822e6e5d573080c8f741586108efbaab14ea100783b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d7af2bce33483a4223279822e6e5d573080c8f741586108efbaab14ea100783b/userdata/shm major:0 minor:914 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/dbdf068459da915aaa15b95a36d6ccf7790078f4c1daee68e40bbaf77ad0787e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dbdf068459da915aaa15b95a36d6ccf7790078f4c1daee68e40bbaf77ad0787e/userdata/shm major:0 minor:260 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dc9a8ab3dbf9f510346d66800b49bfb55e672501ce824087dcdec36983ec6646/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dc9a8ab3dbf9f510346d66800b49bfb55e672501ce824087dcdec36983ec6646/userdata/shm major:0 minor:830 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dceda9f22432bfb30ffe8ed6d05ecae6347a12a0c13f74fa12350cf55152eae6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dceda9f22432bfb30ffe8ed6d05ecae6347a12a0c13f74fa12350cf55152eae6/userdata/shm major:0 minor:363 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dd04b8d751040cd7b439f04efd47f1ce311ca66ebabc5940831335b95351810c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dd04b8d751040cd7b439f04efd47f1ce311ca66ebabc5940831335b95351810c/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e75e7b353307791eba0dce2c76a1443a45ff7401d92e0d636bcfdc09677d8a67/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e75e7b353307791eba0dce2c76a1443a45ff7401d92e0d636bcfdc09677d8a67/userdata/shm major:0 minor:104 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ea7954299aa7bc681bbf2b7473af9292483dacae799b21a6511a23f7d0fb2fd7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ea7954299aa7bc681bbf2b7473af9292483dacae799b21a6511a23f7d0fb2fd7/userdata/shm major:0 minor:1018 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/edf68201b8db3425cf21f5fe04a38b1fb9194e82ba3d64c623597064ff3f5fa4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/edf68201b8db3425cf21f5fe04a38b1fb9194e82ba3d64c623597064ff3f5fa4/userdata/shm major:0 minor:777 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f32413943fd7e46b94ba71c016cbccc87f018a39f90dbf119089416f4d147bd9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f32413943fd7e46b94ba71c016cbccc87f018a39f90dbf119089416f4d147bd9/userdata/shm major:0 minor:769 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f3a6366fc7a8173b37b93da658f97b0f0f73d75e238205a99ed16b96913fe11f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f3a6366fc7a8173b37b93da658f97b0f0f73d75e238205a99ed16b96913fe11f/userdata/shm major:0 minor:284 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f3fa0bfd8e72d02ef09b3d76a758bf4cc154e7ad921d66404e7db2340d535749/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f3fa0bfd8e72d02ef09b3d76a758bf4cc154e7ad921d66404e7db2340d535749/userdata/shm major:0 minor:814 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f4b0dd69b886e5f463ddbfe21af30a9ab10c6d6220d953b37096923c42ae0c57/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f4b0dd69b886e5f463ddbfe21af30a9ab10c6d6220d953b37096923c42ae0c57/userdata/shm major:0 minor:844 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f6412ec366e621f5d99b6ef5fdb5da3a73dfb0709a661b8764731c1f9e4f0f11/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f6412ec366e621f5d99b6ef5fdb5da3a73dfb0709a661b8764731c1f9e4f0f11/userdata/shm major:0 minor:832 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/fafb7230532430a0db8a7bc3a9035465334c92f98efee0c32c29c3f4d6ecbcfd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fafb7230532430a0db8a7bc3a9035465334c92f98efee0c32c29c3f4d6ecbcfd/userdata/shm major:0 minor:378 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/02649264-040a-41a6-9a41-8bf6416c68ff/volumes/kubernetes.io~projected/kube-api-access-k5v9f:{mountpoint:/var/lib/kubelet/pods/02649264-040a-41a6-9a41-8bf6416c68ff/volumes/kubernetes.io~projected/kube-api-access-k5v9f major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/02649264-040a-41a6-9a41-8bf6416c68ff/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls:{mountpoint:/var/lib/kubelet/pods/02649264-040a-41a6-9a41-8bf6416c68ff/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls major:0 minor:605 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/05fd1378-3935-4caf-96c5-17cf7e29417f/volumes/kubernetes.io~projected/kube-api-access-8xxkr:{mountpoint:/var/lib/kubelet/pods/05fd1378-3935-4caf-96c5-17cf7e29417f/volumes/kubernetes.io~projected/kube-api-access-8xxkr major:0 minor:826 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/05fd1378-3935-4caf-96c5-17cf7e29417f/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/05fd1378-3935-4caf-96c5-17cf7e29417f/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert major:0 minor:812 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/067fdca7-c61d-470c-8421-73e0b62df3e4/volumes/kubernetes.io~projected/kube-api-access-tm7d5:{mountpoint:/var/lib/kubelet/pods/067fdca7-c61d-470c-8421-73e0b62df3e4/volumes/kubernetes.io~projected/kube-api-access-tm7d5 major:0 minor:787 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/067fdca7-c61d-470c-8421-73e0b62df3e4/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/067fdca7-c61d-470c-8421-73e0b62df3e4/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:781 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/067fdca7-c61d-470c-8421-73e0b62df3e4/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/067fdca7-c61d-470c-8421-73e0b62df3e4/volumes/kubernetes.io~secret/webhook-cert major:0 minor:782 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/07330030-487d-4fa6-b5c3-67607355bbba/volumes/kubernetes.io~projected/kube-api-access-bhcsd:{mountpoint:/var/lib/kubelet/pods/07330030-487d-4fa6-b5c3-67607355bbba/volumes/kubernetes.io~projected/kube-api-access-bhcsd major:0 minor:223 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/07330030-487d-4fa6-b5c3-67607355bbba/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/07330030-487d-4fa6-b5c3-67607355bbba/volumes/kubernetes.io~secret/srv-cert major:0 minor:599 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/07542516-49c8-4e20-9b97-798fbff850a5/volumes/kubernetes.io~projected/kube-api-access-z9xld:{mountpoint:/var/lib/kubelet/pods/07542516-49c8-4e20-9b97-798fbff850a5/volumes/kubernetes.io~projected/kube-api-access-z9xld major:0 minor:227 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/07542516-49c8-4e20-9b97-798fbff850a5/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/07542516-49c8-4e20-9b97-798fbff850a5/volumes/kubernetes.io~secret/serving-cert major:0 minor:209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/135ec6f3-fbc0-4840-a4b1-c1124c705161/volumes/kubernetes.io~projected/kube-api-access-wsprq:{mountpoint:/var/lib/kubelet/pods/135ec6f3-fbc0-4840-a4b1-c1124c705161/volumes/kubernetes.io~projected/kube-api-access-wsprq major:0 minor:385 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/135ec6f3-fbc0-4840-a4b1-c1124c705161/volumes/kubernetes.io~secret/signing-key:{mountpoint:/var/lib/kubelet/pods/135ec6f3-fbc0-4840-a4b1-c1124c705161/volumes/kubernetes.io~secret/signing-key major:0 minor:384 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/15ebfbd8-0782-431a-88a3-83af328498d2/volumes/kubernetes.io~projected/kube-api-access-mbbc5:{mountpoint:/var/lib/kubelet/pods/15ebfbd8-0782-431a-88a3-83af328498d2/volumes/kubernetes.io~projected/kube-api-access-mbbc5 major:0 minor:222 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/15ebfbd8-0782-431a-88a3-83af328498d2/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/15ebfbd8-0782-431a-88a3-83af328498d2/volumes/kubernetes.io~secret/serving-cert major:0 minor:214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/17d2bb40-74e2-4894-a884-7018952bdf71/volumes/kubernetes.io~projected/kube-api-access-lrm2z:{mountpoint:/var/lib/kubelet/pods/17d2bb40-74e2-4894-a884-7018952bdf71/volumes/kubernetes.io~projected/kube-api-access-lrm2z major:0 minor:837 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/17d2bb40-74e2-4894-a884-7018952bdf71/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/17d2bb40-74e2-4894-a884-7018952bdf71/volumes/kubernetes.io~secret/cert major:0 minor:810 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/17d2bb40-74e2-4894-a884-7018952bdf71/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls:{mountpoint:/var/lib/kubelet/pods/17d2bb40-74e2-4894-a884-7018952bdf71/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls major:0 minor:836 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/226cb3a1-984f-4410-96e6-c007131dc074/volumes/kubernetes.io~projected/kube-api-access-b9z6l:{mountpoint:/var/lib/kubelet/pods/226cb3a1-984f-4410-96e6-c007131dc074/volumes/kubernetes.io~projected/kube-api-access-b9z6l major:0 minor:217 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/226cb3a1-984f-4410-96e6-c007131dc074/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/226cb3a1-984f-4410-96e6-c007131dc074/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/25866ff2-ce38-4bc0-83d9-0d85b8c6b0ce/volumes/kubernetes.io~projected/kube-api-access-vcmzz:{mountpoint:/var/lib/kubelet/pods/25866ff2-ce38-4bc0-83d9-0d85b8c6b0ce/volumes/kubernetes.io~projected/kube-api-access-vcmzz major:0 minor:576 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2604b035-853c-42b7-a562-07d46178868a/volumes/kubernetes.io~projected/kube-api-access-clp9l:{mountpoint:/var/lib/kubelet/pods/2604b035-853c-42b7-a562-07d46178868a/volumes/kubernetes.io~projected/kube-api-access-clp9l major:0 minor:225 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2b71f537-1cc2-4645-8e50-23941635457c/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/2b71f537-1cc2-4645-8e50-23941635457c/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:268 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2b71f537-1cc2-4645-8e50-23941635457c/volumes/kubernetes.io~projected/kube-api-access-8vvf6:{mountpoint:/var/lib/kubelet/pods/2b71f537-1cc2-4645-8e50-23941635457c/volumes/kubernetes.io~projected/kube-api-access-8vvf6 major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2b71f537-1cc2-4645-8e50-23941635457c/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/2b71f537-1cc2-4645-8e50-23941635457c/volumes/kubernetes.io~secret/metrics-tls major:0 minor:436 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/31747c5d-7e29-4a74-b8d5-3d8efa5e900b/volumes/kubernetes.io~projected/kube-api-access-l2bmh:{mountpoint:/var/lib/kubelet/pods/31747c5d-7e29-4a74-b8d5-3d8efa5e900b/volumes/kubernetes.io~projected/kube-api-access-l2bmh major:0 minor:556 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/31747c5d-7e29-4a74-b8d5-3d8efa5e900b/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/31747c5d-7e29-4a74-b8d5-3d8efa5e900b/volumes/kubernetes.io~secret/metrics-tls major:0 minor:578 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/32050f14-1939-41bf-a824-22016b90c189/volumes/kubernetes.io~projected/kube-api-access-pbnbs:{mountpoint:/var/lib/kubelet/pods/32050f14-1939-41bf-a824-22016b90c189/volumes/kubernetes.io~projected/kube-api-access-pbnbs major:0 minor:403 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/32050f14-1939-41bf-a824-22016b90c189/volumes/kubernetes.io~secret/samples-operator-tls:{mountpoint:/var/lib/kubelet/pods/32050f14-1939-41bf-a824-22016b90c189/volumes/kubernetes.io~secret/samples-operator-tls major:0 minor:402 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/33beea0b-f77b-4388-a9c8-5710f084f961/volumes/kubernetes.io~projected/kube-api-access-clmjl:{mountpoint:/var/lib/kubelet/pods/33beea0b-f77b-4388-a9c8-5710f084f961/volumes/kubernetes.io~projected/kube-api-access-clmjl major:0 minor:1139 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/33beea0b-f77b-4388-a9c8-5710f084f961/volumes/kubernetes.io~secret/client-ca-bundle:{mountpoint:/var/lib/kubelet/pods/33beea0b-f77b-4388-a9c8-5710f084f961/volumes/kubernetes.io~secret/client-ca-bundle major:0 minor:1133 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/33beea0b-f77b-4388-a9c8-5710f084f961/volumes/kubernetes.io~secret/secret-metrics-client-certs:{mountpoint:/var/lib/kubelet/pods/33beea0b-f77b-4388-a9c8-5710f084f961/volumes/kubernetes.io~secret/secret-metrics-client-certs major:0 minor:1138 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/33beea0b-f77b-4388-a9c8-5710f084f961/volumes/kubernetes.io~secret/secret-metrics-server-tls:{mountpoint:/var/lib/kubelet/pods/33beea0b-f77b-4388-a9c8-5710f084f961/volumes/kubernetes.io~secret/secret-metrics-server-tls major:0 minor:1137 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/36bd483b-292e-4e82-99d6-daa612cd385a/volumes/kubernetes.io~projected/kube-api-access-fmcxd:{mountpoint:/var/lib/kubelet/pods/36bd483b-292e-4e82-99d6-daa612cd385a/volumes/kubernetes.io~projected/kube-api-access-fmcxd major:0 minor:464 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/36bd483b-292e-4e82-99d6-daa612cd385a/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/36bd483b-292e-4e82-99d6-daa612cd385a/volumes/kubernetes.io~secret/encryption-config major:0 minor:463 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/36bd483b-292e-4e82-99d6-daa612cd385a/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/36bd483b-292e-4e82-99d6-daa612cd385a/volumes/kubernetes.io~secret/etcd-client major:0 minor:462 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/36bd483b-292e-4e82-99d6-daa612cd385a/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/36bd483b-292e-4e82-99d6-daa612cd385a/volumes/kubernetes.io~secret/serving-cert major:0 minor:421 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/400a13b5-c489-4beb-af33-94e635b86148/volumes/kubernetes.io~projected/kube-api-access-vt627:{mountpoint:/var/lib/kubelet/pods/400a13b5-c489-4beb-af33-94e635b86148/volumes/kubernetes.io~projected/kube-api-access-vt627 major:0 minor:893 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/400a13b5-c489-4beb-af33-94e635b86148/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/400a13b5-c489-4beb-af33-94e635b86148/volumes/kubernetes.io~secret/machine-approver-tls major:0 minor:897 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/426efd5c-69e1-43e5-835a-6e1c4ef85720/volumes/kubernetes.io~projected/kube-api-access-8rjm8:{mountpoint:/var/lib/kubelet/pods/426efd5c-69e1-43e5-835a-6e1c4ef85720/volumes/kubernetes.io~projected/kube-api-access-8rjm8 major:0 minor:139 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/426efd5c-69e1-43e5-835a-6e1c4ef85720/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/426efd5c-69e1-43e5-835a-6e1c4ef85720/volumes/kubernetes.io~secret/webhook-cert major:0 minor:138 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c/volumes/kubernetes.io~projected/kube-api-access major:0 minor:265 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c/volumes/kubernetes.io~secret/serving-cert major:0 minor:233 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4c589179-0df4-4fe8-bfdd-965c3e7652c5/volumes/kubernetes.io~projected/kube-api-access-pbqfz:{mountpoint:/var/lib/kubelet/pods/4c589179-0df4-4fe8-bfdd-965c3e7652c5/volumes/kubernetes.io~projected/kube-api-access-pbqfz major:0 minor:772 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4ebc9ee1-3913-4112-bb3f-c79f2c08032b/volumes/kubernetes.io~projected/kube-api-access-7gg7v:{mountpoint:/var/lib/kubelet/pods/4ebc9ee1-3913-4112-bb3f-c79f2c08032b/volumes/kubernetes.io~projected/kube-api-access-7gg7v major:0 minor:1080 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4ebc9ee1-3913-4112-bb3f-c79f2c08032b/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/4ebc9ee1-3913-4112-bb3f-c79f2c08032b/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config major:0 minor:1079 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4ebc9ee1-3913-4112-bb3f-c79f2c08032b/volumes/kubernetes.io~secret/kube-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/4ebc9ee1-3913-4112-bb3f-c79f2c08032b/volumes/kubernetes.io~secret/kube-state-metrics-tls major:0 minor:1078 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/508cb83e-6f25-4235-8c56-b25b762ebcad/volumes/kubernetes.io~projected/kube-api-access-s4jzt:{mountpoint:/var/lib/kubelet/pods/508cb83e-6f25-4235-8c56-b25b762ebcad/volumes/kubernetes.io~projected/kube-api-access-s4jzt major:0 minor:819 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/508cb83e-6f25-4235-8c56-b25b762ebcad/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/508cb83e-6f25-4235-8c56-b25b762ebcad/volumes/kubernetes.io~secret/proxy-tls major:0 minor:813 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/52839a08-0871-44d3-9d22-b2f6b4383b99/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/52839a08-0871-44d3-9d22-b2f6b4383b99/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:530 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/52839a08-0871-44d3-9d22-b2f6b4383b99/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/52839a08-0871-44d3-9d22-b2f6b4383b99/volumes/kubernetes.io~empty-dir/tmp major:0 minor:529 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/52839a08-0871-44d3-9d22-b2f6b4383b99/volumes/kubernetes.io~projected/kube-api-access-hlt7h:{mountpoint:/var/lib/kubelet/pods/52839a08-0871-44d3-9d22-b2f6b4383b99/volumes/kubernetes.io~projected/kube-api-access-hlt7h major:0 minor:534 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/54184647-6e9a-43f7-90b1-5d8815f8b1ab/volumes/kubernetes.io~projected/kube-api-access-kzwrw:{mountpoint:/var/lib/kubelet/pods/54184647-6e9a-43f7-90b1-5d8815f8b1ab/volumes/kubernetes.io~projected/kube-api-access-kzwrw major:0 minor:221 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/54184647-6e9a-43f7-90b1-5d8815f8b1ab/volumes/kubernetes.io~secret/package-server-manager-serving-cert:{mountpoint:/var/lib/kubelet/pods/54184647-6e9a-43f7-90b1-5d8815f8b1ab/volumes/kubernetes.io~secret/package-server-manager-serving-cert major:0 minor:607 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/5471994f-769e-4124-b7d0-01f5358fc18f/volumes/kubernetes.io~projected/kube-api-access-f7rrv:{mountpoint:/var/lib/kubelet/pods/5471994f-769e-4124-b7d0-01f5358fc18f/volumes/kubernetes.io~projected/kube-api-access-f7rrv major:0 minor:238 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5471994f-769e-4124-b7d0-01f5358fc18f/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/5471994f-769e-4124-b7d0-01f5358fc18f/volumes/kubernetes.io~secret/etcd-client major:0 minor:231 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5471994f-769e-4124-b7d0-01f5358fc18f/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/5471994f-769e-4124-b7d0-01f5358fc18f/volumes/kubernetes.io~secret/serving-cert major:0 minor:236 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/567a9a33-1a82-4c48-b541-7e0eaae11f57/volumes/kubernetes.io~projected/kube-api-access-nzn6t:{mountpoint:/var/lib/kubelet/pods/567a9a33-1a82-4c48-b541-7e0eaae11f57/volumes/kubernetes.io~projected/kube-api-access-nzn6t major:0 minor:770 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5ad63582-bd60-41a1-9622-ee73ccf8a5e8/volumes/kubernetes.io~projected/kube-api-access-csxwl:{mountpoint:/var/lib/kubelet/pods/5ad63582-bd60-41a1-9622-ee73ccf8a5e8/volumes/kubernetes.io~projected/kube-api-access-csxwl major:0 minor:317 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/617f0f9c-50d5-4214-b30f-5110fd4399ec/volumes/kubernetes.io~projected/kube-api-access-f2r2r:{mountpoint:/var/lib/kubelet/pods/617f0f9c-50d5-4214-b30f-5110fd4399ec/volumes/kubernetes.io~projected/kube-api-access-f2r2r major:0 minor:252 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/67e68ff0-f54d-4973-bbe7-ed43ce542bc0/volumes/kubernetes.io~projected/kube-api-access-tpf99:{mountpoint:/var/lib/kubelet/pods/67e68ff0-f54d-4973-bbe7-ed43ce542bc0/volumes/kubernetes.io~projected/kube-api-access-tpf99 major:0 minor:820 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/67e68ff0-f54d-4973-bbe7-ed43ce542bc0/volumes/kubernetes.io~secret/machine-api-operator-tls:{mountpoint:/var/lib/kubelet/pods/67e68ff0-f54d-4973-bbe7-ed43ce542bc0/volumes/kubernetes.io~secret/machine-api-operator-tls major:0 minor:811 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d/volumes/kubernetes.io~projected/kube-api-access-qqhhz:{mountpoint:/var/lib/kubelet/pods/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d/volumes/kubernetes.io~projected/kube-api-access-qqhhz major:0 minor:427 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d/volumes/kubernetes.io~secret/encryption-config major:0 minor:410 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d/volumes/kubernetes.io~secret/etcd-client major:0 minor:426 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d/volumes/kubernetes.io~secret/serving-cert major:0 minor:409 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/70e54b24-bf9d-42a8-b012-c7b073c6f6a6/volumes/kubernetes.io~projected/kube-api-access-mfsvw:{mountpoint:/var/lib/kubelet/pods/70e54b24-bf9d-42a8-b012-c7b073c6f6a6/volumes/kubernetes.io~projected/kube-api-access-mfsvw major:0 minor:94 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7623a5c6-47a9-4b75-bda8-c0a2d7c67272/volumes/kubernetes.io~projected/kube-api-access-q78vj:{mountpoint:/var/lib/kubelet/pods/7623a5c6-47a9-4b75-bda8-c0a2d7c67272/volumes/kubernetes.io~projected/kube-api-access-q78vj major:0 minor:250 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/7623a5c6-47a9-4b75-bda8-c0a2d7c67272/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/7623a5c6-47a9-4b75-bda8-c0a2d7c67272/volumes/kubernetes.io~secret/serving-cert major:0 minor:234 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7667a111-e744-47b2-9603-3864347dc738/volumes/kubernetes.io~projected/kube-api-access-mp84p:{mountpoint:/var/lib/kubelet/pods/7667a111-e744-47b2-9603-3864347dc738/volumes/kubernetes.io~projected/kube-api-access-mp84p major:0 minor:1077 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7667a111-e744-47b2-9603-3864347dc738/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/7667a111-e744-47b2-9603-3864347dc738/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config major:0 minor:1075 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7667a111-e744-47b2-9603-3864347dc738/volumes/kubernetes.io~secret/node-exporter-tls:{mountpoint:/var/lib/kubelet/pods/7667a111-e744-47b2-9603-3864347dc738/volumes/kubernetes.io~secret/node-exporter-tls major:0 minor:1070 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/784599a3-a2ac-46ac-a4b7-9439704646cc/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/784599a3-a2ac-46ac-a4b7-9439704646cc/volumes/kubernetes.io~projected/kube-api-access major:0 minor:255 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/784599a3-a2ac-46ac-a4b7-9439704646cc/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/784599a3-a2ac-46ac-a4b7-9439704646cc/volumes/kubernetes.io~secret/serving-cert major:0 minor:229 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7f3afe47-c537-420c-b5be-1cad612e119d/volumes/kubernetes.io~projected/kube-api-access-8745n:{mountpoint:/var/lib/kubelet/pods/7f3afe47-c537-420c-b5be-1cad612e119d/volumes/kubernetes.io~projected/kube-api-access-8745n major:0 minor:763 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/7f3afe47-c537-420c-b5be-1cad612e119d/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/7f3afe47-c537-420c-b5be-1cad612e119d/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert major:0 minor:756 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/83368183-0368-44b1-9387-eed32b211988/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/83368183-0368-44b1-9387-eed32b211988/volumes/kubernetes.io~projected/kube-api-access major:0 minor:580 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/83368183-0368-44b1-9387-eed32b211988/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/83368183-0368-44b1-9387-eed32b211988/volumes/kubernetes.io~secret/serving-cert major:0 minor:579 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/855747e5-d9b4-4eef-8bc4-425d6a8e95c7/volumes/kubernetes.io~projected/kube-api-access-6j7lq:{mountpoint:/var/lib/kubelet/pods/855747e5-d9b4-4eef-8bc4-425d6a8e95c7/volumes/kubernetes.io~projected/kube-api-access-6j7lq major:0 minor:226 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/855747e5-d9b4-4eef-8bc4-425d6a8e95c7/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/855747e5-d9b4-4eef-8bc4-425d6a8e95c7/volumes/kubernetes.io~secret/metrics-tls major:0 minor:441 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8b96dd10-18a0-49f8-b488-63fc2b23da39/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/8b96dd10-18a0-49f8-b488-63fc2b23da39/volumes/kubernetes.io~projected/ca-certs major:0 minor:535 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8b96dd10-18a0-49f8-b488-63fc2b23da39/volumes/kubernetes.io~projected/kube-api-access-nhhdz:{mountpoint:/var/lib/kubelet/pods/8b96dd10-18a0-49f8-b488-63fc2b23da39/volumes/kubernetes.io~projected/kube-api-access-nhhdz major:0 minor:538 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/900228dd-2d21-4759-87da-b027b0134ad8/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/900228dd-2d21-4759-87da-b027b0134ad8/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:256 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/900228dd-2d21-4759-87da-b027b0134ad8/volumes/kubernetes.io~projected/kube-api-access-rvkp7:{mountpoint:/var/lib/kubelet/pods/900228dd-2d21-4759-87da-b027b0134ad8/volumes/kubernetes.io~projected/kube-api-access-rvkp7 major:0 minor:248 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/900228dd-2d21-4759-87da-b027b0134ad8/volumes/kubernetes.io~secret/image-registry-operator-tls:{mountpoint:/var/lib/kubelet/pods/900228dd-2d21-4759-87da-b027b0134ad8/volumes/kubernetes.io~secret/image-registry-operator-tls major:0 minor:438 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/90f0e4da-71d4-4c4e-a2fc-9ef588daaf72/volumes/kubernetes.io~projected/kube-api-access-2rfn6:{mountpoint:/var/lib/kubelet/pods/90f0e4da-71d4-4c4e-a2fc-9ef588daaf72/volumes/kubernetes.io~projected/kube-api-access-2rfn6 major:0 minor:992 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/90f0e4da-71d4-4c4e-a2fc-9ef588daaf72/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/90f0e4da-71d4-4c4e-a2fc-9ef588daaf72/volumes/kubernetes.io~secret/proxy-tls major:0 minor:988 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/90f16d8c-25b6-4827-85d9-0995e4e1ab15/volumes/kubernetes.io~secret/tls-certificates:{mountpoint:/var/lib/kubelet/pods/90f16d8c-25b6-4827-85d9-0995e4e1ab15/volumes/kubernetes.io~secret/tls-certificates major:0 minor:1010 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/96bd86df-2101-47f5-844b-1332261c66f1/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/96bd86df-2101-47f5-844b-1332261c66f1/volumes/kubernetes.io~projected/kube-api-access major:0 minor:251 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/96bd86df-2101-47f5-844b-1332261c66f1/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/96bd86df-2101-47f5-844b-1332261c66f1/volumes/kubernetes.io~secret/serving-cert major:0 minor:230 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/980191fe-c62c-4b9e-879c-38fa8ce0a58b/volumes/kubernetes.io~projected/kube-api-access-2wt5q:{mountpoint:/var/lib/kubelet/pods/980191fe-c62c-4b9e-879c-38fa8ce0a58b/volumes/kubernetes.io~projected/kube-api-access-2wt5q major:0 minor:264 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/980191fe-c62c-4b9e-879c-38fa8ce0a58b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/980191fe-c62c-4b9e-879c-38fa8ce0a58b/volumes/kubernetes.io~secret/serving-cert major:0 minor:228 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9/volumes/kubernetes.io~projected/kube-api-access-2lltk:{mountpoint:/var/lib/kubelet/pods/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9/volumes/kubernetes.io~projected/kube-api-access-2lltk major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:437 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9/volumes/kubernetes.io~secret/node-tuning-operator-tls:{mountpoint:/var/lib/kubelet/pods/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9/volumes/kubernetes.io~secret/node-tuning-operator-tls major:0 minor:440 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/98d99166-c42a-4169-87e8-4209570aec50/volumes/kubernetes.io~projected/kube-api-access-258hz:{mountpoint:/var/lib/kubelet/pods/98d99166-c42a-4169-87e8-4209570aec50/volumes/kubernetes.io~projected/kube-api-access-258hz major:0 minor:216 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/98d99166-c42a-4169-87e8-4209570aec50/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/98d99166-c42a-4169-87e8-4209570aec50/volumes/kubernetes.io~secret/srv-cert major:0 minor:606 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d/volumes/kubernetes.io~projected/kube-api-access-577p4:{mountpoint:/var/lib/kubelet/pods/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d/volumes/kubernetes.io~projected/kube-api-access-577p4 major:0 minor:257 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d/volumes/kubernetes.io~secret/serving-cert major:0 minor:232 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8/volumes/kubernetes.io~projected/kube-api-access-7bk7q:{mountpoint:/var/lib/kubelet/pods/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8/volumes/kubernetes.io~projected/kube-api-access-7bk7q major:0 minor:118 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a3828a1d-8180-4c7b-b423-4488f7fc0b76/volumes/kubernetes.io~projected/kube-api-access-lf28c:{mountpoint:/var/lib/kubelet/pods/a3828a1d-8180-4c7b-b423-4488f7fc0b76/volumes/kubernetes.io~projected/kube-api-access-lf28c major:0 minor:1012 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a3828a1d-8180-4c7b-b423-4488f7fc0b76/volumes/kubernetes.io~secret/default-certificate:{mountpoint:/var/lib/kubelet/pods/a3828a1d-8180-4c7b-b423-4488f7fc0b76/volumes/kubernetes.io~secret/default-certificate major:0 minor:1009 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a3828a1d-8180-4c7b-b423-4488f7fc0b76/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/a3828a1d-8180-4c7b-b423-4488f7fc0b76/volumes/kubernetes.io~secret/metrics-certs major:0 minor:1011 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/a3828a1d-8180-4c7b-b423-4488f7fc0b76/volumes/kubernetes.io~secret/stats-auth:{mountpoint:/var/lib/kubelet/pods/a3828a1d-8180-4c7b-b423-4488f7fc0b76/volumes/kubernetes.io~secret/stats-auth major:0 minor:1005 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a3bebf49-1d92-4353-b84c-91ed86b7bb94/volumes/kubernetes.io~projected/kube-api-access-2w68c:{mountpoint:/var/lib/kubelet/pods/a3bebf49-1d92-4353-b84c-91ed86b7bb94/volumes/kubernetes.io~projected/kube-api-access-2w68c major:0 minor:218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a3bebf49-1d92-4353-b84c-91ed86b7bb94/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a3bebf49-1d92-4353-b84c-91ed86b7bb94/volumes/kubernetes.io~secret/serving-cert major:0 minor:213 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a539e1c7-3799-4d43-8f2f-d5e5c0ffd918/volumes/kubernetes.io~projected/kube-api-access-xth7s:{mountpoint:/var/lib/kubelet/pods/a539e1c7-3799-4d43-8f2f-d5e5c0ffd918/volumes/kubernetes.io~projected/kube-api-access-xth7s major:0 minor:319 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a539e1c7-3799-4d43-8f2f-d5e5c0ffd918/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/a539e1c7-3799-4d43-8f2f-d5e5c0ffd918/volumes/kubernetes.io~secret/cert major:0 minor:318 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc/volumes/kubernetes.io~projected/kube-api-access-n555w:{mountpoint:/var/lib/kubelet/pods/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc/volumes/kubernetes.io~projected/kube-api-access-n555w major:0 minor:768 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc/volumes/kubernetes.io~secret/serving-cert major:0 minor:762 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/a5d6705e-e564-4774-94b4-ef11956c67b2/volumes/kubernetes.io~projected/kube-api-access-dkvxh:{mountpoint:/var/lib/kubelet/pods/a5d6705e-e564-4774-94b4-ef11956c67b2/volumes/kubernetes.io~projected/kube-api-access-dkvxh major:0 minor:1045 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a5d6705e-e564-4774-94b4-ef11956c67b2/volumes/kubernetes.io~secret/certs:{mountpoint:/var/lib/kubelet/pods/a5d6705e-e564-4774-94b4-ef11956c67b2/volumes/kubernetes.io~secret/certs major:0 minor:1037 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a5d6705e-e564-4774-94b4-ef11956c67b2/volumes/kubernetes.io~secret/node-bootstrap-token:{mountpoint:/var/lib/kubelet/pods/a5d6705e-e564-4774-94b4-ef11956c67b2/volumes/kubernetes.io~secret/node-bootstrap-token major:0 minor:1036 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b50a6106-1112-4a4b-b4ae-933879e12110/volumes/kubernetes.io~projected/kube-api-access-bcjsq:{mountpoint:/var/lib/kubelet/pods/b50a6106-1112-4a4b-b4ae-933879e12110/volumes/kubernetes.io~projected/kube-api-access-bcjsq major:0 minor:327 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b50a6106-1112-4a4b-b4ae-933879e12110/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/b50a6106-1112-4a4b-b4ae-933879e12110/volumes/kubernetes.io~secret/serving-cert major:0 minor:69 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b71376ea-e248-48fc-b2c4-1de7236ddd31/volumes/kubernetes.io~projected/kube-api-access-nlrzs:{mountpoint:/var/lib/kubelet/pods/b71376ea-e248-48fc-b2c4-1de7236ddd31/volumes/kubernetes.io~projected/kube-api-access-nlrzs major:0 minor:838 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b71376ea-e248-48fc-b2c4-1de7236ddd31/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/b71376ea-e248-48fc-b2c4-1de7236ddd31/volumes/kubernetes.io~secret/cert major:0 minor:799 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b7229c42-b6bc-4ea9-946c-71a4117f53e9/volumes/kubernetes.io~projected/kube-api-access-xx5m2:{mountpoint:/var/lib/kubelet/pods/b7229c42-b6bc-4ea9-946c-71a4117f53e9/volumes/kubernetes.io~projected/kube-api-access-xx5m2 major:0 minor:494 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b8aa8296-ed9b-4b37-8ab4-791b1342140f/volumes/kubernetes.io~projected/kube-api-access-nbcts:{mountpoint:/var/lib/kubelet/pods/b8aa8296-ed9b-4b37-8ab4-791b1342140f/volumes/kubernetes.io~projected/kube-api-access-nbcts major:0 minor:1195 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b8aa8296-ed9b-4b37-8ab4-791b1342140f/volumes/kubernetes.io~secret/webhook-certs:{mountpoint:/var/lib/kubelet/pods/b8aa8296-ed9b-4b37-8ab4-791b1342140f/volumes/kubernetes.io~secret/webhook-certs major:0 minor:1182 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c3daeefa-7842-464c-a6c9-01b44ebea477/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/c3daeefa-7842-464c-a6c9-01b44ebea477/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c3daeefa-7842-464c-a6c9-01b44ebea477/volumes/kubernetes.io~projected/kube-api-access-jrk7w:{mountpoint:/var/lib/kubelet/pods/c3daeefa-7842-464c-a6c9-01b44ebea477/volumes/kubernetes.io~projected/kube-api-access-jrk7w major:0 minor:127 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c3daeefa-7842-464c-a6c9-01b44ebea477/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/c3daeefa-7842-464c-a6c9-01b44ebea477/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:126 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c8660437-633f-4132-8a61-fe998abb493e/volumes/kubernetes.io~projected/kube-api-access-zlch7:{mountpoint:/var/lib/kubelet/pods/c8660437-633f-4132-8a61-fe998abb493e/volumes/kubernetes.io~projected/kube-api-access-zlch7 major:0 minor:123 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/c8660437-633f-4132-8a61-fe998abb493e/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/c8660437-633f-4132-8a61-fe998abb493e/volumes/kubernetes.io~secret/metrics-certs major:0 minor:608 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cc7b96ab-01af-442a-8eda-fc59e665a367/volumes/kubernetes.io~projected/kube-api-access-vwqbt:{mountpoint:/var/lib/kubelet/pods/cc7b96ab-01af-442a-8eda-fc59e665a367/volumes/kubernetes.io~projected/kube-api-access-vwqbt major:0 minor:1013 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cf33c432-db42-4c6d-8ee4-f089e5bf8203/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/cf33c432-db42-4c6d-8ee4-f089e5bf8203/volumes/kubernetes.io~projected/ca-certs major:0 minor:531 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cf33c432-db42-4c6d-8ee4-f089e5bf8203/volumes/kubernetes.io~projected/kube-api-access-x8hp5:{mountpoint:/var/lib/kubelet/pods/cf33c432-db42-4c6d-8ee4-f089e5bf8203/volumes/kubernetes.io~projected/kube-api-access-x8hp5 major:0 minor:533 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cf33c432-db42-4c6d-8ee4-f089e5bf8203/volumes/kubernetes.io~secret/catalogserver-certs:{mountpoint:/var/lib/kubelet/pods/cf33c432-db42-4c6d-8ee4-f089e5bf8203/volumes/kubernetes.io~secret/catalogserver-certs major:0 minor:532 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d4a162d4-8086-4bcf-854d-7e6cd37fd4c7/volumes/kubernetes.io~projected/kube-api-access-mfspc:{mountpoint:/var/lib/kubelet/pods/d4a162d4-8086-4bcf-854d-7e6cd37fd4c7/volumes/kubernetes.io~projected/kube-api-access-mfspc major:0 minor:377 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d6eace9f-a52d-4570-a932-959538e1f2bc/volumes/kubernetes.io~projected/kube-api-access-8l8qp:{mountpoint:/var/lib/kubelet/pods/d6eace9f-a52d-4570-a932-959538e1f2bc/volumes/kubernetes.io~projected/kube-api-access-8l8qp major:0 minor:779 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d850d441-7505-4e81-b4cf-6e7a9911ae35/volumes/kubernetes.io~projected/kube-api-access-f2mk7:{mountpoint:/var/lib/kubelet/pods/d850d441-7505-4e81-b4cf-6e7a9911ae35/volumes/kubernetes.io~projected/kube-api-access-f2mk7 major:0 minor:326 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d850d441-7505-4e81-b4cf-6e7a9911ae35/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/d850d441-7505-4e81-b4cf-6e7a9911ae35/volumes/kubernetes.io~secret/serving-cert major:0 minor:83 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d862a346-ec4d-46f6-a3e2-ea8759ea0111/volumes/kubernetes.io~projected/kube-api-access-jx64q:{mountpoint:/var/lib/kubelet/pods/d862a346-ec4d-46f6-a3e2-ea8759ea0111/volumes/kubernetes.io~projected/kube-api-access-jx64q major:0 minor:125 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d862a346-ec4d-46f6-a3e2-ea8759ea0111/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/d862a346-ec4d-46f6-a3e2-ea8759ea0111/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:124 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d9152bd6-f203-469b-97fa-db274e43b40c/volumes/kubernetes.io~projected/kube-api-access-q9txs:{mountpoint:/var/lib/kubelet/pods/d9152bd6-f203-469b-97fa-db274e43b40c/volumes/kubernetes.io~projected/kube-api-access-q9txs major:0 minor:913 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d9152bd6-f203-469b-97fa-db274e43b40c/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/d9152bd6-f203-469b-97fa-db274e43b40c/volumes/kubernetes.io~secret/proxy-tls major:0 minor:909 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/da40e787-dd75-4f4f-b09e-a8dab590f260/volumes/kubernetes.io~projected/kube-api-access-xg2ph:{mountpoint:/var/lib/kubelet/pods/da40e787-dd75-4f4f-b09e-a8dab590f260/volumes/kubernetes.io~projected/kube-api-access-xg2ph major:0 minor:368 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/e03d34d0-f7c1-4dcf-8b84-89ad647cc10f/volumes/kubernetes.io~projected/kube-api-access-8ddw4:{mountpoint:/var/lib/kubelet/pods/e03d34d0-f7c1-4dcf-8b84-89ad647cc10f/volumes/kubernetes.io~projected/kube-api-access-8ddw4 major:0 minor:791 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e03d34d0-f7c1-4dcf-8b84-89ad647cc10f/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls:{mountpoint:/var/lib/kubelet/pods/e03d34d0-f7c1-4dcf-8b84-89ad647cc10f/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls major:0 minor:789 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e624e623-6d59-444d-b548-165fa5fd2581/volumes/kubernetes.io~projected/kube-api-access-c5c6t:{mountpoint:/var/lib/kubelet/pods/e624e623-6d59-444d-b548-165fa5fd2581/volumes/kubernetes.io~projected/kube-api-access-c5c6t major:0 minor:224 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e624e623-6d59-444d-b548-165fa5fd2581/volumes/kubernetes.io~secret/marketplace-operator-metrics:{mountpoint:/var/lib/kubelet/pods/e624e623-6d59-444d-b548-165fa5fd2581/volumes/kubernetes.io~secret/marketplace-operator-metrics major:0 minor:609 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99/volumes/kubernetes.io~projected/kube-api-access-4l2sm:{mountpoint:/var/lib/kubelet/pods/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99/volumes/kubernetes.io~projected/kube-api-access-4l2sm major:0 minor:1056 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config major:0 minor:1052 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99/volumes/kubernetes.io~secret/prometheus-operator-tls:{mountpoint:/var/lib/kubelet/pods/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99/volumes/kubernetes.io~secret/prometheus-operator-tls major:0 minor:1057 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ed1c4da2-564b-4354-a4ec-27b801098aa5/volumes/kubernetes.io~projected/kube-api-access-2hvwg:{mountpoint:/var/lib/kubelet/pods/ed1c4da2-564b-4354-a4ec-27b801098aa5/volumes/kubernetes.io~projected/kube-api-access-2hvwg major:0 minor:1076 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ed1c4da2-564b-4354-a4ec-27b801098aa5/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/ed1c4da2-564b-4354-a4ec-27b801098aa5/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config major:0 minor:1074 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ed1c4da2-564b-4354-a4ec-27b801098aa5/volumes/kubernetes.io~secret/openshift-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/ed1c4da2-564b-4354-a4ec-27b801098aa5/volumes/kubernetes.io~secret/openshift-state-metrics-tls major:0 minor:1087 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f8467055-c9c9-4485-bb60-9a79e8b91268/volumes/kubernetes.io~projected/kube-api-access-gp4mt:{mountpoint:/var/lib/kubelet/pods/f8467055-c9c9-4485-bb60-9a79e8b91268/volumes/kubernetes.io~projected/kube-api-access-gp4mt major:0 minor:767 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f8467055-c9c9-4485-bb60-9a79e8b91268/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/f8467055-c9c9-4485-bb60-9a79e8b91268/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:480 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6/volumes/kubernetes.io~projected/kube-api-access-2kng9:{mountpoint:/var/lib/kubelet/pods/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6/volumes/kubernetes.io~projected/kube-api-access-2kng9 major:0 minor:98 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} overlay_0-1000:{mountpoint:/var/lib/containers/storage/overlay/6c95dee65e2980dbf11e9d347a2fed99a2387e77bc3c0346e7aed342d027b884/merged major:0 minor:1000 fsType:overlay blockSize:0} overlay_0-1002:{mountpoint:/var/lib/containers/storage/overlay/3c5e65f132f973adb26144c858c2cc2e0295d9c60ff130118c58a7e75f6214e1/merged major:0 minor:1002 fsType:overlay blockSize:0} overlay_0-1016:{mountpoint:/var/lib/containers/storage/overlay/cde19b4c4b5368887705aa0f78b833fe6ce0c6e8e695713372f9298152beba7e/merged major:0 minor:1016 fsType:overlay blockSize:0} overlay_0-102:{mountpoint:/var/lib/containers/storage/overlay/6a2f1369b57181f1cbf9998644dd74724c5b6a1130252684b5a482090c9ed593/merged major:0 minor:102 fsType:overlay blockSize:0} overlay_0-1022:{mountpoint:/var/lib/containers/storage/overlay/d2386a820f82193ebd18b136366fcbfa915dff31b22b46eb62cc0214a63da62d/merged major:0 minor:1022 fsType:overlay blockSize:0} overlay_0-1024:{mountpoint:/var/lib/containers/storage/overlay/e2b87537225ddcae94103118b41cf687bcf6e91e7fb73617c4e19e49a9c7f471/merged major:0 minor:1024 fsType:overlay blockSize:0} overlay_0-1027:{mountpoint:/var/lib/containers/storage/overlay/f4486c2fa6f9dfc7fbe6d213644801eb0622fe482928ee59385921aad2b8f8e7/merged major:0 minor:1027 fsType:overlay blockSize:0} overlay_0-1033:{mountpoint:/var/lib/containers/storage/overlay/748e9f32b900dbe4656af4758f1dcfd20235e8c319e5b47751b3151a556cffe3/merged major:0 minor:1033 fsType:overlay blockSize:0} 
overlay_0-1034:{mountpoint:/var/lib/containers/storage/overlay/dfa7244ec22da629ce205168bb490cde41d28234a3dc40e0d25559244baeb025/merged major:0 minor:1034 fsType:overlay blockSize:0} overlay_0-1040:{mountpoint:/var/lib/containers/storage/overlay/d07b96a8708cf17e58960ca5e27c13078d57679430dd5c4bf4c8a3bb9787ada9/merged major:0 minor:1040 fsType:overlay blockSize:0} overlay_0-1048:{mountpoint:/var/lib/containers/storage/overlay/923c79d8877cc85478af41efc7d019f95f0298a2e8c83ce555ee2bd9ab803bbc/merged major:0 minor:1048 fsType:overlay blockSize:0} overlay_0-1050:{mountpoint:/var/lib/containers/storage/overlay/33fc7b1b7485a06449d48257f44e445798d5bac13f19c5829fd44320472957e1/merged major:0 minor:1050 fsType:overlay blockSize:0} overlay_0-1060:{mountpoint:/var/lib/containers/storage/overlay/bc7ba88b0585a872f205785eabf16d58934bd8d8677a5636bff5b8d6872156d5/merged major:0 minor:1060 fsType:overlay blockSize:0} overlay_0-1062:{mountpoint:/var/lib/containers/storage/overlay/bfd94f590b140964ef64d6495badc66c8b616c75b5e65fc441992bb9d656ea99/merged major:0 minor:1062 fsType:overlay blockSize:0} overlay_0-1064:{mountpoint:/var/lib/containers/storage/overlay/675150ac6ff0a94ecab8c8d2f4c839ae15dbc9730ab18d1864f550d4c7fecf2c/merged major:0 minor:1064 fsType:overlay blockSize:0} overlay_0-1083:{mountpoint:/var/lib/containers/storage/overlay/7c4d696be5f1a09b8f055f7094105702dd1b871cf565a29f63f9735ac26d045a/merged major:0 minor:1083 fsType:overlay blockSize:0} overlay_0-1088:{mountpoint:/var/lib/containers/storage/overlay/711954e0d927e7b2045e4c555d537e9f6c56a060f8783b212f418903e85782d8/merged major:0 minor:1088 fsType:overlay blockSize:0} overlay_0-1092:{mountpoint:/var/lib/containers/storage/overlay/60808aeb1eb6903f73add2c2d8b651dea2a098333154ff85a9e23784fa2d83a3/merged major:0 minor:1092 fsType:overlay blockSize:0} overlay_0-1094:{mountpoint:/var/lib/containers/storage/overlay/75ca329b9d4a45e27cb5f9c11afa095038956ef8bd53bf89d7fd1e9b06fdc63d/merged major:0 minor:1094 fsType:overlay 
blockSize:0} overlay_0-1096:{mountpoint:/var/lib/containers/storage/overlay/25c459b3cdcb0df86acb9e3221a20d97a90976427650a040ea7e9777d2b312f0/merged major:0 minor:1096 fsType:overlay blockSize:0} overlay_0-1098:{mountpoint:/var/lib/containers/storage/overlay/6ab77a291abe94f4eeff88b9e30d8dad38a82a4abdcc5be9fe6dc9bfaa1430cc/merged major:0 minor:1098 fsType:overlay blockSize:0} overlay_0-110:{mountpoint:/var/lib/containers/storage/overlay/b2ff3998b866109de7e3fc86acb1af07beb8e32c3630691045dfb6b10922cf4a/merged major:0 minor:110 fsType:overlay blockSize:0} overlay_0-1104:{mountpoint:/var/lib/containers/storage/overlay/20ee6b2208dc06900148033dd325cc3d5c56a7def65399c374faf70765d77fff/merged major:0 minor:1104 fsType:overlay blockSize:0} overlay_0-111:{mountpoint:/var/lib/containers/storage/overlay/572d00219edaa479f72db8ee6f0b959cc75860e2955b489b7d8a7c73e9cac35d/merged major:0 minor:111 fsType:overlay blockSize:0} overlay_0-1112:{mountpoint:/var/lib/containers/storage/overlay/23ce29633f110c66e6af8f9ede3dc99d53642bfd666ea4a67fa1c5e85a9d5e79/merged major:0 minor:1112 fsType:overlay blockSize:0} overlay_0-1114:{mountpoint:/var/lib/containers/storage/overlay/4b2f2aaf84e77b20037772428ba50701c0bb9912324048ff493e3176b481e1ce/merged major:0 minor:1114 fsType:overlay blockSize:0} overlay_0-112:{mountpoint:/var/lib/containers/storage/overlay/a05c5a47df8504f461281c307ed4875290357c6b8caffaef30e58887e1de0dde/merged major:0 minor:112 fsType:overlay blockSize:0} overlay_0-1123:{mountpoint:/var/lib/containers/storage/overlay/a76ee699783abe8a09a0edc3541e14b50d820df18a2aebc8a359f988f1bc626e/merged major:0 minor:1123 fsType:overlay blockSize:0} overlay_0-1128:{mountpoint:/var/lib/containers/storage/overlay/dc391e82b1862fa8db22314dc6ef011df7d77585d2c93666aacbb779a190a2b8/merged major:0 minor:1128 fsType:overlay blockSize:0} overlay_0-1142:{mountpoint:/var/lib/containers/storage/overlay/4def719ad56cf9798083e3807bb5b91e0e26766782cf9d192e7a26637f278dda/merged major:0 minor:1142 fsType:overlay 
blockSize:0} overlay_0-1144:{mountpoint:/var/lib/containers/storage/overlay/6e7ce295bdeb04e2a2a1117c15e623a901471552b60f8d5e330e058fa7ea6a67/merged major:0 minor:1144 fsType:overlay blockSize:0} overlay_0-1151:{mountpoint:/var/lib/containers/storage/overlay/ccb9f09f0af354e400b000734708596314540650c6d69dad62cd8da64da8f495/merged major:0 minor:1151 fsType:overlay blockSize:0} overlay_0-116:{mountpoint:/var/lib/containers/storage/overlay/d5ff82bdec5a2ca10fb511fcf89c36920bb8089767880c83ea6b47d7d28f39f5/merged major:0 minor:116 fsType:overlay blockSize:0} overlay_0-1173:{mountpoint:/var/lib/containers/storage/overlay/b2119770a35ea9c914e7482118d138157d83465854d8681a0bab5a17b0c73272/merged major:0 minor:1173 fsType:overlay blockSize:0} overlay_0-1178:{mountpoint:/var/lib/containers/storage/overlay/2c9630b475e8ffa462d63c1a4034e27f8cddb0bd629edcc761606f96b579e715/merged major:0 minor:1178 fsType:overlay blockSize:0} overlay_0-1198:{mountpoint:/var/lib/containers/storage/overlay/b5671dfa498a0a2ee47dfaf01efcef49e71c468b0920421de227cd4694fcdcd0/merged major:0 minor:1198 fsType:overlay blockSize:0} overlay_0-1200:{mountpoint:/var/lib/containers/storage/overlay/533c0bc882ca2209e1f61ae00ee4fdad1ecb5b815c960b6f6bce4fda5c505bd4/merged major:0 minor:1200 fsType:overlay blockSize:0} overlay_0-1202:{mountpoint:/var/lib/containers/storage/overlay/25a7a58cb362ac91797d070e80e6c10ccfe04549b93b708f56cab4c78cb0e824/merged major:0 minor:1202 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/ed7d513af8ecab5a616b65b487f51eeeabf9332d79742ccc06e55b557ad910e4/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-132:{mountpoint:/var/lib/containers/storage/overlay/16853c595027b8619ae53c140e3b9e784af26e21c2b9ca8fa290447d9e87a354/merged major:0 minor:132 fsType:overlay blockSize:0} overlay_0-134:{mountpoint:/var/lib/containers/storage/overlay/513e0f2371f040ee25685a410ca55c2a19aaf3bb420daafa8c17d089d34452ae/merged major:0 minor:134 fsType:overlay 
blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/462d83d2369d0fccb6af59deb1524cc92e7b50a03b195266273361d19ce1a85a/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-140:{mountpoint:/var/lib/containers/storage/overlay/a0cd6eee352320c76fc77cedb717fd1237e63101ef10ea1aed2d9715d1c2800b/merged major:0 minor:140 fsType:overlay blockSize:0} overlay_0-147:{mountpoint:/var/lib/containers/storage/overlay/eacbad8b93da7fa22082cca8eae055d06118e2a55514b1fffac2c38e0803f994/merged major:0 minor:147 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/a2b58b8f278f37f3fd08aba9023534896e1cf53939c895539d2f34c5c7bbbe99/merged major:0 minor:152 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/aa15e523c9f9a1f7f445f160ba12eb70bcd05869c2e460906353d4cba617595f/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-156:{mountpoint:/var/lib/containers/storage/overlay/026a285cb21045954bf281688c9f5aefb338e593107bdb48a0ff7d03b33bbcb6/merged major:0 minor:156 fsType:overlay blockSize:0} overlay_0-164:{mountpoint:/var/lib/containers/storage/overlay/db4d258f3de8c1387590563998ec0503049482de91579ce56a0c4b3d70aa78f7/merged major:0 minor:164 fsType:overlay blockSize:0} overlay_0-166:{mountpoint:/var/lib/containers/storage/overlay/51ff99408a3c4de10a60d75616037e37886e31be76e932af169978ecb59e3776/merged major:0 minor:166 fsType:overlay blockSize:0} overlay_0-170:{mountpoint:/var/lib/containers/storage/overlay/ff98f4bd9a7b8b5782fd12d152fb3c0158eac033679043242b0500ab55f82526/merged major:0 minor:170 fsType:overlay blockSize:0} overlay_0-171:{mountpoint:/var/lib/containers/storage/overlay/1a32d15c7c38ca9ad96681a53fa16e75fea1694a6fb549a5a97f7aa881285d9a/merged major:0 minor:171 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/f8b81d29e75bcebed9699b19533287efe21bf5d778092bbf8f9edd5f70961e86/merged major:0 minor:174 fsType:overlay blockSize:0} 
overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/28857a049862cef25c3e0859973b956b4cf7e285b027f7e200eb189c2cbfafc7/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/735df3671552a50941bbd4fbb2e15964f4ef625b0c5adcff60c5c3aa0b08703b/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/6e602963598cf529b6f6159d2bd89ab8036d6f5c529669517e74cae8b04de374/merged major:0 minor:189 fsType:overlay blockSize:0} overlay_0-194:{mountpoint:/var/lib/containers/storage/overlay/757eec120d44dcd5735e5df59dd36d1f6ea40dcc378cb507b8b38614a7dd1d6b/merged major:0 minor:194 fsType:overlay blockSize:0} overlay_0-195:{mountpoint:/var/lib/containers/storage/overlay/a89139161b9389850b29349524ce397c2ff057e71e5ca610a6995e559135bf92/merged major:0 minor:195 fsType:overlay blockSize:0} overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/f10f00811283dd5d5cdc1d96c72dfa042cd2f87bc8e322f518ade9fd0f8cd550/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-262:{mountpoint:/var/lib/containers/storage/overlay/6352b84e38d59386c15fa523f71d95f83a2c8ac87d20afca345c5db9ad9dac54/merged major:0 minor:262 fsType:overlay blockSize:0} overlay_0-269:{mountpoint:/var/lib/containers/storage/overlay/e28d754e1fdc37f55858ff407bfb1703651a3d88ed5342c724118968e7923961/merged major:0 minor:269 fsType:overlay blockSize:0} overlay_0-271:{mountpoint:/var/lib/containers/storage/overlay/43b322f171e7409d6f856ca488792929358e184e572b955367edbca7ccefca78/merged major:0 minor:271 fsType:overlay blockSize:0} overlay_0-273:{mountpoint:/var/lib/containers/storage/overlay/ae6a5430fef0ae036fab54e4b9777379346760e49756e2d30634a23d7b1dad5d/merged major:0 minor:273 fsType:overlay blockSize:0} overlay_0-275:{mountpoint:/var/lib/containers/storage/overlay/c3ccc73e2b4b1bcfba0f030f594bdfb9add625fd502ff554de1bd4055660b662/merged major:0 minor:275 fsType:overlay blockSize:0} 
overlay_0-277:{mountpoint:/var/lib/containers/storage/overlay/7a8ee2e82f3052ee05a40f58f96172cba12cae79bd4971a5558f1d75df2f2279/merged major:0 minor:277 fsType:overlay blockSize:0} overlay_0-280:{mountpoint:/var/lib/containers/storage/overlay/39ec76d835f155caae5658f5c33b0b7d480baac0c41d0daaa02db92c0928e59c/merged major:0 minor:280 fsType:overlay blockSize:0} overlay_0-282:{mountpoint:/var/lib/containers/storage/overlay/9cfc3b21a7831509e5d45199f9e6bfd07b79fdb39e7e71b560508dcbd1a86598/merged major:0 minor:282 fsType:overlay blockSize:0} overlay_0-291:{mountpoint:/var/lib/containers/storage/overlay/2daf220bfb239dfc9e7cd9fc71a1226ef6ba5e69b5e7eebf81b8c5553c13d73a/merged major:0 minor:291 fsType:overlay blockSize:0} overlay_0-293:{mountpoint:/var/lib/containers/storage/overlay/3257de3ede3f280cd8fdf666a96d70d0bea2dfeb4842d7f35f6ff25e207148ac/merged major:0 minor:293 fsType:overlay blockSize:0} overlay_0-295:{mountpoint:/var/lib/containers/storage/overlay/28432a0e3a66258472248f774d29198a74c1da28098d5f3cfb5154c4034352ab/merged major:0 minor:295 fsType:overlay blockSize:0} overlay_0-297:{mountpoint:/var/lib/containers/storage/overlay/7e976815ec2e6d0fb873abe1e8bec1b6264cc147af2eaaf4750fd4c69939225c/merged major:0 minor:297 fsType:overlay blockSize:0} overlay_0-299:{mountpoint:/var/lib/containers/storage/overlay/f59a114ba32226cc5d7967c739a6da99ac35d40070f3bab5d47a001676786341/merged major:0 minor:299 fsType:overlay blockSize:0} overlay_0-301:{mountpoint:/var/lib/containers/storage/overlay/207afecbf59754bae45fe95195fffe73ab3db4eafc28088883a778966974580d/merged major:0 minor:301 fsType:overlay blockSize:0} overlay_0-304:{mountpoint:/var/lib/containers/storage/overlay/fba972865f2d4901301ece028eb5c81c3a75ead4af5066e6cf3239cd593ff1c2/merged major:0 minor:304 fsType:overlay blockSize:0} overlay_0-307:{mountpoint:/var/lib/containers/storage/overlay/9c59c3ace7b5cbf6372587c09b6c012688d4d8105e6b23678637f919c1489f2e/merged major:0 minor:307 fsType:overlay blockSize:0} 
overlay_0-311:{mountpoint:/var/lib/containers/storage/overlay/e3274aeaa8728b31b5e9dde002e8e9f61d26a85e81270d0625b7cced98464142/merged major:0 minor:311 fsType:overlay blockSize:0} overlay_0-324:{mountpoint:/var/lib/containers/storage/overlay/bdc2fafa08450eaabd1986ad550fe936bedd3a0f5ae975278a274e404b30c66b/merged major:0 minor:324 fsType:overlay blockSize:0} overlay_0-336:{mountpoint:/var/lib/containers/storage/overlay/ffa8038e9092a26166e6cda083fd6a72b91df6c75d38792c96bd5e0b5e354c66/merged major:0 minor:336 fsType:overlay blockSize:0} overlay_0-344:{mountpoint:/var/lib/containers/storage/overlay/5f19b8fdb6e5d95be3c7a749a7802592f190e6d6f25913a1bb0995915044f26d/merged major:0 minor:344 fsType:overlay blockSize:0} overlay_0-349:{mountpoint:/var/lib/containers/storage/overlay/ac990dd8e3b2d1649ba21a8ec5c88a701ac8b49fd9ca339647913aae7f8d4130/merged major:0 minor:349 fsType:overlay blockSize:0} overlay_0-351:{mountpoint:/var/lib/containers/storage/overlay/7296296468fde04a9385bda606b4bd92d2d6f6c139f9f9f4f7f7b575d9bc0b4b/merged major:0 minor:351 fsType:overlay blockSize:0} overlay_0-357:{mountpoint:/var/lib/containers/storage/overlay/419dc3d671a1d3b5bf09b3e570857254172dbb98ae1df864ae9f492b443aa07c/merged major:0 minor:357 fsType:overlay blockSize:0} overlay_0-360:{mountpoint:/var/lib/containers/storage/overlay/c5d507b38dfd1a4017596a6d2d8001abfe7c9acbe9d5fcb4a07aecb0d0a0b7b4/merged major:0 minor:360 fsType:overlay blockSize:0} overlay_0-366:{mountpoint:/var/lib/containers/storage/overlay/b227cc6c26ef5600c483251e4ae89ac0fc50843f8e9288695e5166d35ef5d15f/merged major:0 minor:366 fsType:overlay blockSize:0} overlay_0-375:{mountpoint:/var/lib/containers/storage/overlay/9f28b7ea40762bd4e3eca11184d2cf6f6606be6c64a8013e7472edfcc505585a/merged major:0 minor:375 fsType:overlay blockSize:0} overlay_0-380:{mountpoint:/var/lib/containers/storage/overlay/14daa43b386e6d6e634c706acd564a799c1cba55013835a493d43d9ecb7fffb1/merged major:0 minor:380 fsType:overlay blockSize:0} 
overlay_0-387:{mountpoint:/var/lib/containers/storage/overlay/2a3050433ed9cd235eb817368cf3545aff45037b1733ccbafea0be88375ac82f/merged major:0 minor:387 fsType:overlay blockSize:0} overlay_0-389:{mountpoint:/var/lib/containers/storage/overlay/61f57811476d12949657e6c97b442c943a07e9ea6e96c2736f92119b8e017112/merged major:0 minor:389 fsType:overlay blockSize:0} overlay_0-391:{mountpoint:/var/lib/containers/storage/overlay/7a220a69ebf888f282adbe3174b0cc71e359a5d46e60293cbe506021c82dc5f5/merged major:0 minor:391 fsType:overlay blockSize:0} overlay_0-396:{mountpoint:/var/lib/containers/storage/overlay/e5d2ad0259a9552115386da54acb2eafdeda7a299ac7a9a5560bbdcb196c2217/merged major:0 minor:396 fsType:overlay blockSize:0} overlay_0-408:{mountpoint:/var/lib/containers/storage/overlay/ba26059d0b313e2fa65367deb622d187a9175d8b5c090c50bea84c3d722dd03e/merged major:0 minor:408 fsType:overlay blockSize:0} overlay_0-41:{mountpoint:/var/lib/containers/storage/overlay/b67212edf3c0adf03e96ccb7044f2a5c3bfa5853d207bd5666128b83c32b0c66/merged major:0 minor:41 fsType:overlay blockSize:0} overlay_0-413:{mountpoint:/var/lib/containers/storage/overlay/d4fdfc2967cab5e1e714c2aa2af26790b12fcdbc6d5f934511eac04d87da3ee0/merged major:0 minor:413 fsType:overlay blockSize:0} overlay_0-418:{mountpoint:/var/lib/containers/storage/overlay/f6837196364fa89591c7137eae9529b4367339d3323263690dca0da6efadb05b/merged major:0 minor:418 fsType:overlay blockSize:0} overlay_0-428:{mountpoint:/var/lib/containers/storage/overlay/354dd8920a964a6c3c98e593dda86053aa7948ffc39873d4d713ba5d7ffa15e4/merged major:0 minor:428 fsType:overlay blockSize:0} overlay_0-430:{mountpoint:/var/lib/containers/storage/overlay/68c6a0d7355b3db413d24fff95bbf3f79f95efb4a25c739373dfaec65e56ca13/merged major:0 minor:430 fsType:overlay blockSize:0} overlay_0-432:{mountpoint:/var/lib/containers/storage/overlay/7ca829d14c115b13a1be58cb7ad40101767f44874cd2edd49e587900d6e8afe1/merged major:0 minor:432 fsType:overlay blockSize:0} 
overlay_0-446:{mountpoint:/var/lib/containers/storage/overlay/4493d570e1c64766453de98d553deb81139b0011c9c402b54536ca0a4d92f57d/merged major:0 minor:446 fsType:overlay blockSize:0} overlay_0-448:{mountpoint:/var/lib/containers/storage/overlay/3bd604d2a6fb683042e00080aecd68bb1e3b5ba786b3b92e6fc520acc7445582/merged major:0 minor:448 fsType:overlay blockSize:0} overlay_0-450:{mountpoint:/var/lib/containers/storage/overlay/a1ee7afe4049033e78b71750fe8be984ff7f6b4c10501b120f20cf8d5bf91908/merged major:0 minor:450 fsType:overlay blockSize:0} overlay_0-452:{mountpoint:/var/lib/containers/storage/overlay/cbcb80298f81f5eb11241ef59a981f4adbe8801cd0081feebf77cd6a067531c6/merged major:0 minor:452 fsType:overlay blockSize:0} overlay_0-454:{mountpoint:/var/lib/containers/storage/overlay/20a72df134752f0f10c140faeef9e45e12b648eb1d898f8f5493d62c028a4df4/merged major:0 minor:454 fsType:overlay blockSize:0} overlay_0-455:{mountpoint:/var/lib/containers/storage/overlay/0721104ae3601574bffd79e588902cd1c16875f3563061e74c7994637a31dc98/merged major:0 minor:455 fsType:overlay blockSize:0} overlay_0-459:{mountpoint:/var/lib/containers/storage/overlay/69405a2774484abc86ab41a2621ebef3b71fd344b0cae22d5176db3aca884c94/merged major:0 minor:459 fsType:overlay blockSize:0} overlay_0-46:{mountpoint:/var/lib/containers/storage/overlay/8a363d1627009dc23796342eb7c8e939e0602f49f577e057d972128bc803d253/merged major:0 minor:46 fsType:overlay blockSize:0} overlay_0-466:{mountpoint:/var/lib/containers/storage/overlay/deb040557aed8940eeb791cc4b899a18c7506db84f5336b530c83462702ceec1/merged major:0 minor:466 fsType:overlay blockSize:0} overlay_0-467:{mountpoint:/var/lib/containers/storage/overlay/1a59546d1e67af914dee415c9f94796ee4f62f35bd3087ff6e2ca8b7bc672d2f/merged major:0 minor:467 fsType:overlay blockSize:0} overlay_0-47:{mountpoint:/var/lib/containers/storage/overlay/94427e08b6a5bdef537bb166a5c537789b8c5bd17762befa12876208e0f4ae20/merged major:0 minor:47 fsType:overlay blockSize:0} 
overlay_0-476:{mountpoint:/var/lib/containers/storage/overlay/9be95df25c9849775da07b27cb7599d07240b8c63409a0d973c25958349dd8e8/merged major:0 minor:476 fsType:overlay blockSize:0} overlay_0-477:{mountpoint:/var/lib/containers/storage/overlay/c20a5e6fc650e83c1bea95469e0d047c0a6cd3a0e5e8a01695147484fdda59d0/merged major:0 minor:477 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/c9281b01522b560e96e406cb52d146d0bef5a2d5afa272f1a0f2fc56cdbae18b/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-484:{mountpoint:/var/lib/containers/storage/overlay/cc55ba5e6063fdc3a140010a6dc4ed81ec59de248c0e9a6572a42941aca8d1fd/merged major:0 minor:484 fsType:overlay blockSize:0} overlay_0-489:{mountpoint:/var/lib/containers/storage/overlay/8d3f8f508ae158ef30b2a70e7f4d64ce4cc12932aa83619f3a72baa3464f1444/merged major:0 minor:489 fsType:overlay blockSize:0} overlay_0-490:{mountpoint:/var/lib/containers/storage/overlay/bee45995476a7b1c4bc46412aa1182f7f9a5b3a5c6e70dabdc3c18dc4403d99d/merged major:0 minor:490 fsType:overlay blockSize:0} overlay_0-496:{mountpoint:/var/lib/containers/storage/overlay/0af27a034a2fad6fafdd05382acb0d806a6d5bb3cf7153beff6ff3781581f9be/merged major:0 minor:496 fsType:overlay blockSize:0} overlay_0-50:{mountpoint:/var/lib/containers/storage/overlay/b8bd6fac1caec0165963c129823dca5b87298ee5eb8a83fe8218fa4af074d0fa/merged major:0 minor:50 fsType:overlay blockSize:0} overlay_0-501:{mountpoint:/var/lib/containers/storage/overlay/96348da81aecb74db9f1d236fd5fbd687750563f6fce89eec1ec9b96f9db6a6e/merged major:0 minor:501 fsType:overlay blockSize:0} overlay_0-502:{mountpoint:/var/lib/containers/storage/overlay/131495ee7f57c40762059100dbee98aa5ffd01ccc33643ff3e53ffc68660c779/merged major:0 minor:502 fsType:overlay blockSize:0} overlay_0-507:{mountpoint:/var/lib/containers/storage/overlay/038741817da251abe398feb01026e31d5538db4214c0cb5d49534f67d7d9cce3/merged major:0 minor:507 fsType:overlay blockSize:0} 
overlay_0-510:{mountpoint:/var/lib/containers/storage/overlay/a812937c000c0f1eafaf7311d52aa997dbdea38ea0f86d356e2cd96cd889560f/merged major:0 minor:510 fsType:overlay blockSize:0} overlay_0-515:{mountpoint:/var/lib/containers/storage/overlay/ac8d8dc51f330586dd287fc83ef5f9f374592f0855cf626cc85f75c20befa60f/merged major:0 minor:515 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/ab873e434c5dbc25e12923f6f0d779a921c61c242fcc7468098cd255fb1ddaf4/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-53:{mountpoint:/var/lib/containers/storage/overlay/2a98dedd433369d2e3b6a17f3eeaf4975cffdaa1decfeb9718726ee116f7b9f6/merged major:0 minor:53 fsType:overlay blockSize:0} overlay_0-536:{mountpoint:/var/lib/containers/storage/overlay/9a9d640c5ae5a89de7299d369655ac12cf19c8648f100f1e0f783f5b1f32e53d/merged major:0 minor:536 fsType:overlay blockSize:0} overlay_0-545:{mountpoint:/var/lib/containers/storage/overlay/3363229e109820be943a9e2aaa3b73375ee0f48ff0021e30fe96476f0d3a16f0/merged major:0 minor:545 fsType:overlay blockSize:0} overlay_0-549:{mountpoint:/var/lib/containers/storage/overlay/1026c858cdcc5f740f973d1a8a5d54423ebf13e28eb51d2f4d207a23510f4ab4/merged major:0 minor:549 fsType:overlay blockSize:0} overlay_0-554:{mountpoint:/var/lib/containers/storage/overlay/f9f0533d8c4ceae0c101f0bb847ae708c2c1fc2f558ec193901d1ebc4bea52ed/merged major:0 minor:554 fsType:overlay blockSize:0} overlay_0-557:{mountpoint:/var/lib/containers/storage/overlay/64e535ba8a942c3d51cf15197599211bb107a29f256e6333074198f6d0199b0f/merged major:0 minor:557 fsType:overlay blockSize:0} overlay_0-559:{mountpoint:/var/lib/containers/storage/overlay/97370701da6aad8b18dc8d6fea6fb9431e7b21cef9a2a45e6a927684713a88d9/merged major:0 minor:559 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/319dfbfaeab2ace5640de4d398909ea9f70264a7892ea9f261407c55527872ab/merged major:0 minor:56 fsType:overlay blockSize:0} 
overlay_0-583:{mountpoint:/var/lib/containers/storage/overlay/e4b6d1841027af6a2cea53c8d2d0555a923103de028d437d6942fff08a6d7cb1/merged major:0 minor:583 fsType:overlay blockSize:0} overlay_0-588:{mountpoint:/var/lib/containers/storage/overlay/d8869e08e290351bb9a728e83de0102809e165082ee0ef339c6fbef8b246ac03/merged major:0 minor:588 fsType:overlay blockSize:0} overlay_0-590:{mountpoint:/var/lib/containers/storage/overlay/8af1e383600f909740d7b3137710fec93f88445ae854362f16e5e2a13174db4f/merged major:0 minor:590 fsType:overlay blockSize:0} overlay_0-591:{mountpoint:/var/lib/containers/storage/overlay/5f4dfe79dd38955c8f9789106129f14fe7c5ec6dbf668590d5b46bbbe742fe95/merged major:0 minor:591 fsType:overlay blockSize:0} overlay_0-603:{mountpoint:/var/lib/containers/storage/overlay/1870d0237df1f08e92db0270135a79888ec3dc901232ac6c49e0c8fd081568ca/merged major:0 minor:603 fsType:overlay blockSize:0} overlay_0-610:{mountpoint:/var/lib/containers/storage/overlay/a9930f0d9dab73edfe2b3cf27d79342478994e379e3909a355fc52c9710bffba/merged major:0 minor:610 fsType:overlay blockSize:0} overlay_0-611:{mountpoint:/var/lib/containers/storage/overlay/ff0b0f6f8f9ff2c13fd828ad0fb0db48e0e2c0befaee1c1d93e7bc0512498ba6/merged major:0 minor:611 fsType:overlay blockSize:0} overlay_0-637:{mountpoint:/var/lib/containers/storage/overlay/edec43e4c520c207ee30f6fe29bb62febeeeeff949669e2cdd4823d9f0a3f1d3/merged major:0 minor:637 fsType:overlay blockSize:0} overlay_0-639:{mountpoint:/var/lib/containers/storage/overlay/6d859937f958fb3f172439dc2c9063aeb072d56381755da0702c88d9f23aa551/merged major:0 minor:639 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/79d63d5f518ec31fad264385e4bafc6772c9f2ce1012fa85144281ed08791507/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-644:{mountpoint:/var/lib/containers/storage/overlay/31bd55bcf194a1c2cbe104a139f629c41199a267c8eca116ab0930c3b494634a/merged major:0 minor:644 fsType:overlay blockSize:0} 
overlay_0-652:{mountpoint:/var/lib/containers/storage/overlay/eb393963a5e6a841e3ebb6993770776b2db88403ca5461cb7a27783340425bee/merged major:0 minor:652 fsType:overlay blockSize:0} overlay_0-654:{mountpoint:/var/lib/containers/storage/overlay/3dba3d12700271a25ef8aa5d6a6ba4d853d72682fb5279b42cf73856ff09df29/merged major:0 minor:654 fsType:overlay blockSize:0} overlay_0-655:{mountpoint:/var/lib/containers/storage/overlay/3863315271de18c46a89fb73a3a1f1a53ba806b421f64ad8a0168a74db3c94f1/merged major:0 minor:655 fsType:overlay blockSize:0} overlay_0-658:{mountpoint:/var/lib/containers/storage/overlay/8a735094af92bace242461919ee06c71b98f42e23e3c2d5e53a74cfc83675226/merged major:0 minor:658 fsType:overlay blockSize:0} overlay_0-661:{mountpoint:/var/lib/containers/storage/overlay/926ace7a3b6a577a7d3cf83ce8c9001ce0dcf041090c6782c28046510f52c5f1/merged major:0 minor:661 fsType:overlay blockSize:0} overlay_0-662:{mountpoint:/var/lib/containers/storage/overlay/cf29da99b1ed682a6352489182b0409e8c8507a78043895e9b99530a305d94b5/merged major:0 minor:662 fsType:overlay blockSize:0} overlay_0-666:{mountpoint:/var/lib/containers/storage/overlay/473d51963236d3910796ece47f740d13f9704a9ba81a569e9013777670dcb9aa/merged major:0 minor:666 fsType:overlay blockSize:0} overlay_0-67:{mountpoint:/var/lib/containers/storage/overlay/41dec2978631776e4c566462fa06e4e8cf721c8efa64839353399f84f1de37ae/merged major:0 minor:67 fsType:overlay blockSize:0} overlay_0-672:{mountpoint:/var/lib/containers/storage/overlay/5357e1f06d5842d41061691d8b21eb83fd3e5ee5934c4a96c5c4c684590a9200/merged major:0 minor:672 fsType:overlay blockSize:0} overlay_0-675:{mountpoint:/var/lib/containers/storage/overlay/b4bfce77bce77a77df0e6bc82754b823171f3da117f8447044957ea59c0cfc82/merged major:0 minor:675 fsType:overlay blockSize:0} overlay_0-676:{mountpoint:/var/lib/containers/storage/overlay/7050653010dd122f187fefa662f3b0ab6e2e5c3885aedc41bf4343863348a1e5/merged major:0 minor:676 fsType:overlay blockSize:0} 
overlay_0-678:{mountpoint:/var/lib/containers/storage/overlay/a00192d1bf5fbaa7d9d6689e85d2c7f3cf5800380a62df00c84a29e4f99151cc/merged major:0 minor:678 fsType:overlay blockSize:0} overlay_0-70:{mountpoint:/var/lib/containers/storage/overlay/59707d3dd9b53241312cea7649467570ba8bb6be56668c1910994d9eb0e018a7/merged major:0 minor:70 fsType:overlay blockSize:0} overlay_0-706:{mountpoint:/var/lib/containers/storage/overlay/5c2b5cf1c31c8baf50e5074565c84c644d53375e8aa2658a924a9b05a37eb03d/merged major:0 minor:706 fsType:overlay blockSize:0} overlay_0-714:{mountpoint:/var/lib/containers/storage/overlay/3857c2cc162b237b9eb3a26728ead1cf91fd6a1cbf820f5aa5c1468e289fb212/merged major:0 minor:714 fsType:overlay blockSize:0} overlay_0-719:{mountpoint:/var/lib/containers/storage/overlay/817023f18fef8d42e445c72119e0f515ec9b9fe0ac3b60dd938d3435d774c68e/merged major:0 minor:719 fsType:overlay blockSize:0} overlay_0-722:{mountpoint:/var/lib/containers/storage/overlay/f2ae6bf168bd46deb537d2cceb49bae6b54ebec6e36553775a428aa6bd9251ed/merged major:0 minor:722 fsType:overlay blockSize:0} overlay_0-724:{mountpoint:/var/lib/containers/storage/overlay/a9096f57a09a8a5680359e923c20390beeecb31211eacd10279172de4447c801/merged major:0 minor:724 fsType:overlay blockSize:0} overlay_0-726:{mountpoint:/var/lib/containers/storage/overlay/a45c2fb826b04d68704cade32c48d5cd8cd3846c1f88b047da93e4dcc3fc318c/merged major:0 minor:726 fsType:overlay blockSize:0} overlay_0-734:{mountpoint:/var/lib/containers/storage/overlay/7e572b786afe60f139cdd7f50a191f4e4ad4e50fcf148bc967f1bbbb227d6f34/merged major:0 minor:734 fsType:overlay blockSize:0} overlay_0-737:{mountpoint:/var/lib/containers/storage/overlay/4932f3d93119dad98c409cf1f5d4237b0e630a32ef1e336acfd9603b486fd1a3/merged major:0 minor:737 fsType:overlay blockSize:0} overlay_0-74:{mountpoint:/var/lib/containers/storage/overlay/6938db080704221d98f41d1f91bdfc9e043b615803efa3fc9837a01660668743/merged major:0 minor:74 fsType:overlay blockSize:0} 
overlay_0-764:{mountpoint:/var/lib/containers/storage/overlay/c26d89cdb585c8a0b842976019c9d4ec6c8db4397dfada556795741fea78e69f/merged major:0 minor:764 fsType:overlay blockSize:0} overlay_0-775:{mountpoint:/var/lib/containers/storage/overlay/189bb88259c7983105d3d79e580d04c2d0143af04f82f9df2d6699ac1dc934ac/merged major:0 minor:775 fsType:overlay blockSize:0} overlay_0-780:{mountpoint:/var/lib/containers/storage/overlay/5aebd4575c64a2571fa05d6179dab975050562018d768aff0c07ce756c25e030/merged major:0 minor:780 fsType:overlay blockSize:0} overlay_0-784:{mountpoint:/var/lib/containers/storage/overlay/cf6e6fd452c77f268f9888a9142bc9f118fee343373ac485a01d95705432021d/merged major:0 minor:784 fsType:overlay blockSize:0} overlay_0-786:{mountpoint:/var/lib/containers/storage/overlay/d78754b2b7eb8e6ae436370f88e828fb42a3e24262ea62606622fa209a54a998/merged major:0 minor:786 fsType:overlay blockSize:0} overlay_0-804:{mountpoint:/var/lib/containers/storage/overlay/0f4a361a1d68b3081cde1a345c4802150d64f7b8c73792f5a1d1732565f89587/merged major:0 minor:804 fsType:overlay blockSize:0} overlay_0-816:{mountpoint:/var/lib/containers/storage/overlay/ead5c4d3b77a14bb181baafd03d9802e0315767a7dd120d772b61eefd3dbcb71/merged major:0 minor:816 fsType:overlay blockSize:0} overlay_0-827:{mountpoint:/var/lib/containers/storage/overlay/d84bd878decc2e81925d425413d8bf758bc78146be55d29371e0b1b55f0bd71e/merged major:0 minor:827 fsType:overlay blockSize:0} overlay_0-847:{mountpoint:/var/lib/containers/storage/overlay/e6163acc240e6006e34e91204249d09b338bc0b00f8db48b56d5a3c876cb4e39/merged major:0 minor:847 fsType:overlay blockSize:0} overlay_0-849:{mountpoint:/var/lib/containers/storage/overlay/7099dd0caae5d4246b4cc8331b341ce1092921c8dade9fd68b2ff529164f3334/merged major:0 minor:849 fsType:overlay blockSize:0} overlay_0-85:{mountpoint:/var/lib/containers/storage/overlay/2156fd75b2cd72151db86b8df0b24ffdf52d539ce7c3aa2f48da5865eca01f83/merged major:0 minor:85 fsType:overlay blockSize:0} 
overlay_0-851:{mountpoint:/var/lib/containers/storage/overlay/8e212dac6b50535e7e11613e896fd02ba65fc6c756b813881b3b733e6f7094e1/merged major:0 minor:851 fsType:overlay blockSize:0} overlay_0-852:{mountpoint:/var/lib/containers/storage/overlay/33bf07b9b8f805d960f9d6913859490a87fc02691fee67096c2b946f1eb6c5e2/merged major:0 minor:852 fsType:overlay blockSize:0} overlay_0-86:{mountpoint:/var/lib/containers/storage/overlay/81d9dd1290cd5e5ebd911233ece487a13629a58c17558e4a906ea9315b233dcc/merged major:0 minor:86 fsType:overlay blockSize:0} overlay_0-861:{mountpoint:/var/lib/containers/storage/overlay/e8a7db49c1d0cd5a5b4f2a1b7aef185b199ac8a53ee1b1ca5a0764f27b325d83/merged major:0 minor:861 fsType:overlay blockSize:0} overlay_0-862:{mountpoint:/var/lib/containers/storage/overlay/98786b035ed9a6a76aae3ff42d643d6a614be8f4c426dca6529de27842086ca5/merged major:0 minor:862 fsType:overlay blockSize:0} overlay_0-869:{mountpoint:/var/lib/containers/storage/overlay/e81bb01f45fd14075f362420aff7ae714a3d1d9bee94fc9f5c7219c58ed78ec2/merged major:0 minor:869 fsType:overlay blockSize:0} overlay_0-87:{mountpoint:/var/lib/containers/storage/overlay/8a52052d6256073ee4b5fc40e24c4395421d3d5db91eff4e4c460ae8b8b74d60/merged major:0 minor:87 fsType:overlay blockSize:0} overlay_0-871:{mountpoint:/var/lib/containers/storage/overlay/340959fda816495ccdf0fd06bc273b74a18ffe4fecb39ee89759ca88e95a0cab/merged major:0 minor:871 fsType:overlay blockSize:0} overlay_0-880:{mountpoint:/var/lib/containers/storage/overlay/8e34ca664a6d39ae035e7db627af99d328e3579710c431d5fff68d7a2fc99a06/merged major:0 minor:880 fsType:overlay blockSize:0} overlay_0-882:{mountpoint:/var/lib/containers/storage/overlay/d5e33606976eb1472f3b99f5c71ec8514f2982ff80adf75375268210140f3de3/merged major:0 minor:882 fsType:overlay blockSize:0} overlay_0-884:{mountpoint:/var/lib/containers/storage/overlay/cfe899b7f2c6f72c3216754ef783bf55f3490532a0512fc553bf5f690e9f792b/merged major:0 
minor:884 fsType:overlay blockSize:0} overlay_0-89:{mountpoint:/var/lib/containers/storage/overlay/41ba7b832406bfabfbab6daacd9841f765d8c05c958ba533a3332b35c6e0bd6f/merged major:0 minor:89 fsType:overlay blockSize:0} overlay_0-891:{mountpoint:/var/lib/containers/storage/overlay/3d8383855aee2c254330832f814f533a685b221013178811e4913064359ea625/merged major:0 minor:891 fsType:overlay blockSize:0} overlay_0-900:{mountpoint:/var/lib/containers/storage/overlay/975fc8fdd441ca5fd74039e58cf59a3952217fb22a3052c498053653e9812714/merged major:0 minor:900 fsType:overlay blockSize:0} overlay_0-902:{mountpoint:/var/lib/containers/storage/overlay/578453000de1ee2f935135134071d94bb11d3dfb02e4aa6f2b9b42497dd83ba3/merged major:0 minor:902 fsType:overlay blockSize:0} overlay_0-91:{mountpoint:/var/lib/containers/storage/overlay/333d4095cef75ff69d57a0b36e5355541c1214b07d2536c25a634fd5aae9f922/merged major:0 minor:91 fsType:overlay blockSize:0} overlay_0-910:{mountpoint:/var/lib/containers/storage/overlay/ff8aba6d5fdf0781da3f355183752beb50add58d96937e31d12c1df11a9f198b/merged major:0 minor:910 fsType:overlay blockSize:0} overlay_0-916:{mountpoint:/var/lib/containers/storage/overlay/ed72eeefc82e75e251366da2eb1ff7781143c11045a4c26365c2768e93f14c77/merged major:0 minor:916 fsType:overlay blockSize:0} overlay_0-918:{mountpoint:/var/lib/containers/storage/overlay/ad429e44291c2f7357f24c353f6dd8a105282a4b2aa58ae7b290e2a8394cbe15/merged major:0 minor:918 fsType:overlay blockSize:0} overlay_0-925:{mountpoint:/var/lib/containers/storage/overlay/8b334255460e8a7d5a59c01ce3f9e3d7b0a61c9b5b20a3210e4e84f92d228577/merged major:0 minor:925 fsType:overlay blockSize:0} overlay_0-928:{mountpoint:/var/lib/containers/storage/overlay/eb7dbf039c907483773d43e1f8568273f8a8f139711107e2300ca4d71e1cdee8/merged major:0 minor:928 fsType:overlay blockSize:0} overlay_0-930:{mountpoint:/var/lib/containers/storage/overlay/004fd23b551fe8bd3fc033e9c11f784db22d5cfc617afcc41d5ddb7a7657a4d6/merged major:0 minor:930 
fsType:overlay blockSize:0} overlay_0-934:{mountpoint:/var/lib/containers/storage/overlay/cc14c63dc00ce2072d546c9b8a0e189a7588ce410a1da62a411f464072c90b3b/merged major:0 minor:934 fsType:overlay blockSize:0} overlay_0-941:{mountpoint:/var/lib/containers/storage/overlay/adf409d76516ce4154b938a1cd9fa6d1efd0dd4858faea523f972ba9de27ee42/merged major:0 minor:941 fsType:overlay blockSize:0} overlay_0-950:{mountpoint:/var/lib/containers/storage/overlay/af08551a8a2ebc8ea00541d4295268c76fc2e0610c9f18601a3478dd8eb48712/merged major:0 minor:950 fsType:overlay blockSize:0} overlay_0-967:{mountpoint:/var/lib/containers/storage/overlay/e474a1d0b7e585f16ca540cb8a7c4a537a5604cd73c9e2c4b457500d0a4363e4/merged major:0 minor:967 fsType:overlay blockSize:0} overlay_0-971:{mountpoint:/var/lib/containers/storage/overlay/5c9c6d65b7720536e7a06a3d02c5b303857463a533956243eec10fa9fded43be/merged major:0 minor:971 fsType:overlay blockSize:0} overlay_0-976:{mountpoint:/var/lib/containers/storage/overlay/f71017e809d146f1c3d41957393d00c9aabb4e00d6df4824d8649a0d638cc5a7/merged major:0 minor:976 fsType:overlay blockSize:0} overlay_0-979:{mountpoint:/var/lib/containers/storage/overlay/d345a569109b06127957ead75b46e70f7da97ea039b20f6437675a456bd6d093/merged major:0 minor:979 fsType:overlay blockSize:0} overlay_0-995:{mountpoint:/var/lib/containers/storage/overlay/7450dc66a963373376c75c99bfef6a45c43f9c5549ab77d0ff71a66c5abb510b/merged major:0 minor:995 fsType:overlay blockSize:0} overlay_0-997:{mountpoint:/var/lib/containers/storage/overlay/45c55051e449f6e468ce6c8e81e732da247fee5985ab9eb0e1f2610456891b96/merged major:0 minor:997 fsType:overlay blockSize:0} overlay_0-999:{mountpoint:/var/lib/containers/storage/overlay/d38a85366e0d3b2018b64735bb80bbeb37b9fb500d049beba20a3ad71d2d2e7f/merged major:0 minor:999 fsType:overlay blockSize:0}] Mar 12 21:08:59.116287 master-0 kubenswrapper[31456]: I0312 21:08:59.113586 31456 manager.go:217] Machine: {Timestamp:2026-03-12 21:08:59.112872663 +0000 UTC 
m=+0.187478021 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ab6ae3a9e07f4bbcb7f4f9a490c6dc9c SystemUUID:ab6ae3a9-e07f-4bbc-b7f4-f9a490c6dc9c BootID:a78965b5-30ee-4294-b02c-530634422611 Filesystems:[{Device:/var/lib/kubelet/pods/c3daeefa-7842-464c-a6c9-01b44ebea477/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:126 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/12893a728732446f94ca8814579a35744128ccd4319c3c765ac2be173f953384/userdata/shm DeviceMajor:0 DeviceMinor:773 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/ed1c4da2-564b-4354-a4ec-27b801098aa5/volumes/kubernetes.io~secret/openshift-state-metrics-tls DeviceMajor:0 DeviceMinor:1087 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/07542516-49c8-4e20-9b97-798fbff850a5/volumes/kubernetes.io~projected/kube-api-access-z9xld DeviceMajor:0 DeviceMinor:227 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/7623a5c6-47a9-4b75-bda8-c0a2d7c67272/volumes/kubernetes.io~projected/kube-api-access-q78vj DeviceMajor:0 DeviceMinor:250 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6919d90a2e2669ba0985487b4cab45d215f7a919ba3e052db5e778a615204f87/userdata/shm DeviceMajor:0 DeviceMinor:424 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a2cd6729990b276c87e661d147e85e91d6d87584a9d3a473b3bb2dc19de5c406/userdata/shm DeviceMajor:0 DeviceMinor:1014 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-448 DeviceMajor:0 
DeviceMinor:448 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/980191fe-c62c-4b9e-879c-38fa8ce0a58b/volumes/kubernetes.io~projected/kube-api-access-2wt5q DeviceMajor:0 DeviceMinor:264 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/25866ff2-ce38-4bc0-83d9-0d85b8c6b0ce/volumes/kubernetes.io~projected/kube-api-access-vcmzz DeviceMajor:0 DeviceMinor:576 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-489 DeviceMajor:0 DeviceMinor:489 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1094 DeviceMajor:0 DeviceMinor:1094 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-86 DeviceMajor:0 DeviceMinor:86 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d/volumes/kubernetes.io~projected/kube-api-access-577p4 DeviceMajor:0 DeviceMinor:257 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6d3cc45d111f33e3f3fcc00ad24e6a827694e4469e606ceb048673100ef08c81/userdata/shm DeviceMajor:0 DeviceMinor:383 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/565b353628a1ea63b479d26fa571cd76b79a30c51d66ca013ff8e18be2cee52e/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fafb7230532430a0db8a7bc3a9035465334c92f98efee0c32c29c3f4d6ecbcfd/userdata/shm DeviceMajor:0 DeviceMinor:378 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/ed1c4da2-564b-4354-a4ec-27b801098aa5/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1074 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1178 DeviceMajor:0 
DeviceMinor:1178 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-583 DeviceMajor:0 DeviceMinor:583 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-637 DeviceMajor:0 DeviceMinor:637 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-930 DeviceMajor:0 DeviceMinor:930 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/97b35cbaeb5726da86bcc4b7893b21ef73fbc6ccdec24f0c3f1962ec85e18df4/userdata/shm DeviceMajor:0 DeviceMinor:288 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/8b96dd10-18a0-49f8-b488-63fc2b23da39/volumes/kubernetes.io~projected/kube-api-access-nhhdz DeviceMajor:0 DeviceMinor:538 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-880 DeviceMajor:0 DeviceMinor:880 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7667a111-e744-47b2-9603-3864347dc738/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1075 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/36bd483b-292e-4e82-99d6-daa612cd385a/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:462 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-502 DeviceMajor:0 DeviceMinor:502 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/edf68201b8db3425cf21f5fe04a38b1fb9194e82ba3d64c623597064ff3f5fa4/userdata/shm DeviceMajor:0 DeviceMinor:777 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/var/lib/kubelet/pods/a3828a1d-8180-4c7b-b423-4488f7fc0b76/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:1011 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/4ebc9ee1-3913-4112-bb3f-c79f2c08032b/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1079 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-357 DeviceMajor:0 DeviceMinor:357 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-413 DeviceMajor:0 DeviceMinor:413 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4ebc9ee1-3913-4112-bb3f-c79f2c08032b/volumes/kubernetes.io~projected/kube-api-access-7gg7v DeviceMajor:0 DeviceMinor:1080 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1144 DeviceMajor:0 DeviceMinor:1144 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-171 DeviceMajor:0 DeviceMinor:171 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7623a5c6-47a9-4b75-bda8-c0a2d7c67272/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:234 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ce789d8b3134f292701ad6a9879595b336f1a9ddf70665a346e7b380d821900d/userdata/shm DeviceMajor:0 DeviceMinor:619 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-652 DeviceMajor:0 DeviceMinor:652 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b8aa8296-ed9b-4b37-8ab4-791b1342140f/volumes/kubernetes.io~secret/webhook-certs DeviceMajor:0 DeviceMinor:1182 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-432 DeviceMajor:0 DeviceMinor:432 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/bc2a01a11374dd8c2befdb90180bc8b98e8fb814dfdade15e6058739f337ecd2/userdata/shm DeviceMajor:0 DeviceMinor:129 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/c3daeefa-7842-464c-a6c9-01b44ebea477/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/226cb3a1-984f-4410-96e6-c007131dc074/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:215 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/2b71f537-1cc2-4645-8e50-23941635457c/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:268 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0f3550a8aec9a486ca0cee3183a0d557f3a6f7dd69b026fe601996e8ee871591/userdata/shm DeviceMajor:0 DeviceMinor:834 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/17d2bb40-74e2-4894-a884-7018952bdf71/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:810 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a5615eeaf32fd2c079e657b23ae7216d539735aa3d68b4892382d2e003032d83/userdata/shm DeviceMajor:0 DeviceMinor:235 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-454 DeviceMajor:0 DeviceMinor:454 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/36bd483b-292e-4e82-99d6-daa612cd385a/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:463 Capacity:32475525120 Type:vfs 
Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8792e1c546b62b1a483dc750f90553c923da596394a484fb6a82db67b2323633/userdata/shm DeviceMajor:0 DeviceMinor:581 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-780 DeviceMajor:0 DeviceMinor:780 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1088 DeviceMajor:0 DeviceMinor:1088 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d35f6aa2489bfe5ece464bdc50b627c81cafeea69d0bf73d6d68ef8609126cf5/userdata/shm DeviceMajor:0 DeviceMinor:587 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-476 DeviceMajor:0 DeviceMinor:476 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/855747e5-d9b4-4eef-8bc4-425d6a8e95c7/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:441 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dceda9f22432bfb30ffe8ed6d05ecae6347a12a0c13f74fa12350cf55152eae6/userdata/shm DeviceMajor:0 DeviceMinor:363 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8/volumes/kubernetes.io~projected/kube-api-access-7bk7q DeviceMajor:0 DeviceMinor:118 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9/volumes/kubernetes.io~secret/node-tuning-operator-tls DeviceMajor:0 DeviceMinor:440 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/b71376ea-e248-48fc-b2c4-1de7236ddd31/volumes/kubernetes.io~projected/kube-api-access-nlrzs DeviceMajor:0 DeviceMinor:838 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/d9152bd6-f203-469b-97fa-db274e43b40c/volumes/kubernetes.io~projected/kube-api-access-q9txs 
DeviceMajor:0 DeviceMinor:913 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/07330030-487d-4fa6-b5c3-67607355bbba/volumes/kubernetes.io~projected/kube-api-access-bhcsd DeviceMajor:0 DeviceMinor:223 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/35cbca359bb8cc6540d875e41fda798cb28c0b21e42a0439c798f577e385a0d1/userdata/shm DeviceMajor:0 DeviceMinor:765 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-902 DeviceMajor:0 DeviceMinor:902 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-47 DeviceMajor:0 DeviceMinor:47 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-391 DeviceMajor:0 DeviceMinor:391 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9c3da632c5f18897e9ef4fc639ad267aa15c88d97788e82ab67a1bdff6b3ccb6/userdata/shm DeviceMajor:0 DeviceMinor:539 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f4b0dd69b886e5f463ddbfe21af30a9ab10c6d6220d953b37096923c42ae0c57/userdata/shm DeviceMajor:0 DeviceMinor:844 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-871 DeviceMajor:0 DeviceMinor:871 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/33beea0b-f77b-4388-a9c8-5710f084f961/volumes/kubernetes.io~secret/secret-metrics-server-tls DeviceMajor:0 DeviceMinor:1137 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-557 DeviceMajor:0 DeviceMinor:557 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-466 
DeviceMajor:0 DeviceMinor:466 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-862 DeviceMajor:0 DeviceMinor:862 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-74 DeviceMajor:0 DeviceMinor:74 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/aa41b0d7c32641cd054893d0403c77199788601eccf56bdc2a5e82822618fbea/userdata/shm DeviceMajor:0 DeviceMinor:415 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-786 DeviceMajor:0 DeviceMinor:786 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/90f0e4da-71d4-4c4e-a2fc-9ef588daaf72/volumes/kubernetes.io~projected/kube-api-access-2rfn6 DeviceMajor:0 DeviceMinor:992 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1027 DeviceMajor:0 DeviceMinor:1027 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1092 DeviceMajor:0 DeviceMinor:1092 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-428 DeviceMajor:0 DeviceMinor:428 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/52839a08-0871-44d3-9d22-b2f6b4383b99/volumes/kubernetes.io~projected/kube-api-access-hlt7h DeviceMajor:0 DeviceMinor:534 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/05fd1378-3935-4caf-96c5-17cf7e29417f/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert DeviceMajor:0 DeviceMinor:812 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2604b035-853c-42b7-a562-07d46178868a/volumes/kubernetes.io~projected/kube-api-access-clp9l DeviceMajor:0 DeviceMinor:225 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/ab3264a789b92ca41d23ea4b05704ed36eafff91e5d534902cad1c3bfa2f9b9e/userdata/shm DeviceMajor:0 DeviceMinor:247 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/02649264-040a-41a6-9a41-8bf6416c68ff/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls DeviceMajor:0 DeviceMinor:605 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1128 DeviceMajor:0 DeviceMinor:1128 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-775 DeviceMajor:0 DeviceMinor:775 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/98d99166-c42a-4169-87e8-4209570aec50/volumes/kubernetes.io~projected/kube-api-access-258hz DeviceMajor:0 DeviceMinor:216 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/54184647-6e9a-43f7-90b1-5d8815f8b1ab/volumes/kubernetes.io~projected/kube-api-access-kzwrw DeviceMajor:0 DeviceMinor:221 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-273 DeviceMajor:0 DeviceMinor:273 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-301 DeviceMajor:0 DeviceMinor:301 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a1961e84ee3c3ec3f1933eb0bcae9c2d6f72599a10fb64dc194d15bf1b838126/userdata/shm DeviceMajor:0 DeviceMinor:614 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/067fdca7-c61d-470c-8421-73e0b62df3e4/volumes/kubernetes.io~projected/kube-api-access-tm7d5 DeviceMajor:0 DeviceMinor:787 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-484 DeviceMajor:0 DeviceMinor:484 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-852 DeviceMajor:0 DeviceMinor:852 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/d850d441-7505-4e81-b4cf-6e7a9911ae35/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:83 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-336 DeviceMajor:0 DeviceMinor:336 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-545 DeviceMajor:0 DeviceMinor:545 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b7229c42-b6bc-4ea9-946c-71a4117f53e9/volumes/kubernetes.io~projected/kube-api-access-xx5m2 DeviceMajor:0 DeviceMinor:494 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f32413943fd7e46b94ba71c016cbccc87f018a39f90dbf119089416f4d147bd9/userdata/shm DeviceMajor:0 DeviceMinor:769 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4c950507e89f9d50ecc81fde55a0e288bca97183fc18e65a4bf636fb9e195662/userdata/shm DeviceMajor:0 DeviceMinor:1196 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-655 DeviceMajor:0 DeviceMinor:655 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/400a13b5-c489-4beb-af33-94e635b86148/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 DeviceMinor:897 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-971 DeviceMajor:0 DeviceMinor:971 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a3bebf49-1d92-4353-b84c-91ed86b7bb94/volumes/kubernetes.io~projected/kube-api-access-2w68c DeviceMajor:0 DeviceMinor:218 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/855747e5-d9b4-4eef-8bc4-425d6a8e95c7/volumes/kubernetes.io~projected/kube-api-access-6j7lq DeviceMajor:0 DeviceMinor:226 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/var/lib/kubelet/pods/96bd86df-2101-47f5-844b-1332261c66f1/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:251 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/cf33c432-db42-4c6d-8ee4-f089e5bf8203/volumes/kubernetes.io~projected/kube-api-access-x8hp5 DeviceMajor:0 DeviceMinor:533 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/15ebfbd8-0782-431a-88a3-83af328498d2/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:214 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/96bd86df-2101-47f5-844b-1332261c66f1/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:230 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6353db57cf3b1f293a822286253318b9d39e924d2e8facf90ba120b1780e8395/userdata/shm DeviceMajor:0 DeviceMinor:42 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-430 DeviceMajor:0 DeviceMinor:430 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-194 DeviceMajor:0 DeviceMinor:194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-554 DeviceMajor:0 DeviceMinor:554 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7f3afe47-c537-420c-b5be-1cad612e119d/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert DeviceMajor:0 DeviceMinor:756 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-724 DeviceMajor:0 DeviceMinor:724 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-976 DeviceMajor:0 DeviceMinor:976 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/70e54b24-bf9d-42a8-b012-c7b073c6f6a6/volumes/kubernetes.io~projected/kube-api-access-mfsvw DeviceMajor:0 DeviceMinor:94 
Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/900228dd-2d21-4759-87da-b027b0134ad8/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:256 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/e03d34d0-f7c1-4dcf-8b84-89ad647cc10f/volumes/kubernetes.io~projected/kube-api-access-8ddw4 DeviceMajor:0 DeviceMinor:791 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-882 DeviceMajor:0 DeviceMinor:882 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-591 DeviceMajor:0 DeviceMinor:591 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1151 DeviceMajor:0 DeviceMinor:1151 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/07542516-49c8-4e20-9b97-798fbff850a5/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:209 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-387 DeviceMajor:0 DeviceMinor:387 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-666 DeviceMajor:0 DeviceMinor:666 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-891 DeviceMajor:0 DeviceMinor:891 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-928 DeviceMajor:0 DeviceMinor:928 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b6f3e501ba06ed994745a6acdc066748befa97da97704898903460cb6ea2f103/userdata/shm DeviceMajor:0 DeviceMinor:266 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:426 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/var/lib/kubelet/pods/067fdca7-c61d-470c-8421-73e0b62df3e4/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:782 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-112 DeviceMajor:0 DeviceMinor:112 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-295 DeviceMajor:0 DeviceMinor:295 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1083 DeviceMajor:0 DeviceMinor:1083 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc/volumes/kubernetes.io~projected/kube-api-access-n555w DeviceMajor:0 DeviceMinor:768 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/7667a111-e744-47b2-9603-3864347dc738/volumes/kubernetes.io~projected/kube-api-access-mp84p DeviceMajor:0 DeviceMinor:1077 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/e624e623-6d59-444d-b548-165fa5fd2581/volumes/kubernetes.io~projected/kube-api-access-c5c6t DeviceMajor:0 DeviceMinor:224 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-297 DeviceMajor:0 DeviceMinor:297 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f8467055-c9c9-4485-bb60-9a79e8b91268/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:480 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/334e8afc68a931f6350a0d282fa03b4333bfc31875bef1101770c4d5b423d760/userdata/shm DeviceMajor:0 DeviceMinor:373 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/var/lib/kubelet/pods/2b71f537-1cc2-4645-8e50-23941635457c/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:436 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1052 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1202 DeviceMajor:0 DeviceMinor:1202 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-195 DeviceMajor:0 DeviceMinor:195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/784599a3-a2ac-46ac-a4b7-9439704646cc/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:229 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/5471994f-769e-4124-b7d0-01f5358fc18f/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:236 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-678 DeviceMajor:0 DeviceMinor:678 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1050 DeviceMajor:0 DeviceMinor:1050 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5ad63582-bd60-41a1-9622-ee73ccf8a5e8/volumes/kubernetes.io~projected/kube-api-access-csxwl DeviceMajor:0 DeviceMinor:317 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-89 DeviceMajor:0 DeviceMinor:89 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-611 DeviceMajor:0 DeviceMinor:611 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1064 DeviceMajor:0 DeviceMinor:1064 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-997 DeviceMajor:0 DeviceMinor:997 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-46 
DeviceMajor:0 DeviceMinor:46 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-916 DeviceMajor:0 DeviceMinor:916 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f8467055-c9c9-4485-bb60-9a79e8b91268/volumes/kubernetes.io~projected/kube-api-access-gp4mt DeviceMajor:0 DeviceMinor:767 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d50dfd713474f3f9326230f15b9aa86b517e198f4cbc3bcfca21ce09a517313c/userdata/shm DeviceMajor:0 DeviceMinor:1081 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-900 DeviceMajor:0 DeviceMinor:900 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d4a162d4-8086-4bcf-854d-7e6cd37fd4c7/volumes/kubernetes.io~projected/kube-api-access-mfspc DeviceMajor:0 DeviceMinor:377 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1096 DeviceMajor:0 DeviceMinor:1096 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-262 DeviceMajor:0 DeviceMinor:262 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-375 DeviceMajor:0 DeviceMinor:375 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2367b2036b6ee449144934121f0846ae9e3677f2ee334526852b810631391c36/userdata/shm DeviceMajor:0 DeviceMinor:620 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/17d2bb40-74e2-4894-a884-7018952bdf71/volumes/kubernetes.io~projected/kube-api-access-lrm2z DeviceMajor:0 DeviceMinor:837 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/b8aa8296-ed9b-4b37-8ab4-791b1342140f/volumes/kubernetes.io~projected/kube-api-access-nbcts DeviceMajor:0 DeviceMinor:1195 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/var/lib/kubelet/pods/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8436e30f10a58f1975835cc423f1f4b55df282dbfa2eb60a4b2dbe459e6cb442/userdata/shm DeviceMajor:0 DeviceMinor:612 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/12fa39eea6eac82ab52e3e2f0cc03926c83f1f0666197d18963fd6a4f403e0a3/userdata/shm DeviceMajor:0 DeviceMinor:898 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bc93b3cd44963703c77eaa6364e36c15a950d185dbccf5b3377bd9dda6a701b9/userdata/shm DeviceMajor:0 DeviceMinor:1058 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/4ebc9ee1-3913-4112-bb3f-c79f2c08032b/volumes/kubernetes.io~secret/kube-state-metrics-tls DeviceMajor:0 DeviceMinor:1078 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/33beea0b-f77b-4388-a9c8-5710f084f961/volumes/kubernetes.io~secret/secret-metrics-client-certs DeviceMajor:0 DeviceMinor:1138 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dbdf068459da915aaa15b95a36d6ccf7790078f4c1daee68e40bbaf77ad0787e/userdata/shm DeviceMajor:0 DeviceMinor:260 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/83368183-0368-44b1-9387-eed32b211988/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:580 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-147 DeviceMajor:0 DeviceMinor:147 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-726 DeviceMajor:0 DeviceMinor:726 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-53 DeviceMajor:0 DeviceMinor:53 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/980191fe-c62c-4b9e-879c-38fa8ce0a58b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:228 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/617f0f9c-50d5-4214-b30f-5110fd4399ec/volumes/kubernetes.io~projected/kube-api-access-f2r2r DeviceMajor:0 DeviceMinor:252 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-507 DeviceMajor:0 DeviceMinor:507 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7667a111-e744-47b2-9603-3864347dc738/volumes/kubernetes.io~secret/node-exporter-tls DeviceMajor:0 DeviceMinor:1070 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-299 DeviceMajor:0 DeviceMinor:299 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-170 DeviceMajor:0 DeviceMinor:170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/67e68ff0-f54d-4973-bbe7-ed43ce542bc0/volumes/kubernetes.io~projected/kube-api-access-tpf99 DeviceMajor:0 DeviceMinor:820 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3f2fe9b256b0661c08a4a3ada19e5a95335c69cff21bdc38412e044b0f329672/userdata/shm DeviceMajor:0 DeviceMinor:1020 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b9e3c21b0a8fb441272236b28d851d401b15830eadb4fa9c4634ebc7e46a4354/userdata/shm DeviceMajor:0 DeviceMinor:720 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-467 DeviceMajor:0 DeviceMinor:467 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9/volumes/kubernetes.io~projected/kube-api-access-2lltk DeviceMajor:0 DeviceMinor:220 Capacity:32475525120 Type:vfs Inodes:4108169 
HasInodes:true} {Device:/run/containers/storage/overlay-containers/a8a8fe5d5bb4822dd7daf58bc0b49057e47a6aa6fcd9e303e14168c98652cb42/userdata/shm DeviceMajor:0 DeviceMinor:841 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/a3bebf49-1d92-4353-b84c-91ed86b7bb94/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:213 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/784599a3-a2ac-46ac-a4b7-9439704646cc/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:255 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/61b0f018a3d165e925dd9889884b291a368122b4453e40fac0dc068c3a518630/userdata/shm DeviceMajor:0 DeviceMinor:382 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/a3828a1d-8180-4c7b-b423-4488f7fc0b76/volumes/kubernetes.io~secret/default-certificate DeviceMajor:0 DeviceMinor:1009 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1060 DeviceMajor:0 DeviceMinor:1060 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-351 DeviceMajor:0 DeviceMinor:351 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-360 DeviceMajor:0 DeviceMinor:360 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-676 DeviceMajor:0 DeviceMinor:676 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-719 DeviceMajor:0 DeviceMinor:719 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1002 DeviceMajor:0 DeviceMinor:1002 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/85f9c6fdf5bd5b95a4e9ca273a39f24bdd11f231f86bdf7cf1f6b3ef19542031/userdata/shm DeviceMajor:0 DeviceMinor:328 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-849 DeviceMajor:0 DeviceMinor:849 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bf1fca480b54d4cfe929b5e83abff120bff7b90a008395758afbaeaea08fe4d6/userdata/shm DeviceMajor:0 DeviceMinor:239 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6b1f470bfc702853e69b48b7d0f79deb1d8d72a0d84adbdf6326a6040a96126e/userdata/shm DeviceMajor:0 DeviceMinor:633 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e75e7b353307791eba0dce2c76a1443a45ff7401d92e0d636bcfdc09677d8a67/userdata/shm DeviceMajor:0 DeviceMinor:104 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:410 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-87 DeviceMajor:0 DeviceMinor:87 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1033 DeviceMajor:0 DeviceMinor:1033 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/83368183-0368-44b1-9387-eed32b211988/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:579 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/305e45867f0f5c512d8dca3c39de15088c17eab90b2969aafd739643c4b112ce/userdata/shm DeviceMajor:0 DeviceMinor:93 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} 
{Device:/var/lib/kubelet/pods/15ebfbd8-0782-431a-88a3-83af328498d2/volumes/kubernetes.io~projected/kube-api-access-mbbc5 DeviceMajor:0 DeviceMinor:222 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-588 DeviceMajor:0 DeviceMinor:588 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9fe52a43f1e5ba1f28f24b6e5dc055fff1fcd846370585df5e4104b5c4279d2e/userdata/shm DeviceMajor:0 DeviceMinor:993 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-91 DeviceMajor:0 DeviceMinor:91 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-672 DeviceMajor:0 DeviceMinor:672 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-941 DeviceMajor:0 DeviceMinor:941 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1034 DeviceMajor:0 DeviceMinor:1034 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-304 DeviceMajor:0 DeviceMinor:304 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-764 DeviceMajor:0 DeviceMinor:764 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/cc7b96ab-01af-442a-8eda-fc59e665a367/volumes/kubernetes.io~projected/kube-api-access-vwqbt DeviceMajor:0 DeviceMinor:1013 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/5471994f-769e-4124-b7d0-01f5358fc18f/volumes/kubernetes.io~projected/kube-api-access-f7rrv DeviceMajor:0 DeviceMinor:238 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-995 DeviceMajor:0 DeviceMinor:995 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/cf33c432-db42-4c6d-8ee4-f089e5bf8203/volumes/kubernetes.io~secret/catalogserver-certs DeviceMajor:0 DeviceMinor:532 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:overlay_0-662 DeviceMajor:0 DeviceMinor:662 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4c589179-0df4-4fe8-bfdd-965c3e7652c5/volumes/kubernetes.io~projected/kube-api-access-pbqfz DeviceMajor:0 DeviceMinor:772 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/32050f14-1939-41bf-a824-22016b90c189/volumes/kubernetes.io~secret/samples-operator-tls DeviceMajor:0 DeviceMinor:402 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/32050f14-1939-41bf-a824-22016b90c189/volumes/kubernetes.io~projected/kube-api-access-pbnbs DeviceMajor:0 DeviceMinor:403 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/02649264-040a-41a6-9a41-8bf6416c68ff/volumes/kubernetes.io~projected/kube-api-access-k5v9f DeviceMajor:0 DeviceMinor:219 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/5471994f-769e-4124-b7d0-01f5358fc18f/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:231 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c4103685c4d0722261aeabd4bc116d1842263bbc5e10dfb2b17ca8f9a32f7e85/userdata/shm DeviceMajor:0 DeviceMinor:287 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-291 DeviceMajor:0 DeviceMinor:291 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/05fd1378-3935-4caf-96c5-17cf7e29417f/volumes/kubernetes.io~projected/kube-api-access-8xxkr DeviceMajor:0 DeviceMinor:826 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99/volumes/kubernetes.io~secret/prometheus-operator-tls DeviceMajor:0 DeviceMinor:1057 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-737 DeviceMajor:0 DeviceMinor:737 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-277 DeviceMajor:0 DeviceMinor:277 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/135ec6f3-fbc0-4840-a4b1-c1124c705161/volumes/kubernetes.io~secret/signing-key DeviceMajor:0 DeviceMinor:384 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/31747c5d-7e29-4a74-b8d5-3d8efa5e900b/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:578 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-706 DeviceMajor:0 DeviceMinor:706 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/508cb83e-6f25-4235-8c56-b25b762ebcad/volumes/kubernetes.io~projected/kube-api-access-s4jzt DeviceMajor:0 DeviceMinor:819 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/17d2bb40-74e2-4894-a884-7018952bdf71/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls DeviceMajor:0 DeviceMinor:836 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1040 DeviceMajor:0 DeviceMinor:1040 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dd04b8d751040cd7b439f04efd47f1ce311ca66ebabc5940831335b95351810c/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/8b96dd10-18a0-49f8-b488-63fc2b23da39/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:535 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/426efd5c-69e1-43e5-835a-6e1c4ef85720/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:138 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:232 Capacity:32475525120 Type:vfs 
Inodes:4108169 HasInodes:true} {Device:overlay_0-389 DeviceMajor:0 DeviceMinor:389 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/36bd483b-292e-4e82-99d6-daa612cd385a/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:421 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-675 DeviceMajor:0 DeviceMinor:675 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1198 DeviceMajor:0 DeviceMinor:1198 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c5a1c27c4b2c6ff820b190b8052ccd7411bb25c93bd0787d8acd418bb486bfe0/userdata/shm DeviceMajor:0 DeviceMinor:119 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-349 DeviceMajor:0 DeviceMinor:349 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-70 DeviceMajor:0 DeviceMinor:70 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-418 DeviceMajor:0 DeviceMinor:418 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-166 DeviceMajor:0 DeviceMinor:166 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-110 DeviceMajor:0 DeviceMinor:110 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2b71f537-1cc2-4645-8e50-23941635457c/volumes/kubernetes.io~projected/kube-api-access-8vvf6 DeviceMajor:0 DeviceMinor:243 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:265 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-918 DeviceMajor:0 DeviceMinor:918 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1048 DeviceMajor:0 DeviceMinor:1048 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-85 DeviceMajor:0 DeviceMinor:85 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-510 DeviceMajor:0 DeviceMinor:510 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b851c1c34b6e9c4cbd3df824f0b5a05e417c5cb1b92ad2b7f01061d2a5c5d6b3/userdata/shm DeviceMajor:0 DeviceMinor:543 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/7f3afe47-c537-420c-b5be-1cad612e119d/volumes/kubernetes.io~projected/kube-api-access-8745n DeviceMajor:0 DeviceMinor:763 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-366 DeviceMajor:0 DeviceMinor:366 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/07330030-487d-4fa6-b5c3-67607355bbba/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:599 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-734 DeviceMajor:0 DeviceMinor:734 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-132 DeviceMajor:0 DeviceMinor:132 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-590 DeviceMajor:0 DeviceMinor:590 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-380 DeviceMajor:0 DeviceMinor:380 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-477 DeviceMajor:0 DeviceMinor:477 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:762 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-884 DeviceMajor:0 DeviceMinor:884 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:/var/lib/kubelet/pods/52839a08-0871-44d3-9d22-b2f6b4383b99/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:530 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1112 DeviceMajor:0 DeviceMinor:1112 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-446 DeviceMajor:0 DeviceMinor:446 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/46d0cbedd7c9d9c9334e86f38207707e87d2d8302b543614490d2bc6b93e5df4/userdata/shm DeviceMajor:0 DeviceMinor:839 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/a539e1c7-3799-4d43-8f2f-d5e5c0ffd918/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:318 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/b50a6106-1112-4a4b-b4ae-933879e12110/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:69 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-41 DeviceMajor:0 DeviceMinor:41 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a3828a1d-8180-4c7b-b423-4488f7fc0b76/volumes/kubernetes.io~projected/kube-api-access-lf28c DeviceMajor:0 DeviceMinor:1012 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f6412ec366e621f5d99b6ef5fdb5da3a73dfb0709a661b8764731c1f9e4f0f11/userdata/shm DeviceMajor:0 DeviceMinor:832 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/400a13b5-c489-4beb-af33-94e635b86148/volumes/kubernetes.io~projected/kube-api-access-vt627 DeviceMajor:0 DeviceMinor:893 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:overlay_0-307 DeviceMajor:0 DeviceMinor:307 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:233 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/d6eace9f-a52d-4570-a932-959538e1f2bc/volumes/kubernetes.io~projected/kube-api-access-8l8qp DeviceMajor:0 DeviceMinor:779 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/64bbce37fffa0363fa6b0cb6661a450dd4f178dfa993fa7e87ca9427175696e1/userdata/shm DeviceMajor:0 DeviceMinor:843 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-934 DeviceMajor:0 DeviceMinor:934 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1000 DeviceMajor:0 DeviceMinor:1000 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-102 DeviceMajor:0 DeviceMinor:102 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-271 DeviceMajor:0 DeviceMinor:271 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dc9a8ab3dbf9f510346d66800b49bfb55e672501ce824087dcdec36983ec6646/userdata/shm DeviceMajor:0 DeviceMinor:830 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/a5d6705e-e564-4774-94b4-ef11956c67b2/volumes/kubernetes.io~secret/certs DeviceMajor:0 DeviceMinor:1037 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-455 DeviceMajor:0 DeviceMinor:455 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/900228dd-2d21-4759-87da-b027b0134ad8/volumes/kubernetes.io~secret/image-registry-operator-tls DeviceMajor:0 DeviceMinor:438 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1016 DeviceMajor:0 
DeviceMinor:1016 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-661 DeviceMajor:0 DeviceMinor:661 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-140 DeviceMajor:0 DeviceMinor:140 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-156 DeviceMajor:0 DeviceMinor:156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/67e68ff0-f54d-4973-bbe7-ed43ce542bc0/volumes/kubernetes.io~secret/machine-api-operator-tls DeviceMajor:0 DeviceMinor:811 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-722 DeviceMajor:0 DeviceMinor:722 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-344 DeviceMajor:0 DeviceMinor:344 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a3828a1d-8180-4c7b-b423-4488f7fc0b76/volumes/kubernetes.io~secret/stats-auth DeviceMajor:0 DeviceMinor:1005 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-396 DeviceMajor:0 DeviceMinor:396 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/52839a08-0871-44d3-9d22-b2f6b4383b99/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:529 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/e624e623-6d59-444d-b548-165fa5fd2581/volumes/kubernetes.io~secret/marketplace-operator-metrics DeviceMajor:0 DeviceMinor:609 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/17a28fbbb10b9b7c1461bf619827eeb217a3aec9b00b20b1cfd3fdd960efb363/userdata/shm DeviceMajor:0 DeviceMinor:757 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:409 Capacity:32475525120 Type:vfs Inodes:4108169 
HasInodes:true} {Device:/run/containers/storage/overlay-containers/201b5e76d89b86f520d80ea9c46f6a7725c7ca002a8f03f0377c76479fd51041/userdata/shm DeviceMajor:0 DeviceMinor:471 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1104 DeviceMajor:0 DeviceMinor:1104 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/226cb3a1-984f-4410-96e6-c007131dc074/volumes/kubernetes.io~projected/kube-api-access-b9z6l DeviceMajor:0 DeviceMinor:217 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-714 DeviceMajor:0 DeviceMinor:714 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/067fdca7-c61d-470c-8421-73e0b62df3e4/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:781 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f3a6366fc7a8173b37b93da658f97b0f0f73d75e238205a99ed16b96913fe11f/userdata/shm DeviceMajor:0 DeviceMinor:284 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/b71376ea-e248-48fc-b2c4-1de7236ddd31/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:799 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/d9152bd6-f203-469b-97fa-db274e43b40c/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:909 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1114 DeviceMajor:0 DeviceMinor:1114 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-116 DeviceMajor:0 DeviceMinor:116 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-925 DeviceMajor:0 DeviceMinor:925 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-275 DeviceMajor:0 DeviceMinor:275 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-816 DeviceMajor:0 DeviceMinor:816 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f3fa0bfd8e72d02ef09b3d76a758bf4cc154e7ad921d66404e7db2340d535749/userdata/shm DeviceMajor:0 DeviceMinor:814 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1098 DeviceMajor:0 DeviceMinor:1098 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/40ee9bfc2fa73ad9bbc5b48cb8e7af6a3e5d2c39fc5036821437c7ea979f7a69/userdata/shm DeviceMajor:0 DeviceMinor:142 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a9ba476328193f4cef8e964926dcec3d1d9ce3f4dd043deca9d859ee90a08d2e/userdata/shm DeviceMajor:0 DeviceMinor:617 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-50 DeviceMajor:0 DeviceMinor:50 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-804 DeviceMajor:0 DeviceMinor:804 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-999 DeviceMajor:0 DeviceMinor:999 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/41cf73b537e290a684ef705b807efabb2227fb4edc604539b559ade7d235fcf5/userdata/shm DeviceMajor:0 DeviceMinor:331 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2ab45bc6351d4ec7baa95f91503a2501083a98d20ff063951989a4f266486d70/userdata/shm DeviceMajor:0 DeviceMinor:253 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/5e4d5da2d0ad5dc2858d68d96b482697435e191e20036d664e457ef5572ac29e/userdata/shm DeviceMajor:0 DeviceMinor:523 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1200 DeviceMajor:0 DeviceMinor:1200 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-610 DeviceMajor:0 DeviceMinor:610 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-324 DeviceMajor:0 DeviceMinor:324 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/873fdfa9ac893a2fcdda2a0631dc6e4eee04d1b74ee51efc77199a0762ee41f6/userdata/shm DeviceMajor:0 DeviceMinor:81 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-603 DeviceMajor:0 DeviceMinor:603 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/33beea0b-f77b-4388-a9c8-5710f084f961/volumes/kubernetes.io~secret/client-ca-bundle DeviceMajor:0 DeviceMinor:1133 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/d850d441-7505-4e81-b4cf-6e7a9911ae35/volumes/kubernetes.io~projected/kube-api-access-f2mk7 DeviceMajor:0 DeviceMinor:326 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-827 DeviceMajor:0 DeviceMinor:827 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-950 DeviceMajor:0 DeviceMinor:950 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-549 DeviceMajor:0 DeviceMinor:549 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-784 DeviceMajor:0 DeviceMinor:784 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-979 DeviceMajor:0 DeviceMinor:979 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-67 DeviceMajor:0 DeviceMinor:67 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:overlay_0-1173 DeviceMajor:0 DeviceMinor:1173 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1142 DeviceMajor:0 DeviceMinor:1142 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d862a346-ec4d-46f6-a3e2-ea8759ea0111/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:124 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-450 DeviceMajor:0 DeviceMinor:450 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d/volumes/kubernetes.io~projected/kube-api-access-qqhhz DeviceMajor:0 DeviceMinor:427 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1024 DeviceMajor:0 DeviceMinor:1024 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/135ec6f3-fbc0-4840-a4b1-c1124c705161/volumes/kubernetes.io~projected/kube-api-access-wsprq DeviceMajor:0 DeviceMinor:385 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/abeff81e503300fd28292fa3a775f0ca878a822311085f8ea3036c4d769c1e10/userdata/shm DeviceMajor:0 DeviceMinor:618 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/a5d6705e-e564-4774-94b4-ef11956c67b2/volumes/kubernetes.io~projected/kube-api-access-dkvxh DeviceMajor:0 DeviceMinor:1045 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99/volumes/kubernetes.io~projected/kube-api-access-4l2sm DeviceMajor:0 DeviceMinor:1056 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-496 DeviceMajor:0 DeviceMinor:496 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/cf33c432-db42-4c6d-8ee4-f089e5bf8203/volumes/kubernetes.io~projected/ca-certs 
DeviceMajor:0 DeviceMinor:531 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/58853bb7c55e4f38a99ccf6eb1718fea0482d914d13a64cd68997b04600a597d/userdata/shm DeviceMajor:0 DeviceMinor:245 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-293 DeviceMajor:0 DeviceMinor:293 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:437 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2fe791136ae6341fcef221b6feb3d2b2b4ae3ce3632fb3ef2ce720ffd2630304/userdata/shm DeviceMajor:0 DeviceMinor:420 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1123 DeviceMajor:0 DeviceMinor:1123 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/da40e787-dd75-4f4f-b09e-a8dab590f260/volumes/kubernetes.io~projected/kube-api-access-xg2ph DeviceMajor:0 DeviceMinor:368 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-452 DeviceMajor:0 DeviceMinor:452 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/567a9a33-1a82-4c48-b541-7e0eaae11f57/volumes/kubernetes.io~projected/kube-api-access-nzn6t DeviceMajor:0 DeviceMinor:770 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/369b6220e099e8fc73df11fb51225951b71880fdba54a4afd54d65d778f6257a/userdata/shm DeviceMajor:0 DeviceMinor:443 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-459 DeviceMajor:0 DeviceMinor:459 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/bc595277804629f6ce8a44c0869ea22a63cd054ea4073256f850bdf1615f38cf/userdata/shm DeviceMajor:0 DeviceMinor:1046 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1062 DeviceMajor:0 DeviceMinor:1062 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a539e1c7-3799-4d43-8f2f-d5e5c0ffd918/volumes/kubernetes.io~projected/kube-api-access-xth7s DeviceMajor:0 DeviceMinor:319 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/900228dd-2d21-4759-87da-b027b0134ad8/volumes/kubernetes.io~projected/kube-api-access-rvkp7 DeviceMajor:0 DeviceMinor:248 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-658 DeviceMajor:0 DeviceMinor:658 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a5d6705e-e564-4774-94b4-ef11956c67b2/volumes/kubernetes.io~secret/node-bootstrap-token DeviceMajor:0 DeviceMinor:1036 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/33beea0b-f77b-4388-a9c8-5710f084f961/volumes/kubernetes.io~projected/kube-api-access-clmjl DeviceMajor:0 DeviceMinor:1139 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/b50a6106-1112-4a4b-b4ae-933879e12110/volumes/kubernetes.io~projected/kube-api-access-bcjsq DeviceMajor:0 DeviceMinor:327 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/82318439026f9141cf283c68c9e568172986f95b3ac1b221e6be4eb35afea5e2/userdata/shm DeviceMajor:0 DeviceMinor:465 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/898949022ca2ee68db161a1e164f2382a1563f2d65322832aa8c78dd1630a7b1/userdata/shm DeviceMajor:0 
DeviceMinor:792 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-851 DeviceMajor:0 DeviceMinor:851 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-111 DeviceMajor:0 DeviceMinor:111 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c3daeefa-7842-464c-a6c9-01b44ebea477/volumes/kubernetes.io~projected/kube-api-access-jrk7w DeviceMajor:0 DeviceMinor:127 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c3b62ea86d8f9e58d8904eae05a729e79a10c095aa97e46111824c4941e548aa/userdata/shm DeviceMajor:0 DeviceMinor:1140 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/ed1c4da2-564b-4354-a4ec-27b801098aa5/volumes/kubernetes.io~projected/kube-api-access-2hvwg DeviceMajor:0 DeviceMinor:1076 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/d862a346-ec4d-46f6-a3e2-ea8759ea0111/volumes/kubernetes.io~projected/kube-api-access-jx64q DeviceMajor:0 DeviceMinor:125 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-269 DeviceMajor:0 DeviceMinor:269 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1390b30c39ad63783734786156383bb52543e66dbc0baed3a61e8662ecc9eb73/userdata/shm DeviceMajor:0 DeviceMinor:279 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-536 DeviceMajor:0 DeviceMinor:536 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/54184647-6e9a-43f7-90b1-5d8815f8b1ab/volumes/kubernetes.io~secret/package-server-manager-serving-cert DeviceMajor:0 DeviceMinor:607 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-311 DeviceMajor:0 DeviceMinor:311 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-408 DeviceMajor:0 
DeviceMinor:408 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ea7954299aa7bc681bbf2b7473af9292483dacae799b21a6511a23f7d0fb2fd7/userdata/shm DeviceMajor:0 DeviceMinor:1018 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/426efd5c-69e1-43e5-835a-6e1c4ef85720/volumes/kubernetes.io~projected/kube-api-access-8rjm8 DeviceMajor:0 DeviceMinor:139 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-134 DeviceMajor:0 DeviceMinor:134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e03d34d0-f7c1-4dcf-8b84-89ad647cc10f/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls DeviceMajor:0 DeviceMinor:789 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d7af2bce33483a4223279822e6e5d573080c8f741586108efbaab14ea100783b/userdata/shm DeviceMajor:0 DeviceMinor:914 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-515 DeviceMajor:0 DeviceMinor:515 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/31747c5d-7e29-4a74-b8d5-3d8efa5e900b/volumes/kubernetes.io~projected/kube-api-access-l2bmh DeviceMajor:0 DeviceMinor:556 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ad71740d3e827c48a8ba7f63410cca1f844bad16f5548efadd42e759d9c9b402/userdata/shm DeviceMajor:0 DeviceMinor:1085 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-164 DeviceMajor:0 DeviceMinor:164 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-559 DeviceMajor:0 DeviceMinor:559 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c8660437-633f-4132-8a61-fe998abb493e/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:608 
Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-644 DeviceMajor:0 DeviceMinor:644 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-847 DeviceMajor:0 DeviceMinor:847 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-967 DeviceMajor:0 DeviceMinor:967 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/508cb83e-6f25-4235-8c56-b25b762ebcad/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:813 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6/volumes/kubernetes.io~projected/kube-api-access-2kng9 DeviceMajor:0 DeviceMinor:98 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/36bd483b-292e-4e82-99d6-daa612cd385a/volumes/kubernetes.io~projected/kube-api-access-fmcxd DeviceMajor:0 DeviceMinor:464 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-501 DeviceMajor:0 DeviceMinor:501 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/90f16d8c-25b6-4827-85d9-0995e4e1ab15/volumes/kubernetes.io~secret/tls-certificates DeviceMajor:0 DeviceMinor:1010 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6f73967ae1577400fe9f88cbace8a06fad8c0f1241e87ba67ef6053882fba199/userdata/shm DeviceMajor:0 DeviceMinor:1090 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/98d99166-c42a-4169-87e8-4209570aec50/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:606 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-861 DeviceMajor:0 DeviceMinor:861 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/4f36004c9ae01a89eb15126614217e75dcc8e3c3bf6df3d63d91e6a8a9b96517/userdata/shm DeviceMajor:0 DeviceMinor:100 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/823ddb02eb52a72270afe5bcbabb63c3bf31ccf8ea0e97a1b51cf8b0885ea699/userdata/shm DeviceMajor:0 DeviceMinor:258 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-282 DeviceMajor:0 DeviceMinor:282 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-654 DeviceMajor:0 DeviceMinor:654 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1022 DeviceMajor:0 DeviceMinor:1022 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c8660437-633f-4132-8a61-fe998abb493e/volumes/kubernetes.io~projected/kube-api-access-zlch7 DeviceMajor:0 DeviceMinor:123 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/82c567fab92f73cc652671757659cec0bf4fd8aeb8e6762d7ba85dd0fa1eb67e/userdata/shm DeviceMajor:0 DeviceMinor:240 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/480ecceaa13fbfede6f31bb888fba0e4599aa0266514be4fa32d258ea85189de/userdata/shm DeviceMajor:0 DeviceMinor:242 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-639 DeviceMajor:0 DeviceMinor:639 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-869 DeviceMajor:0 DeviceMinor:869 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/90f0e4da-71d4-4c4e-a2fc-9ef588daaf72/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:988 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-910 DeviceMajor:0 DeviceMinor:910 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-280 
DeviceMajor:0 DeviceMinor:280 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-490 DeviceMajor:0 DeviceMinor:490 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:0f3550a8aec9a48 MacAddress:72:46:5c:e4:d1:1d Speed:10000 Mtu:8900} {Name:12893a728732446 MacAddress:ee:d8:ac:a9:18:97 Speed:10000 Mtu:8900} {Name:1390b30c39ad637 MacAddress:da:52:5e:76:9a:8e Speed:10000 Mtu:8900} {Name:17a28fbbb10b9b7 MacAddress:56:f6:58:74:a6:1f Speed:10000 Mtu:8900} {Name:201b5e76d89b86f MacAddress:26:b3:b0:1a:13:af Speed:10000 Mtu:8900} {Name:2367b2036b6ee44 MacAddress:4e:d8:2d:bf:a4:ec Speed:10000 Mtu:8900} {Name:2ab45bc6351d4ec MacAddress:be:9d:58:25:25:da Speed:10000 Mtu:8900} {Name:2fe791136ae6341 MacAddress:a6:4f:28:fb:7a:ed Speed:10000 Mtu:8900} {Name:334e8afc68a931f MacAddress:e6:09:4c:47:f5:80 Speed:10000 Mtu:8900} {Name:35cbca359bb8cc6 MacAddress:a6:bd:37:3b:d5:e7 Speed:10000 Mtu:8900} {Name:369b6220e099e8f MacAddress:a2:ee:e0:2c:26:58 Speed:10000 Mtu:8900} {Name:3f2fe9b256b0661 MacAddress:8e:06:c7:55:dd:16 Speed:10000 Mtu:8900} {Name:41cf73b537e290a MacAddress:9e:8e:d5:4d:ce:55 Speed:10000 Mtu:8900} {Name:46d0cbedd7c9d9c MacAddress:42:cf:3f:4b:b8:6d Speed:10000 Mtu:8900} {Name:480ecceaa13fbfe MacAddress:de:75:a3:66:7b:75 Speed:10000 Mtu:8900} {Name:4c950507e89f9d5 MacAddress:32:f4:cc:d6:b8:c5 Speed:10000 Mtu:8900} {Name:58853bb7c55e4f3 MacAddress:ca:70:70:0a:a1:c4 Speed:10000 Mtu:8900} {Name:5e4d5da2d0ad5dc MacAddress:c2:14:11:a9:88:da Speed:10000 Mtu:8900} {Name:61b0f018a3d165e MacAddress:c6:18:53:12:68:56 Speed:10000 Mtu:8900} {Name:64bbce37fffa036 
MacAddress:ce:32:ad:0d:a2:1c Speed:10000 Mtu:8900} {Name:6919d90a2e2669b MacAddress:3a:04:95:7a:33:95 Speed:10000 Mtu:8900} {Name:6f73967ae157740 MacAddress:6e:90:26:fe:d0:67 Speed:10000 Mtu:8900} {Name:82318439026f914 MacAddress:ee:e1:bd:02:75:4c Speed:10000 Mtu:8900} {Name:823ddb02eb52a72 MacAddress:6e:e2:5e:ac:95:7d Speed:10000 Mtu:8900} {Name:82c567fab92f73c MacAddress:a2:52:1d:d1:e2:e4 Speed:10000 Mtu:8900} {Name:8436e30f10a58f1 MacAddress:66:46:29:c9:8d:59 Speed:10000 Mtu:8900} {Name:85f9c6fdf5bd5b9 MacAddress:56:de:4d:61:8c:17 Speed:10000 Mtu:8900} {Name:898949022ca2ee6 MacAddress:8e:f6:d1:85:a1:c3 Speed:10000 Mtu:8900} {Name:97b35cbaeb5726d MacAddress:f6:0a:4b:e5:f8:15 Speed:10000 Mtu:8900} {Name:9c3da632c5f1889 MacAddress:9a:54:ac:57:30:e5 Speed:10000 Mtu:8900} {Name:9fe52a43f1e5ba1 MacAddress:ba:4f:16:b9:75:11 Speed:10000 Mtu:8900} {Name:a1961e84ee3c3ec MacAddress:5a:d4:9b:28:a4:e1 Speed:10000 Mtu:8900} {Name:a5615eeaf32fd2c MacAddress:76:cd:ed:9b:fb:c6 Speed:10000 Mtu:8900} {Name:a8a8fe5d5bb4822 MacAddress:d2:28:21:1f:1d:d4 Speed:10000 Mtu:8900} {Name:a9ba476328193f4 MacAddress:a6:f8:7e:0d:9c:e8 Speed:10000 Mtu:8900} {Name:aa41b0d7c32641c MacAddress:ca:25:ff:13:51:d0 Speed:10000 Mtu:8900} {Name:ab3264a789b92ca MacAddress:76:80:f1:c8:96:64 Speed:10000 Mtu:8900} {Name:abeff81e503300f MacAddress:be:46:34:5c:32:0e Speed:10000 Mtu:8900} {Name:ad71740d3e827c4 MacAddress:12:8a:38:27:c3:e6 Speed:10000 Mtu:8900} {Name:b6f3e501ba06ed9 MacAddress:9e:de:c2:35:2f:68 Speed:10000 Mtu:8900} {Name:b851c1c34b6e9c4 MacAddress:de:6f:69:71:1d:bf Speed:10000 Mtu:8900} {Name:b9e3c21b0a8fb44 MacAddress:4e:b2:65:41:29:4a Speed:10000 Mtu:8900} {Name:bc93b3cd4496370 MacAddress:96:1a:e5:46:1c:71 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:22:ba:f5:f1:59:96 Speed:0 Mtu:8900} {Name:c3b62ea86d8f9e5 MacAddress:8a:45:f9:f2:30:da Speed:10000 Mtu:8900} {Name:c4103685c4d0722 MacAddress:fa:56:a0:6e:22:8c Speed:10000 Mtu:8900} 
{Name:ce789d8b3134f29 MacAddress:b6:12:cc:02:2c:75 Speed:10000 Mtu:8900} {Name:d35f6aa2489bfe5 MacAddress:d6:6c:02:51:b6:73 Speed:10000 Mtu:8900} {Name:dc9a8ab3dbf9f51 MacAddress:02:0a:81:83:05:c2 Speed:10000 Mtu:8900} {Name:dceda9f22432bfb MacAddress:46:ab:ac:d8:0d:09 Speed:10000 Mtu:8900} {Name:ea7954299aa7bc6 MacAddress:de:d6:06:6a:8f:49 Speed:10000 Mtu:8900} {Name:edf68201b8db342 MacAddress:1a:2f:c3:8d:f3:4d Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:f6:7e:a8 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:36:1f:bb Speed:-1 Mtu:9000} {Name:f32413943fd7e46 MacAddress:2e:e8:30:62:a8:24 Speed:10000 Mtu:8900} {Name:f3a6366fc7a8173 MacAddress:0e:87:56:2a:26:eb Speed:10000 Mtu:8900} {Name:f4b0dd69b886e5f MacAddress:06:9c:c8:9a:10:ae Speed:10000 Mtu:8900} {Name:f6412ec366e621f MacAddress:0a:25:79:ce:fd:3e Speed:10000 Mtu:8900} {Name:fafb7230532430a MacAddress:1a:5e:eb:38:3c:d3 Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:c6:09:84:5c:c2:5e Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 
Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified 
Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 12 21:08:59.116723 master-0 kubenswrapper[31456]: I0312 21:08:59.114946 31456 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Mar 12 21:08:59.116723 master-0 kubenswrapper[31456]: I0312 21:08:59.115003 31456 manager.go:233] Version: {KernelVersion:5.14.0-427.111.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602172219-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 12 21:08:59.116723 master-0 kubenswrapper[31456]: I0312 21:08:59.115222 31456 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 12 21:08:59.116723 master-0 kubenswrapper[31456]: I0312 21:08:59.115360 31456 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 21:08:59.116723 master-0 kubenswrapper[31456]: I0312 21:08:59.115381 31456 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 12 21:08:59.116723 master-0 kubenswrapper[31456]: I0312 21:08:59.115550 31456 topology_manager.go:138] "Creating topology manager with none policy" Mar 12 21:08:59.116723 master-0 kubenswrapper[31456]: I0312 21:08:59.115559 31456 container_manager_linux.go:303] "Creating device plugin manager" Mar 12 21:08:59.116723 master-0 kubenswrapper[31456]: I0312 21:08:59.115567 31456 
manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 12 21:08:59.116723 master-0 kubenswrapper[31456]: I0312 21:08:59.115582 31456 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 12 21:08:59.116723 master-0 kubenswrapper[31456]: I0312 21:08:59.115613 31456 state_mem.go:36] "Initialized new in-memory state store" Mar 12 21:08:59.116723 master-0 kubenswrapper[31456]: I0312 21:08:59.115681 31456 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 12 21:08:59.116723 master-0 kubenswrapper[31456]: I0312 21:08:59.115731 31456 kubelet.go:418] "Attempting to sync node with API server" Mar 12 21:08:59.116723 master-0 kubenswrapper[31456]: I0312 21:08:59.115740 31456 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 21:08:59.116723 master-0 kubenswrapper[31456]: I0312 21:08:59.115752 31456 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 12 21:08:59.116723 master-0 kubenswrapper[31456]: I0312 21:08:59.115763 31456 kubelet.go:324] "Adding apiserver pod source" Mar 12 21:08:59.116723 master-0 kubenswrapper[31456]: I0312 21:08:59.115776 31456 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 21:08:59.116723 master-0 kubenswrapper[31456]: I0312 21:08:59.116680 31456 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 12 21:08:59.117425 master-0 kubenswrapper[31456]: I0312 21:08:59.116857 31456 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Mar 12 21:08:59.117425 master-0 kubenswrapper[31456]: I0312 21:08:59.117116 31456 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 12 21:08:59.117425 master-0 kubenswrapper[31456]: I0312 21:08:59.117232 31456 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 12 21:08:59.117425 master-0 kubenswrapper[31456]: I0312 21:08:59.117252 31456 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 12 21:08:59.117425 master-0 kubenswrapper[31456]: I0312 21:08:59.117260 31456 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 12 21:08:59.117425 master-0 kubenswrapper[31456]: I0312 21:08:59.117266 31456 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 12 21:08:59.117425 master-0 kubenswrapper[31456]: I0312 21:08:59.117272 31456 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 12 21:08:59.117425 master-0 kubenswrapper[31456]: I0312 21:08:59.117278 31456 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 12 21:08:59.117425 master-0 kubenswrapper[31456]: I0312 21:08:59.117285 31456 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 12 21:08:59.117425 master-0 kubenswrapper[31456]: I0312 21:08:59.117291 31456 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 12 21:08:59.117425 master-0 kubenswrapper[31456]: I0312 21:08:59.117299 31456 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 12 21:08:59.117425 master-0 kubenswrapper[31456]: I0312 21:08:59.117305 31456 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 12 21:08:59.117425 master-0 kubenswrapper[31456]: I0312 21:08:59.117315 31456 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 12 21:08:59.117425 master-0 kubenswrapper[31456]: I0312 21:08:59.117328 31456 
plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 12 21:08:59.117425 master-0 kubenswrapper[31456]: I0312 21:08:59.117352 31456 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 12 21:08:59.118478 master-0 kubenswrapper[31456]: I0312 21:08:59.117722 31456 server.go:1280] "Started kubelet" Mar 12 21:08:59.118478 master-0 kubenswrapper[31456]: I0312 21:08:59.118101 31456 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 21:08:59.118369 master-0 systemd[1]: Started Kubernetes Kubelet. Mar 12 21:08:59.119542 master-0 kubenswrapper[31456]: I0312 21:08:59.119063 31456 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 21:08:59.119542 master-0 kubenswrapper[31456]: I0312 21:08:59.119124 31456 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 12 21:08:59.119542 master-0 kubenswrapper[31456]: I0312 21:08:59.119438 31456 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 21:08:59.126725 master-0 kubenswrapper[31456]: I0312 21:08:59.123837 31456 server.go:449] "Adding debug handlers to kubelet server" Mar 12 21:08:59.134390 master-0 kubenswrapper[31456]: E0312 21:08:59.134241 31456 kubelet.go:1495] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Mar 12 21:08:59.139047 master-0 kubenswrapper[31456]: I0312 21:08:59.138979 31456 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 12 21:08:59.139120 master-0 kubenswrapper[31456]: I0312 21:08:59.139059 31456 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 12 21:08:59.139151 master-0 kubenswrapper[31456]: I0312 21:08:59.139089 31456 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-13 20:40:02 +0000 UTC, rotation deadline is 2026-03-13 13:44:45.00855387 +0000 UTC Mar 12 21:08:59.139151 master-0 kubenswrapper[31456]: I0312 21:08:59.139137 31456 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 16h35m45.869419512s for next certificate rotation Mar 12 21:08:59.139211 master-0 kubenswrapper[31456]: I0312 21:08:59.139149 31456 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 12 21:08:59.139211 master-0 kubenswrapper[31456]: I0312 21:08:59.139175 31456 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 12 21:08:59.139547 master-0 kubenswrapper[31456]: I0312 21:08:59.139358 31456 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Mar 12 21:08:59.139547 master-0 kubenswrapper[31456]: E0312 21:08:59.139203 31456 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 21:08:59.139779 master-0 kubenswrapper[31456]: I0312 21:08:59.139755 31456 factory.go:55] Registering systemd factory Mar 12 21:08:59.139779 master-0 kubenswrapper[31456]: I0312 21:08:59.139776 31456 factory.go:221] Registration of the systemd container factory successfully Mar 12 21:08:59.140787 master-0 kubenswrapper[31456]: I0312 21:08:59.140368 31456 factory.go:153] Registering CRI-O factory Mar 12 21:08:59.140787 master-0 kubenswrapper[31456]: I0312 
21:08:59.140415 31456 factory.go:221] Registration of the crio container factory successfully Mar 12 21:08:59.140787 master-0 kubenswrapper[31456]: I0312 21:08:59.140518 31456 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 12 21:08:59.140787 master-0 kubenswrapper[31456]: I0312 21:08:59.140681 31456 factory.go:103] Registering Raw factory Mar 12 21:08:59.140787 master-0 kubenswrapper[31456]: I0312 21:08:59.140700 31456 manager.go:1196] Started watching for new ooms in manager Mar 12 21:08:59.141857 master-0 kubenswrapper[31456]: I0312 21:08:59.141377 31456 manager.go:319] Starting recovery of all containers Mar 12 21:08:59.151398 master-0 kubenswrapper[31456]: I0312 21:08:59.151237 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed1c4da2-564b-4354-a4ec-27b801098aa5" volumeName="kubernetes.io/projected/ed1c4da2-564b-4354-a4ec-27b801098aa5-kube-api-access-2hvwg" seLinuxMountContext="" Mar 12 21:08:59.151398 master-0 kubenswrapper[31456]: I0312 21:08:59.151349 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f8467055-c9c9-4485-bb60-9a79e8b91268" volumeName="kubernetes.io/projected/f8467055-c9c9-4485-bb60-9a79e8b91268-kube-api-access-gp4mt" seLinuxMountContext="" Mar 12 21:08:59.151398 master-0 kubenswrapper[31456]: I0312 21:08:59.151362 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17d2bb40-74e2-4894-a884-7018952bdf71" volumeName="kubernetes.io/secret/17d2bb40-74e2-4894-a884-7018952bdf71-cluster-baremetal-operator-tls" seLinuxMountContext="" Mar 12 21:08:59.151398 master-0 kubenswrapper[31456]: I0312 21:08:59.151375 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="2b71f537-1cc2-4645-8e50-23941635457c" volumeName="kubernetes.io/configmap/2b71f537-1cc2-4645-8e50-23941635457c-trusted-ca" seLinuxMountContext="" Mar 12 21:08:59.151398 master-0 kubenswrapper[31456]: I0312 21:08:59.151385 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31747c5d-7e29-4a74-b8d5-3d8efa5e900b" volumeName="kubernetes.io/secret/31747c5d-7e29-4a74-b8d5-3d8efa5e900b-metrics-tls" seLinuxMountContext="" Mar 12 21:08:59.151398 master-0 kubenswrapper[31456]: I0312 21:08:59.151395 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b50a6106-1112-4a4b-b4ae-933879e12110" volumeName="kubernetes.io/secret/b50a6106-1112-4a4b-b4ae-933879e12110-serving-cert" seLinuxMountContext="" Mar 12 21:08:59.151398 master-0 kubenswrapper[31456]: I0312 21:08:59.151403 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d862a346-ec4d-46f6-a3e2-ea8759ea0111" volumeName="kubernetes.io/configmap/d862a346-ec4d-46f6-a3e2-ea8759ea0111-ovnkube-config" seLinuxMountContext="" Mar 12 21:08:59.151668 master-0 kubenswrapper[31456]: I0312 21:08:59.151415 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15ebfbd8-0782-431a-88a3-83af328498d2" volumeName="kubernetes.io/secret/15ebfbd8-0782-431a-88a3-83af328498d2-serving-cert" seLinuxMountContext="" Mar 12 21:08:59.151668 master-0 kubenswrapper[31456]: I0312 21:08:59.151428 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4c589179-0df4-4fe8-bfdd-965c3e7652c5" volumeName="kubernetes.io/projected/4c589179-0df4-4fe8-bfdd-965c3e7652c5-kube-api-access-pbqfz" seLinuxMountContext="" Mar 12 21:08:59.151668 master-0 kubenswrapper[31456]: I0312 21:08:59.151438 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="70e54b24-bf9d-42a8-b012-c7b073c6f6a6" volumeName="kubernetes.io/configmap/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-cni-binary-copy" seLinuxMountContext="" Mar 12 21:08:59.151668 master-0 kubenswrapper[31456]: I0312 21:08:59.151447 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="980191fe-c62c-4b9e-879c-38fa8ce0a58b" volumeName="kubernetes.io/secret/980191fe-c62c-4b9e-879c-38fa8ce0a58b-serving-cert" seLinuxMountContext="" Mar 12 21:08:59.151668 master-0 kubenswrapper[31456]: I0312 21:08:59.151458 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="226cb3a1-984f-4410-96e6-c007131dc074" volumeName="kubernetes.io/projected/226cb3a1-984f-4410-96e6-c007131dc074-kube-api-access-b9z6l" seLinuxMountContext="" Mar 12 21:08:59.151668 master-0 kubenswrapper[31456]: I0312 21:08:59.151467 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="67e68ff0-f54d-4973-bbe7-ed43ce542bc0" volumeName="kubernetes.io/configmap/67e68ff0-f54d-4973-bbe7-ed43ce542bc0-config" seLinuxMountContext="" Mar 12 21:08:59.151668 master-0 kubenswrapper[31456]: I0312 21:08:59.151479 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a3828a1d-8180-4c7b-b423-4488f7fc0b76" volumeName="kubernetes.io/secret/a3828a1d-8180-4c7b-b423-4488f7fc0b76-default-certificate" seLinuxMountContext="" Mar 12 21:08:59.151668 master-0 kubenswrapper[31456]: I0312 21:08:59.151490 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d6eace9f-a52d-4570-a932-959538e1f2bc" volumeName="kubernetes.io/empty-dir/d6eace9f-a52d-4570-a932-959538e1f2bc-catalog-content" seLinuxMountContext="" Mar 12 21:08:59.151668 master-0 kubenswrapper[31456]: I0312 21:08:59.151498 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="ed1c4da2-564b-4354-a4ec-27b801098aa5" volumeName="kubernetes.io/secret/ed1c4da2-564b-4354-a4ec-27b801098aa5-openshift-state-metrics-tls" seLinuxMountContext="" Mar 12 21:08:59.151668 master-0 kubenswrapper[31456]: I0312 21:08:59.151510 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31747c5d-7e29-4a74-b8d5-3d8efa5e900b" volumeName="kubernetes.io/configmap/31747c5d-7e29-4a74-b8d5-3d8efa5e900b-config-volume" seLinuxMountContext="" Mar 12 21:08:59.151668 master-0 kubenswrapper[31456]: I0312 21:08:59.151519 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="33beea0b-f77b-4388-a9c8-5710f084f961" volumeName="kubernetes.io/projected/33beea0b-f77b-4388-a9c8-5710f084f961-kube-api-access-clmjl" seLinuxMountContext="" Mar 12 21:08:59.151668 master-0 kubenswrapper[31456]: I0312 21:08:59.151530 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567a9a33-1a82-4c48-b541-7e0eaae11f57" volumeName="kubernetes.io/empty-dir/567a9a33-1a82-4c48-b541-7e0eaae11f57-catalog-content" seLinuxMountContext="" Mar 12 21:08:59.151668 master-0 kubenswrapper[31456]: I0312 21:08:59.151542 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96bd86df-2101-47f5-844b-1332261c66f1" volumeName="kubernetes.io/secret/96bd86df-2101-47f5-844b-1332261c66f1-serving-cert" seLinuxMountContext="" Mar 12 21:08:59.151668 master-0 kubenswrapper[31456]: I0312 21:08:59.151551 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a3bebf49-1d92-4353-b84c-91ed86b7bb94" volumeName="kubernetes.io/secret/a3bebf49-1d92-4353-b84c-91ed86b7bb94-serving-cert" seLinuxMountContext="" Mar 12 21:08:59.151668 master-0 kubenswrapper[31456]: I0312 21:08:59.151561 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="98d99166-c42a-4169-87e8-4209570aec50" volumeName="kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert" seLinuxMountContext="" Mar 12 21:08:59.151668 master-0 kubenswrapper[31456]: I0312 21:08:59.151571 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="07542516-49c8-4e20-9b97-798fbff850a5" volumeName="kubernetes.io/configmap/07542516-49c8-4e20-9b97-798fbff850a5-config" seLinuxMountContext="" Mar 12 21:08:59.151668 master-0 kubenswrapper[31456]: I0312 21:08:59.151580 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17d2bb40-74e2-4894-a884-7018952bdf71" volumeName="kubernetes.io/configmap/17d2bb40-74e2-4894-a884-7018952bdf71-config" seLinuxMountContext="" Mar 12 21:08:59.151668 master-0 kubenswrapper[31456]: I0312 21:08:59.151590 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="400a13b5-c489-4beb-af33-94e635b86148" volumeName="kubernetes.io/secret/400a13b5-c489-4beb-af33-94e635b86148-machine-approver-tls" seLinuxMountContext="" Mar 12 21:08:59.151668 master-0 kubenswrapper[31456]: I0312 21:08:59.151601 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4ebc9ee1-3913-4112-bb3f-c79f2c08032b" volumeName="kubernetes.io/secret/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-kube-state-metrics-tls" seLinuxMountContext="" Mar 12 21:08:59.151668 master-0 kubenswrapper[31456]: I0312 21:08:59.151612 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52839a08-0871-44d3-9d22-b2f6b4383b99" volumeName="kubernetes.io/empty-dir/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-tuned" seLinuxMountContext="" Mar 12 21:08:59.151668 master-0 kubenswrapper[31456]: I0312 21:08:59.151624 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7623a5c6-47a9-4b75-bda8-c0a2d7c67272" volumeName="kubernetes.io/projected/7623a5c6-47a9-4b75-bda8-c0a2d7c67272-kube-api-access-q78vj" seLinuxMountContext="" Mar 12 21:08:59.151668 master-0 kubenswrapper[31456]: I0312 21:08:59.151634 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d" volumeName="kubernetes.io/projected/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d-kube-api-access-577p4" seLinuxMountContext="" Mar 12 21:08:59.151668 master-0 kubenswrapper[31456]: I0312 21:08:59.151662 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a2545a80-0f00-4b19-ab3b-a9aa4bff98e8" volumeName="kubernetes.io/configmap/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-whereabouts-configmap" seLinuxMountContext="" Mar 12 21:08:59.151668 master-0 kubenswrapper[31456]: I0312 21:08:59.151672 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="54184647-6e9a-43f7-90b1-5d8815f8b1ab" volumeName="kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.151682 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a539e1c7-3799-4d43-8f2f-d5e5c0ffd918" volumeName="kubernetes.io/secret/a539e1c7-3799-4d43-8f2f-d5e5c0ffd918-cert" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.151703 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5d1e064-c12b-4c1d-b499-4e301ca8a8dc" volumeName="kubernetes.io/projected/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc-kube-api-access-n555w" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.151728 31456 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="a5d1e064-c12b-4c1d-b499-4e301ca8a8dc" volumeName="kubernetes.io/secret/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc-serving-cert" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.151739 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ea339fe1-c013-4c4b-90c9-aaaa7eb40d99" volumeName="kubernetes.io/configmap/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99-metrics-client-ca" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.151750 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70baf3e2-83bb-4156-afb3-30ca8e3d1d9d" volumeName="kubernetes.io/configmap/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-etcd-serving-ca" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.151759 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="784599a3-a2ac-46ac-a4b7-9439704646cc" volumeName="kubernetes.io/configmap/784599a3-a2ac-46ac-a4b7-9439704646cc-config" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.151781 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="90f0e4da-71d4-4c4e-a2fc-9ef588daaf72" volumeName="kubernetes.io/projected/90f0e4da-71d4-4c4e-a2fc-9ef588daaf72-kube-api-access-2rfn6" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.151792 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="36bd483b-292e-4e82-99d6-daa612cd385a" volumeName="kubernetes.io/configmap/36bd483b-292e-4e82-99d6-daa612cd385a-etcd-serving-ca" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.151803 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="508cb83e-6f25-4235-8c56-b25b762ebcad" volumeName="kubernetes.io/projected/508cb83e-6f25-4235-8c56-b25b762ebcad-kube-api-access-s4jzt" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.151830 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70baf3e2-83bb-4156-afb3-30ca8e3d1d9d" volumeName="kubernetes.io/configmap/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-trusted-ca-bundle" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.151840 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e03d34d0-f7c1-4dcf-8b84-89ad647cc10f" volumeName="kubernetes.io/projected/e03d34d0-f7c1-4dcf-8b84-89ad647cc10f-kube-api-access-8ddw4" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.151853 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="07330030-487d-4fa6-b5c3-67607355bbba" volumeName="kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.151863 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5d6705e-e564-4774-94b4-ef11956c67b2" volumeName="kubernetes.io/projected/a5d6705e-e564-4774-94b4-ef11956c67b2-kube-api-access-dkvxh" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.151872 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ea339fe1-c013-4c4b-90c9-aaaa7eb40d99" volumeName="kubernetes.io/projected/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99-kube-api-access-4l2sm" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.151883 31456 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="4a67ecf3-823d-4948-a5cb-8bd1eb9f259c" volumeName="kubernetes.io/configmap/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c-config" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.151894 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="508cb83e-6f25-4235-8c56-b25b762ebcad" volumeName="kubernetes.io/configmap/508cb83e-6f25-4235-8c56-b25b762ebcad-images" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.151904 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a3bebf49-1d92-4353-b84c-91ed86b7bb94" volumeName="kubernetes.io/configmap/a3bebf49-1d92-4353-b84c-91ed86b7bb94-service-ca-bundle" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.151914 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5d6705e-e564-4774-94b4-ef11956c67b2" volumeName="kubernetes.io/secret/a5d6705e-e564-4774-94b4-ef11956c67b2-node-bootstrap-token" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.151924 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3daeefa-7842-464c-a6c9-01b44ebea477" volumeName="kubernetes.io/configmap/c3daeefa-7842-464c-a6c9-01b44ebea477-ovnkube-config" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.151935 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ad63582-bd60-41a1-9622-ee73ccf8a5e8" volumeName="kubernetes.io/projected/5ad63582-bd60-41a1-9622-ee73ccf8a5e8-kube-api-access-csxwl" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.151944 31456 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="c3daeefa-7842-464c-a6c9-01b44ebea477" volumeName="kubernetes.io/configmap/c3daeefa-7842-464c-a6c9-01b44ebea477-env-overrides" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.151974 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cf33c432-db42-4c6d-8ee4-f089e5bf8203" volumeName="kubernetes.io/projected/cf33c432-db42-4c6d-8ee4-f089e5bf8203-kube-api-access-x8hp5" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.151986 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6" volumeName="kubernetes.io/secret/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6-metrics-tls" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.151998 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="33beea0b-f77b-4388-a9c8-5710f084f961" volumeName="kubernetes.io/configmap/33beea0b-f77b-4388-a9c8-5710f084f961-metrics-server-audit-profiles" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.152023 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5471994f-769e-4124-b7d0-01f5358fc18f" volumeName="kubernetes.io/configmap/5471994f-769e-4124-b7d0-01f5358fc18f-etcd-ca" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.152036 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9" volumeName="kubernetes.io/projected/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-kube-api-access-2lltk" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.152061 31456 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="a3828a1d-8180-4c7b-b423-4488f7fc0b76" volumeName="kubernetes.io/secret/a3828a1d-8180-4c7b-b423-4488f7fc0b76-metrics-certs" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.152070 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9152bd6-f203-469b-97fa-db274e43b40c" volumeName="kubernetes.io/secret/d9152bd6-f203-469b-97fa-db274e43b40c-proxy-tls" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.152082 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="426efd5c-69e1-43e5-835a-6e1c4ef85720" volumeName="kubernetes.io/configmap/426efd5c-69e1-43e5-835a-6e1c4ef85720-env-overrides" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.152091 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="90f0e4da-71d4-4c4e-a2fc-9ef588daaf72" volumeName="kubernetes.io/secret/90f0e4da-71d4-4c4e-a2fc-9ef588daaf72-proxy-tls" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.152101 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="067fdca7-c61d-470c-8421-73e0b62df3e4" volumeName="kubernetes.io/secret/067fdca7-c61d-470c-8421-73e0b62df3e4-apiservice-cert" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.152110 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="617f0f9c-50d5-4214-b30f-5110fd4399ec" volumeName="kubernetes.io/projected/617f0f9c-50d5-4214-b30f-5110fd4399ec-kube-api-access-f2r2r" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.152119 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="70baf3e2-83bb-4156-afb3-30ca8e3d1d9d" volumeName="kubernetes.io/configmap/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-config" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.152130 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7667a111-e744-47b2-9603-3864347dc738" volumeName="kubernetes.io/secret/7667a111-e744-47b2-9603-3864347dc738-node-exporter-kube-rbac-proxy-config" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.152139 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a3828a1d-8180-4c7b-b423-4488f7fc0b76" volumeName="kubernetes.io/configmap/a3828a1d-8180-4c7b-b423-4488f7fc0b76-service-ca-bundle" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.152147 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d862a346-ec4d-46f6-a3e2-ea8759ea0111" volumeName="kubernetes.io/projected/d862a346-ec4d-46f6-a3e2-ea8759ea0111-kube-api-access-jx64q" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.152157 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15ebfbd8-0782-431a-88a3-83af328498d2" volumeName="kubernetes.io/configmap/15ebfbd8-0782-431a-88a3-83af328498d2-config" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.152167 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="33beea0b-f77b-4388-a9c8-5710f084f961" volumeName="kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-client-ca-bundle" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.152177 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="67e68ff0-f54d-4973-bbe7-ed43ce542bc0" volumeName="kubernetes.io/projected/67e68ff0-f54d-4973-bbe7-ed43ce542bc0-kube-api-access-tpf99" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.152187 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="784599a3-a2ac-46ac-a4b7-9439704646cc" volumeName="kubernetes.io/secret/784599a3-a2ac-46ac-a4b7-9439704646cc-serving-cert" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.152197 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a2545a80-0f00-4b19-ab3b-a9aa4bff98e8" volumeName="kubernetes.io/projected/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-kube-api-access-7bk7q" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.152208 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8b96dd10-18a0-49f8-b488-63fc2b23da39" volumeName="kubernetes.io/projected/8b96dd10-18a0-49f8-b488-63fc2b23da39-ca-certs" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.152218 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96bd86df-2101-47f5-844b-1332261c66f1" volumeName="kubernetes.io/configmap/96bd86df-2101-47f5-844b-1332261c66f1-config" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.152227 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9" volumeName="kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.152235 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="70baf3e2-83bb-4156-afb3-30ca8e3d1d9d" volumeName="kubernetes.io/secret/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-encryption-config" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.152244 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d6eace9f-a52d-4570-a932-959538e1f2bc" volumeName="kubernetes.io/empty-dir/d6eace9f-a52d-4570-a932-959538e1f2bc-utilities" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.152254 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed1c4da2-564b-4354-a4ec-27b801098aa5" volumeName="kubernetes.io/secret/ed1c4da2-564b-4354-a4ec-27b801098aa5-openshift-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.152264 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="36bd483b-292e-4e82-99d6-daa612cd385a" volumeName="kubernetes.io/configmap/36bd483b-292e-4e82-99d6-daa612cd385a-audit-policies" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.152273 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5471994f-769e-4124-b7d0-01f5358fc18f" volumeName="kubernetes.io/secret/5471994f-769e-4124-b7d0-01f5358fc18f-serving-cert" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.152281 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7623a5c6-47a9-4b75-bda8-c0a2d7c67272" volumeName="kubernetes.io/configmap/7623a5c6-47a9-4b75-bda8-c0a2d7c67272-config" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.152291 31456 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="b7229c42-b6bc-4ea9-946c-71a4117f53e9" volumeName="kubernetes.io/projected/b7229c42-b6bc-4ea9-946c-71a4117f53e9-kube-api-access-xx5m2" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.152300 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c8660437-633f-4132-8a61-fe998abb493e" volumeName="kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.152310 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cf33c432-db42-4c6d-8ee4-f089e5bf8203" volumeName="kubernetes.io/empty-dir/cf33c432-db42-4c6d-8ee4-f089e5bf8203-cache" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.152322 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52839a08-0871-44d3-9d22-b2f6b4383b99" volumeName="kubernetes.io/projected/52839a08-0871-44d3-9d22-b2f6b4383b99-kube-api-access-hlt7h" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.152331 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7667a111-e744-47b2-9603-3864347dc738" volumeName="kubernetes.io/projected/7667a111-e744-47b2-9603-3864347dc738-kube-api-access-mp84p" seLinuxMountContext="" Mar 12 21:08:59.152284 master-0 kubenswrapper[31456]: I0312 21:08:59.152341 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7f3afe47-c537-420c-b5be-1cad612e119d" volumeName="kubernetes.io/projected/7f3afe47-c537-420c-b5be-1cad612e119d-kube-api-access-8745n" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152350 31456 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="980191fe-c62c-4b9e-879c-38fa8ce0a58b" volumeName="kubernetes.io/empty-dir/980191fe-c62c-4b9e-879c-38fa8ce0a58b-available-featuregates" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152361 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="980191fe-c62c-4b9e-879c-38fa8ce0a58b" volumeName="kubernetes.io/projected/980191fe-c62c-4b9e-879c-38fa8ce0a58b-kube-api-access-2wt5q" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152372 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="067fdca7-c61d-470c-8421-73e0b62df3e4" volumeName="kubernetes.io/empty-dir/067fdca7-c61d-470c-8421-73e0b62df3e4-tmpfs" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152382 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="33beea0b-f77b-4388-a9c8-5710f084f961" volumeName="kubernetes.io/configmap/33beea0b-f77b-4388-a9c8-5710f084f961-configmap-kubelet-serving-ca-bundle" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152393 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="400a13b5-c489-4beb-af33-94e635b86148" volumeName="kubernetes.io/projected/400a13b5-c489-4beb-af33-94e635b86148-kube-api-access-vt627" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152403 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="855747e5-d9b4-4eef-8bc4-425d6a8e95c7" volumeName="kubernetes.io/projected/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-kube-api-access-6j7lq" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152412 31456 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="f8467055-c9c9-4485-bb60-9a79e8b91268" volumeName="kubernetes.io/configmap/f8467055-c9c9-4485-bb60-9a79e8b91268-images" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152422 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f8467055-c9c9-4485-bb60-9a79e8b91268" volumeName="kubernetes.io/configmap/f8467055-c9c9-4485-bb60-9a79e8b91268-auth-proxy-config" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152434 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="02649264-040a-41a6-9a41-8bf6416c68ff" volumeName="kubernetes.io/projected/02649264-040a-41a6-9a41-8bf6416c68ff-kube-api-access-k5v9f" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152444 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="426efd5c-69e1-43e5-835a-6e1c4ef85720" volumeName="kubernetes.io/projected/426efd5c-69e1-43e5-835a-6e1c4ef85720-kube-api-access-8rjm8" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152454 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4ebc9ee1-3913-4112-bb3f-c79f2c08032b" volumeName="kubernetes.io/empty-dir/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-volume-directive-shadow" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152464 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7667a111-e744-47b2-9603-3864347dc738" volumeName="kubernetes.io/empty-dir/7667a111-e744-47b2-9603-3864347dc738-node-exporter-textfile" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152474 31456 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a2545a80-0f00-4b19-ab3b-a9aa4bff98e8" volumeName="kubernetes.io/configmap/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-cni-binary-copy" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152484 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="067fdca7-c61d-470c-8421-73e0b62df3e4" volumeName="kubernetes.io/projected/067fdca7-c61d-470c-8421-73e0b62df3e4-kube-api-access-tm7d5" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152495 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="135ec6f3-fbc0-4840-a4b1-c1124c705161" volumeName="kubernetes.io/projected/135ec6f3-fbc0-4840-a4b1-c1124c705161-kube-api-access-wsprq" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152505 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="83368183-0368-44b1-9387-eed32b211988" volumeName="kubernetes.io/secret/83368183-0368-44b1-9387-eed32b211988-serving-cert" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152515 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d862a346-ec4d-46f6-a3e2-ea8759ea0111" volumeName="kubernetes.io/secret/d862a346-ec4d-46f6-a3e2-ea8759ea0111-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152529 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="508cb83e-6f25-4235-8c56-b25b762ebcad" volumeName="kubernetes.io/configmap/508cb83e-6f25-4235-8c56-b25b762ebcad-auth-proxy-config" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 
21:08:59.152540 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6" volumeName="kubernetes.io/projected/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6-kube-api-access-2kng9" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152552 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b71f537-1cc2-4645-8e50-23941635457c" volumeName="kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152563 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="36bd483b-292e-4e82-99d6-daa612cd385a" volumeName="kubernetes.io/secret/36bd483b-292e-4e82-99d6-daa612cd385a-encryption-config" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152574 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a67ecf3-823d-4948-a5cb-8bd1eb9f259c" volumeName="kubernetes.io/projected/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c-kube-api-access" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152584 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d850d441-7505-4e81-b4cf-6e7a9911ae35" volumeName="kubernetes.io/configmap/d850d441-7505-4e81-b4cf-6e7a9911ae35-client-ca" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152595 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed1c4da2-564b-4354-a4ec-27b801098aa5" volumeName="kubernetes.io/configmap/ed1c4da2-564b-4354-a4ec-27b801098aa5-metrics-client-ca" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 
21:08:59.152617 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2604b035-853c-42b7-a562-07d46178868a" volumeName="kubernetes.io/projected/2604b035-853c-42b7-a562-07d46178868a-kube-api-access-clp9l" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152626 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b50a6106-1112-4a4b-b4ae-933879e12110" volumeName="kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-config" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152638 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b7229c42-b6bc-4ea9-946c-71a4117f53e9" volumeName="kubernetes.io/empty-dir/b7229c42-b6bc-4ea9-946c-71a4117f53e9-utilities" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152648 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9152bd6-f203-469b-97fa-db274e43b40c" volumeName="kubernetes.io/configmap/d9152bd6-f203-469b-97fa-db274e43b40c-mcd-auth-proxy-config" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152657 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ea339fe1-c013-4c4b-90c9-aaaa7eb40d99" volumeName="kubernetes.io/secret/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99-prometheus-operator-tls" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152667 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="617f0f9c-50d5-4214-b30f-5110fd4399ec" volumeName="kubernetes.io/configmap/617f0f9c-50d5-4214-b30f-5110fd4399ec-iptables-alerter-script" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 
kubenswrapper[31456]: I0312 21:08:59.152678 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70baf3e2-83bb-4156-afb3-30ca8e3d1d9d" volumeName="kubernetes.io/secret/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-serving-cert" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152689 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a3828a1d-8180-4c7b-b423-4488f7fc0b76" volumeName="kubernetes.io/secret/a3828a1d-8180-4c7b-b423-4488f7fc0b76-stats-auth" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152699 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3daeefa-7842-464c-a6c9-01b44ebea477" volumeName="kubernetes.io/configmap/c3daeefa-7842-464c-a6c9-01b44ebea477-ovnkube-script-lib" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152711 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d850d441-7505-4e81-b4cf-6e7a9911ae35" volumeName="kubernetes.io/configmap/d850d441-7505-4e81-b4cf-6e7a9911ae35-config" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152721 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="07542516-49c8-4e20-9b97-798fbff850a5" volumeName="kubernetes.io/secret/07542516-49c8-4e20-9b97-798fbff850a5-serving-cert" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152731 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="135ec6f3-fbc0-4840-a4b1-c1124c705161" volumeName="kubernetes.io/configmap/135ec6f3-fbc0-4840-a4b1-c1124c705161-signing-cabundle" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: 
I0312 21:08:59.152740 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="98d99166-c42a-4169-87e8-4209570aec50" volumeName="kubernetes.io/projected/98d99166-c42a-4169-87e8-4209570aec50-kube-api-access-258hz" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152750 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="36bd483b-292e-4e82-99d6-daa612cd385a" volumeName="kubernetes.io/secret/36bd483b-292e-4e82-99d6-daa612cd385a-serving-cert" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152760 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cf33c432-db42-4c6d-8ee4-f089e5bf8203" volumeName="kubernetes.io/secret/cf33c432-db42-4c6d-8ee4-f089e5bf8203-catalogserver-certs" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152770 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="900228dd-2d21-4759-87da-b027b0134ad8" volumeName="kubernetes.io/configmap/900228dd-2d21-4759-87da-b027b0134ad8-trusted-ca" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152798 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d" volumeName="kubernetes.io/secret/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d-serving-cert" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152818 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17d2bb40-74e2-4894-a884-7018952bdf71" volumeName="kubernetes.io/secret/17d2bb40-74e2-4894-a884-7018952bdf71-cert" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152828 
31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="900228dd-2d21-4759-87da-b027b0134ad8" volumeName="kubernetes.io/projected/900228dd-2d21-4759-87da-b027b0134ad8-kube-api-access-rvkp7" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152840 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b50a6106-1112-4a4b-b4ae-933879e12110" volumeName="kubernetes.io/projected/b50a6106-1112-4a4b-b4ae-933879e12110-kube-api-access-bcjsq" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152850 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b71376ea-e248-48fc-b2c4-1de7236ddd31" volumeName="kubernetes.io/projected/b71376ea-e248-48fc-b2c4-1de7236ddd31-kube-api-access-nlrzs" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152860 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d850d441-7505-4e81-b4cf-6e7a9911ae35" volumeName="kubernetes.io/projected/d850d441-7505-4e81-b4cf-6e7a9911ae35-kube-api-access-f2mk7" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152893 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e624e623-6d59-444d-b548-165fa5fd2581" volumeName="kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152907 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="83368183-0368-44b1-9387-eed32b211988" volumeName="kubernetes.io/projected/83368183-0368-44b1-9387-eed32b211988-kube-api-access" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 
kubenswrapper[31456]: I0312 21:08:59.152916 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9" volumeName="kubernetes.io/configmap/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-trusted-ca" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152926 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b50a6106-1112-4a4b-b4ae-933879e12110" volumeName="kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-proxy-ca-bundles" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152936 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ea339fe1-c013-4c4b-90c9-aaaa7eb40d99" volumeName="kubernetes.io/secret/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99-prometheus-operator-kube-rbac-proxy-config" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152947 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="32050f14-1939-41bf-a824-22016b90c189" volumeName="kubernetes.io/projected/32050f14-1939-41bf-a824-22016b90c189-kube-api-access-pbnbs" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152956 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="67e68ff0-f54d-4973-bbe7-ed43ce542bc0" volumeName="kubernetes.io/secret/67e68ff0-f54d-4973-bbe7-ed43ce542bc0-machine-api-operator-tls" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152968 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b8aa8296-ed9b-4b37-8ab4-791b1342140f" volumeName="kubernetes.io/projected/b8aa8296-ed9b-4b37-8ab4-791b1342140f-kube-api-access-nbcts" 
seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152977 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cf33c432-db42-4c6d-8ee4-f089e5bf8203" volumeName="kubernetes.io/projected/cf33c432-db42-4c6d-8ee4-f089e5bf8203-ca-certs" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152987 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da40e787-dd75-4f4f-b09e-a8dab590f260" volumeName="kubernetes.io/projected/da40e787-dd75-4f4f-b09e-a8dab590f260-kube-api-access-xg2ph" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.152998 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="33beea0b-f77b-4388-a9c8-5710f084f961" volumeName="kubernetes.io/empty-dir/33beea0b-f77b-4388-a9c8-5710f084f961-audit-log" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153008 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4ebc9ee1-3913-4112-bb3f-c79f2c08032b" volumeName="kubernetes.io/configmap/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-metrics-client-ca" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153020 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="54184647-6e9a-43f7-90b1-5d8815f8b1ab" volumeName="kubernetes.io/projected/54184647-6e9a-43f7-90b1-5d8815f8b1ab-kube-api-access-kzwrw" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153034 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="02649264-040a-41a6-9a41-8bf6416c68ff" volumeName="kubernetes.io/configmap/02649264-040a-41a6-9a41-8bf6416c68ff-telemetry-config" 
seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153047 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="05fd1378-3935-4caf-96c5-17cf7e29417f" volumeName="kubernetes.io/configmap/05fd1378-3935-4caf-96c5-17cf7e29417f-cco-trusted-ca" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153059 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70baf3e2-83bb-4156-afb3-30ca8e3d1d9d" volumeName="kubernetes.io/configmap/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-image-import-ca" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153070 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70e54b24-bf9d-42a8-b012-c7b073c6f6a6" volumeName="kubernetes.io/projected/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-kube-api-access-mfsvw" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153081 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3daeefa-7842-464c-a6c9-01b44ebea477" volumeName="kubernetes.io/projected/c3daeefa-7842-464c-a6c9-01b44ebea477-kube-api-access-jrk7w" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153091 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5d6705e-e564-4774-94b4-ef11956c67b2" volumeName="kubernetes.io/secret/a5d6705e-e564-4774-94b4-ef11956c67b2-certs" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153100 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b71376ea-e248-48fc-b2c4-1de7236ddd31" volumeName="kubernetes.io/configmap/b71376ea-e248-48fc-b2c4-1de7236ddd31-auth-proxy-config" 
seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153111 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d4a162d4-8086-4bcf-854d-7e6cd37fd4c7" volumeName="kubernetes.io/projected/d4a162d4-8086-4bcf-854d-7e6cd37fd4c7-kube-api-access-mfspc" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153123 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4ebc9ee1-3913-4112-bb3f-c79f2c08032b" volumeName="kubernetes.io/configmap/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-kube-state-metrics-custom-resource-state-configmap" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153133 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70e54b24-bf9d-42a8-b012-c7b073c6f6a6" volumeName="kubernetes.io/configmap/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-multus-daemon-config" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153152 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7667a111-e744-47b2-9603-3864347dc738" volumeName="kubernetes.io/configmap/7667a111-e744-47b2-9603-3864347dc738-metrics-client-ca" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153162 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8b96dd10-18a0-49f8-b488-63fc2b23da39" volumeName="kubernetes.io/empty-dir/8b96dd10-18a0-49f8-b488-63fc2b23da39-cache" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153174 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a539e1c7-3799-4d43-8f2f-d5e5c0ffd918" 
volumeName="kubernetes.io/projected/a539e1c7-3799-4d43-8f2f-d5e5c0ffd918-kube-api-access-xth7s" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153186 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e624e623-6d59-444d-b548-165fa5fd2581" volumeName="kubernetes.io/configmap/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-trusted-ca" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153197 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="067fdca7-c61d-470c-8421-73e0b62df3e4" volumeName="kubernetes.io/secret/067fdca7-c61d-470c-8421-73e0b62df3e4-webhook-cert" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153206 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b71f537-1cc2-4645-8e50-23941635457c" volumeName="kubernetes.io/projected/2b71f537-1cc2-4645-8e50-23941635457c-bound-sa-token" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153217 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a3bebf49-1d92-4353-b84c-91ed86b7bb94" volumeName="kubernetes.io/projected/a3bebf49-1d92-4353-b84c-91ed86b7bb94-kube-api-access-2w68c" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153226 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b8aa8296-ed9b-4b37-8ab4-791b1342140f" volumeName="kubernetes.io/secret/b8aa8296-ed9b-4b37-8ab4-791b1342140f-webhook-certs" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153235 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e03d34d0-f7c1-4dcf-8b84-89ad647cc10f" volumeName="kubernetes.io/secret/e03d34d0-f7c1-4dcf-8b84-89ad647cc10f-control-plane-machine-set-operator-tls" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153246 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31747c5d-7e29-4a74-b8d5-3d8efa5e900b" volumeName="kubernetes.io/projected/31747c5d-7e29-4a74-b8d5-3d8efa5e900b-kube-api-access-l2bmh" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153257 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="05fd1378-3935-4caf-96c5-17cf7e29417f" volumeName="kubernetes.io/projected/05fd1378-3935-4caf-96c5-17cf7e29417f-kube-api-access-8xxkr" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153267 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15ebfbd8-0782-431a-88a3-83af328498d2" volumeName="kubernetes.io/projected/15ebfbd8-0782-431a-88a3-83af328498d2-kube-api-access-mbbc5" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153277 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7f3afe47-c537-420c-b5be-1cad612e119d" volumeName="kubernetes.io/secret/7f3afe47-c537-420c-b5be-1cad612e119d-cluster-storage-operator-serving-cert" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153287 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9152bd6-f203-469b-97fa-db274e43b40c" volumeName="kubernetes.io/projected/d9152bd6-f203-469b-97fa-db274e43b40c-kube-api-access-q9txs" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153303 31456 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="4c589179-0df4-4fe8-bfdd-965c3e7652c5" volumeName="kubernetes.io/empty-dir/4c589179-0df4-4fe8-bfdd-965c3e7652c5-catalog-content" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153318 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a3828a1d-8180-4c7b-b423-4488f7fc0b76" volumeName="kubernetes.io/projected/a3828a1d-8180-4c7b-b423-4488f7fc0b76-kube-api-access-lf28c" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153328 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="36bd483b-292e-4e82-99d6-daa612cd385a" volumeName="kubernetes.io/projected/36bd483b-292e-4e82-99d6-daa612cd385a-kube-api-access-fmcxd" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153339 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="784599a3-a2ac-46ac-a4b7-9439704646cc" volumeName="kubernetes.io/projected/784599a3-a2ac-46ac-a4b7-9439704646cc-kube-api-access" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153348 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567a9a33-1a82-4c48-b541-7e0eaae11f57" volumeName="kubernetes.io/empty-dir/567a9a33-1a82-4c48-b541-7e0eaae11f57-utilities" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153360 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3daeefa-7842-464c-a6c9-01b44ebea477" volumeName="kubernetes.io/secret/c3daeefa-7842-464c-a6c9-01b44ebea477-ovn-node-metrics-cert" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153370 31456 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="4ebc9ee1-3913-4112-bb3f-c79f2c08032b" volumeName="kubernetes.io/projected/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-kube-api-access-7gg7v" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153380 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4ebc9ee1-3913-4112-bb3f-c79f2c08032b" volumeName="kubernetes.io/secret/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-kube-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153389 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5471994f-769e-4124-b7d0-01f5358fc18f" volumeName="kubernetes.io/configmap/5471994f-769e-4124-b7d0-01f5358fc18f-config" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153400 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b7229c42-b6bc-4ea9-946c-71a4117f53e9" volumeName="kubernetes.io/empty-dir/b7229c42-b6bc-4ea9-946c-71a4117f53e9-catalog-content" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153410 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d6eace9f-a52d-4570-a932-959538e1f2bc" volumeName="kubernetes.io/projected/d6eace9f-a52d-4570-a932-959538e1f2bc-kube-api-access-8l8qp" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153420 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b71f537-1cc2-4645-8e50-23941635457c" volumeName="kubernetes.io/projected/2b71f537-1cc2-4645-8e50-23941635457c-kube-api-access-8vvf6" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 
21:08:59.153432 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="33beea0b-f77b-4388-a9c8-5710f084f961" volumeName="kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-secret-metrics-client-certs" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153442 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="426efd5c-69e1-43e5-835a-6e1c4ef85720" volumeName="kubernetes.io/secret/426efd5c-69e1-43e5-835a-6e1c4ef85720-webhook-cert" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153458 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="90f0e4da-71d4-4c4e-a2fc-9ef588daaf72" volumeName="kubernetes.io/configmap/90f0e4da-71d4-4c4e-a2fc-9ef588daaf72-mcc-auth-proxy-config" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153468 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7623a5c6-47a9-4b75-bda8-c0a2d7c67272" volumeName="kubernetes.io/secret/7623a5c6-47a9-4b75-bda8-c0a2d7c67272-serving-cert" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153477 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5471994f-769e-4124-b7d0-01f5358fc18f" volumeName="kubernetes.io/projected/5471994f-769e-4124-b7d0-01f5358fc18f-kube-api-access-f7rrv" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153486 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70baf3e2-83bb-4156-afb3-30ca8e3d1d9d" volumeName="kubernetes.io/secret/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-etcd-client" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 
21:08:59.153496 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96bd86df-2101-47f5-844b-1332261c66f1" volumeName="kubernetes.io/projected/96bd86df-2101-47f5-844b-1332261c66f1-kube-api-access" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153505 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d" volumeName="kubernetes.io/configmap/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d-config" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153515 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc7b96ab-01af-442a-8eda-fc59e665a367" volumeName="kubernetes.io/projected/cc7b96ab-01af-442a-8eda-fc59e665a367-kube-api-access-vwqbt" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153532 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="02649264-040a-41a6-9a41-8bf6416c68ff" volumeName="kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153542 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="33beea0b-f77b-4388-a9c8-5710f084f961" volumeName="kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-secret-metrics-server-tls" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153554 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="400a13b5-c489-4beb-af33-94e635b86148" volumeName="kubernetes.io/configmap/400a13b5-c489-4beb-af33-94e635b86148-config" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: 
I0312 21:08:59.153564 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70baf3e2-83bb-4156-afb3-30ca8e3d1d9d" volumeName="kubernetes.io/configmap/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-audit" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153573 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="900228dd-2d21-4759-87da-b027b0134ad8" volumeName="kubernetes.io/projected/900228dd-2d21-4759-87da-b027b0134ad8-bound-sa-token" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153585 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="135ec6f3-fbc0-4840-a4b1-c1124c705161" volumeName="kubernetes.io/secret/135ec6f3-fbc0-4840-a4b1-c1124c705161-signing-key" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153596 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17d2bb40-74e2-4894-a884-7018952bdf71" volumeName="kubernetes.io/configmap/17d2bb40-74e2-4894-a884-7018952bdf71-images" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153608 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567a9a33-1a82-4c48-b541-7e0eaae11f57" volumeName="kubernetes.io/projected/567a9a33-1a82-4c48-b541-7e0eaae11f57-kube-api-access-nzn6t" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153620 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="226cb3a1-984f-4410-96e6-c007131dc074" volumeName="kubernetes.io/secret/226cb3a1-984f-4410-96e6-c007131dc074-cluster-olm-operator-serving-cert" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 
21:08:59.153631 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5471994f-769e-4124-b7d0-01f5358fc18f" volumeName="kubernetes.io/configmap/5471994f-769e-4124-b7d0-01f5358fc18f-etcd-service-ca" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153641 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70baf3e2-83bb-4156-afb3-30ca8e3d1d9d" volumeName="kubernetes.io/projected/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-kube-api-access-qqhhz" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153650 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="900228dd-2d21-4759-87da-b027b0134ad8" volumeName="kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153669 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5d1e064-c12b-4c1d-b499-4e301ca8a8dc" volumeName="kubernetes.io/configmap/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc-service-ca-bundle" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153684 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="67e68ff0-f54d-4973-bbe7-ed43ce542bc0" volumeName="kubernetes.io/configmap/67e68ff0-f54d-4973-bbe7-ed43ce542bc0-images" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153695 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="07330030-487d-4fa6-b5c3-67607355bbba" volumeName="kubernetes.io/projected/07330030-487d-4fa6-b5c3-67607355bbba-kube-api-access-bhcsd" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 
kubenswrapper[31456]: I0312 21:08:59.153704 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="07542516-49c8-4e20-9b97-798fbff850a5" volumeName="kubernetes.io/projected/07542516-49c8-4e20-9b97-798fbff850a5-kube-api-access-z9xld" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153723 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4c589179-0df4-4fe8-bfdd-965c3e7652c5" volumeName="kubernetes.io/empty-dir/4c589179-0df4-4fe8-bfdd-965c3e7652c5-utilities" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153743 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="508cb83e-6f25-4235-8c56-b25b762ebcad" volumeName="kubernetes.io/secret/508cb83e-6f25-4235-8c56-b25b762ebcad-proxy-tls" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153755 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52839a08-0871-44d3-9d22-b2f6b4383b99" volumeName="kubernetes.io/empty-dir/52839a08-0871-44d3-9d22-b2f6b4383b99-tmp" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153766 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b50a6106-1112-4a4b-b4ae-933879e12110" volumeName="kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-client-ca" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153777 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b71376ea-e248-48fc-b2c4-1de7236ddd31" volumeName="kubernetes.io/secret/b71376ea-e248-48fc-b2c4-1de7236ddd31-cert" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 
21:08:59.153787 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="32050f14-1939-41bf-a824-22016b90c189" volumeName="kubernetes.io/secret/32050f14-1939-41bf-a824-22016b90c189-samples-operator-tls" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153797 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="400a13b5-c489-4beb-af33-94e635b86148" volumeName="kubernetes.io/configmap/400a13b5-c489-4beb-af33-94e635b86148-auth-proxy-config" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153818 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9" volumeName="kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153829 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a2545a80-0f00-4b19-ab3b-a9aa4bff98e8" volumeName="kubernetes.io/configmap/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-cni-sysctl-allowlist" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153839 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a3bebf49-1d92-4353-b84c-91ed86b7bb94" volumeName="kubernetes.io/configmap/a3bebf49-1d92-4353-b84c-91ed86b7bb94-config" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: I0312 21:08:59.153850 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17d2bb40-74e2-4894-a884-7018952bdf71" volumeName="kubernetes.io/projected/17d2bb40-74e2-4894-a884-7018952bdf71-kube-api-access-lrm2z" seLinuxMountContext="" Mar 12 21:08:59.153704 master-0 kubenswrapper[31456]: 
I0312 21:08:59.153860 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="226cb3a1-984f-4410-96e6-c007131dc074" volumeName="kubernetes.io/empty-dir/226cb3a1-984f-4410-96e6-c007131dc074-operand-assets" seLinuxMountContext="" Mar 12 21:08:59.157183 master-0 kubenswrapper[31456]: I0312 21:08:59.153870 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a3bebf49-1d92-4353-b84c-91ed86b7bb94" volumeName="kubernetes.io/configmap/a3bebf49-1d92-4353-b84c-91ed86b7bb94-trusted-ca-bundle" seLinuxMountContext="" Mar 12 21:08:59.157183 master-0 kubenswrapper[31456]: I0312 21:08:59.153880 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5471994f-769e-4124-b7d0-01f5358fc18f" volumeName="kubernetes.io/secret/5471994f-769e-4124-b7d0-01f5358fc18f-etcd-client" seLinuxMountContext="" Mar 12 21:08:59.157183 master-0 kubenswrapper[31456]: I0312 21:08:59.153891 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="05fd1378-3935-4caf-96c5-17cf7e29417f" volumeName="kubernetes.io/secret/05fd1378-3935-4caf-96c5-17cf7e29417f-cloud-credential-operator-serving-cert" seLinuxMountContext="" Mar 12 21:08:59.157183 master-0 kubenswrapper[31456]: I0312 21:08:59.153901 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="426efd5c-69e1-43e5-835a-6e1c4ef85720" volumeName="kubernetes.io/configmap/426efd5c-69e1-43e5-835a-6e1c4ef85720-ovnkube-identity-cm" seLinuxMountContext="" Mar 12 21:08:59.157183 master-0 kubenswrapper[31456]: I0312 21:08:59.153913 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5d1e064-c12b-4c1d-b499-4e301ca8a8dc" volumeName="kubernetes.io/empty-dir/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc-snapshots" seLinuxMountContext="" Mar 12 21:08:59.157183 master-0 
kubenswrapper[31456]: I0312 21:08:59.153923 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d850d441-7505-4e81-b4cf-6e7a9911ae35" volumeName="kubernetes.io/secret/d850d441-7505-4e81-b4cf-6e7a9911ae35-serving-cert" seLinuxMountContext="" Mar 12 21:08:59.157183 master-0 kubenswrapper[31456]: I0312 21:08:59.153933 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d862a346-ec4d-46f6-a3e2-ea8759ea0111" volumeName="kubernetes.io/configmap/d862a346-ec4d-46f6-a3e2-ea8759ea0111-env-overrides" seLinuxMountContext="" Mar 12 21:08:59.157183 master-0 kubenswrapper[31456]: I0312 21:08:59.153944 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25866ff2-ce38-4bc0-83d9-0d85b8c6b0ce" volumeName="kubernetes.io/projected/25866ff2-ce38-4bc0-83d9-0d85b8c6b0ce-kube-api-access-vcmzz" seLinuxMountContext="" Mar 12 21:08:59.157183 master-0 kubenswrapper[31456]: I0312 21:08:59.153953 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="36bd483b-292e-4e82-99d6-daa612cd385a" volumeName="kubernetes.io/configmap/36bd483b-292e-4e82-99d6-daa612cd385a-trusted-ca-bundle" seLinuxMountContext="" Mar 12 21:08:59.157183 master-0 kubenswrapper[31456]: I0312 21:08:59.153963 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="83368183-0368-44b1-9387-eed32b211988" volumeName="kubernetes.io/configmap/83368183-0368-44b1-9387-eed32b211988-service-ca" seLinuxMountContext="" Mar 12 21:08:59.157183 master-0 kubenswrapper[31456]: I0312 21:08:59.153974 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="90f16d8c-25b6-4827-85d9-0995e4e1ab15" volumeName="kubernetes.io/secret/90f16d8c-25b6-4827-85d9-0995e4e1ab15-tls-certificates" seLinuxMountContext="" Mar 12 21:08:59.157183 master-0 
kubenswrapper[31456]: I0312 21:08:59.153984 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5d1e064-c12b-4c1d-b499-4e301ca8a8dc" volumeName="kubernetes.io/configmap/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc-trusted-ca-bundle" seLinuxMountContext="" Mar 12 21:08:59.157183 master-0 kubenswrapper[31456]: I0312 21:08:59.153995 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e624e623-6d59-444d-b548-165fa5fd2581" volumeName="kubernetes.io/projected/e624e623-6d59-444d-b548-165fa5fd2581-kube-api-access-c5c6t" seLinuxMountContext="" Mar 12 21:08:59.157183 master-0 kubenswrapper[31456]: I0312 21:08:59.154004 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="36bd483b-292e-4e82-99d6-daa612cd385a" volumeName="kubernetes.io/secret/36bd483b-292e-4e82-99d6-daa612cd385a-etcd-client" seLinuxMountContext="" Mar 12 21:08:59.157183 master-0 kubenswrapper[31456]: I0312 21:08:59.154012 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c8660437-633f-4132-8a61-fe998abb493e" volumeName="kubernetes.io/projected/c8660437-633f-4132-8a61-fe998abb493e-kube-api-access-zlch7" seLinuxMountContext="" Mar 12 21:08:59.157183 master-0 kubenswrapper[31456]: I0312 21:08:59.154022 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f8467055-c9c9-4485-bb60-9a79e8b91268" volumeName="kubernetes.io/secret/f8467055-c9c9-4485-bb60-9a79e8b91268-cloud-controller-manager-operator-tls" seLinuxMountContext="" Mar 12 21:08:59.157183 master-0 kubenswrapper[31456]: I0312 21:08:59.154045 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a67ecf3-823d-4948-a5cb-8bd1eb9f259c" volumeName="kubernetes.io/secret/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c-serving-cert" seLinuxMountContext="" Mar 12 
21:08:59.157183 master-0 kubenswrapper[31456]: I0312 21:08:59.154058 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7667a111-e744-47b2-9603-3864347dc738" volumeName="kubernetes.io/secret/7667a111-e744-47b2-9603-3864347dc738-node-exporter-tls" seLinuxMountContext="" Mar 12 21:08:59.157183 master-0 kubenswrapper[31456]: I0312 21:08:59.154068 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="855747e5-d9b4-4eef-8bc4-425d6a8e95c7" volumeName="kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls" seLinuxMountContext="" Mar 12 21:08:59.157183 master-0 kubenswrapper[31456]: I0312 21:08:59.154077 31456 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8b96dd10-18a0-49f8-b488-63fc2b23da39" volumeName="kubernetes.io/projected/8b96dd10-18a0-49f8-b488-63fc2b23da39-kube-api-access-nhhdz" seLinuxMountContext="" Mar 12 21:08:59.157183 master-0 kubenswrapper[31456]: I0312 21:08:59.154087 31456 reconstruct.go:97] "Volume reconstruction finished" Mar 12 21:08:59.157183 master-0 kubenswrapper[31456]: I0312 21:08:59.154094 31456 reconciler.go:26] "Reconciler: start to sync state" Mar 12 21:08:59.166626 master-0 kubenswrapper[31456]: I0312 21:08:59.166533 31456 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 12 21:08:59.168232 master-0 kubenswrapper[31456]: I0312 21:08:59.168197 31456 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 12 21:08:59.168321 master-0 kubenswrapper[31456]: I0312 21:08:59.168236 31456 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 12 21:08:59.168321 master-0 kubenswrapper[31456]: I0312 21:08:59.168261 31456 kubelet.go:2335] "Starting kubelet main sync loop" Mar 12 21:08:59.168321 master-0 kubenswrapper[31456]: E0312 21:08:59.168307 31456 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 21:08:59.207735 master-0 kubenswrapper[31456]: I0312 21:08:59.207604 31456 generic.go:334] "Generic (PLEG): container finished" podID="226cb3a1-984f-4410-96e6-c007131dc074" containerID="eb233dad973c14b986649aa9671fed2fa87adb0d7e06e94ac63133ff5838cbbe" exitCode=0 Mar 12 21:08:59.207735 master-0 kubenswrapper[31456]: I0312 21:08:59.207731 31456 generic.go:334] "Generic (PLEG): container finished" podID="226cb3a1-984f-4410-96e6-c007131dc074" containerID="07c6a141800c2671b4fee399e997579f35911c7306dc3f2e97ee3647edd96e2d" exitCode=0 Mar 12 21:08:59.207953 master-0 kubenswrapper[31456]: I0312 21:08:59.207748 31456 generic.go:334] "Generic (PLEG): container finished" podID="226cb3a1-984f-4410-96e6-c007131dc074" containerID="e46a8739f5b993539e6b61f8310bba6f93754f47cc10fbeca3d3b7bb6aa5cf59" exitCode=0 Mar 12 21:08:59.214060 master-0 kubenswrapper[31456]: I0312 21:08:59.214018 31456 generic.go:334] "Generic (PLEG): container finished" podID="5471994f-769e-4124-b7d0-01f5358fc18f" containerID="a84299e61aaa1595e3e07b0769d34f43309447a83e058608971fd9878868932d" exitCode=0 Mar 12 21:08:59.216156 master-0 kubenswrapper[31456]: I0312 21:08:59.216140 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_954fe7f9-e138-49ab-ab8e-504b75914100/installer/0.log" Mar 12 21:08:59.216244 master-0 kubenswrapper[31456]: I0312 21:08:59.216169 31456 generic.go:334] "Generic (PLEG): container 
finished" podID="954fe7f9-e138-49ab-ab8e-504b75914100" containerID="41e5296df7c3d4b1110f31058e02c84e5cd9852b203025b79d16be32d4b3de88" exitCode=1 Mar 12 21:08:59.218173 master-0 kubenswrapper[31456]: I0312 21:08:59.218156 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-799b6db4d7-jwthf_15ebfbd8-0782-431a-88a3-83af328498d2/openshift-apiserver-operator/1.log" Mar 12 21:08:59.218244 master-0 kubenswrapper[31456]: I0312 21:08:59.218183 31456 generic.go:334] "Generic (PLEG): container finished" podID="15ebfbd8-0782-431a-88a3-83af328498d2" containerID="ac220be40864e46bcbfeebc937d699a58348f8eb40ed949885e1f1fa2e71ed44" exitCode=255 Mar 12 21:08:59.221058 master-0 kubenswrapper[31456]: I0312 21:08:59.221018 31456 generic.go:334] "Generic (PLEG): container finished" podID="222b53b1-7e5c-49d5-9795-fec4d0547398" containerID="ab2ac0f8617112ac113b7f1e35ea96fef230316545e82d9bf694d881d7b9d213" exitCode=0 Mar 12 21:08:59.223001 master-0 kubenswrapper[31456]: I0312 21:08:59.222781 31456 generic.go:334] "Generic (PLEG): container finished" podID="d862a346-ec4d-46f6-a3e2-ea8759ea0111" containerID="29605d6c0d6bf29478ff9cad55789098714848ec2911515b3a1ba1a6b740cc37" exitCode=0 Mar 12 21:08:59.225884 master-0 kubenswrapper[31456]: I0312 21:08:59.225047 31456 generic.go:334] "Generic (PLEG): container finished" podID="4c589179-0df4-4fe8-bfdd-965c3e7652c5" containerID="148dd2cec7b5be28f9e435862613834e20183aa464b3a40bf9588ed300d0ce75" exitCode=0 Mar 12 21:08:59.225884 master-0 kubenswrapper[31456]: I0312 21:08:59.225068 31456 generic.go:334] "Generic (PLEG): container finished" podID="4c589179-0df4-4fe8-bfdd-965c3e7652c5" containerID="2343eedc615ca5a68e9b6c26c7cebd6a505b4d3931d7695418b25f7d657329ac" exitCode=0 Mar 12 21:08:59.232220 master-0 kubenswrapper[31456]: I0312 21:08:59.232013 31456 generic.go:334] "Generic (PLEG): container finished" podID="b7229c42-b6bc-4ea9-946c-71a4117f53e9" 
containerID="a9372e5a66ee073d516aa24c5b57ac0c91b01b45a59c442400035352b3c5eae6" exitCode=0 Mar 12 21:08:59.232220 master-0 kubenswrapper[31456]: I0312 21:08:59.232216 31456 generic.go:334] "Generic (PLEG): container finished" podID="b7229c42-b6bc-4ea9-946c-71a4117f53e9" containerID="ebc67e3afd812abeee907445ae9b930d7259656ae3cc6339095705aac5cecd88" exitCode=0 Mar 12 21:08:59.235093 master-0 kubenswrapper[31456]: I0312 21:08:59.235048 31456 generic.go:334] "Generic (PLEG): container finished" podID="7667a111-e744-47b2-9603-3864347dc738" containerID="4ae9acc07c3f6ce3eca66b7339a23374d2c3e5674298f965efd90da0b1f1e7df" exitCode=0 Mar 12 21:08:59.238528 master-0 kubenswrapper[31456]: I0312 21:08:59.238504 31456 generic.go:334] "Generic (PLEG): container finished" podID="980191fe-c62c-4b9e-879c-38fa8ce0a58b" containerID="812a4d4164b66d6dc3ca8693d14eb3fcdb3c84deb2faed8cede318f4eacda9e5" exitCode=0 Mar 12 21:08:59.238628 master-0 kubenswrapper[31456]: I0312 21:08:59.238613 31456 generic.go:334] "Generic (PLEG): container finished" podID="980191fe-c62c-4b9e-879c-38fa8ce0a58b" containerID="accc03035ed32e15e8d41d3c28ac222345b1487c05148782dfac6e42d8ef00ab" exitCode=0 Mar 12 21:08:59.239767 master-0 kubenswrapper[31456]: E0312 21:08:59.239729 31456 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 21:08:59.240772 master-0 kubenswrapper[31456]: I0312 21:08:59.240743 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-48hk7_426efd5c-69e1-43e5-835a-6e1c4ef85720/approver/1.log" Mar 12 21:08:59.241324 master-0 kubenswrapper[31456]: I0312 21:08:59.241282 31456 generic.go:334] "Generic (PLEG): container finished" podID="426efd5c-69e1-43e5-835a-6e1c4ef85720" containerID="26bae4b1151179f8943350ed41cce4211f30fc7d0bc576d35eb657f821dc0907" exitCode=1 Mar 12 21:08:59.243037 master-0 kubenswrapper[31456]: I0312 21:08:59.242993 31456 generic.go:334] "Generic (PLEG): 
container finished" podID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerID="e2916ee608198e843f503ac1b99774e97d332ea70158688e35693b97b4ee8573" exitCode=0 Mar 12 21:08:59.246494 master-0 kubenswrapper[31456]: I0312 21:08:59.246459 31456 generic.go:334] "Generic (PLEG): container finished" podID="567a9a33-1a82-4c48-b541-7e0eaae11f57" containerID="ef4905400a7b4f3b7293612d78dd05ee07faf771c60f7ce597f959bf755256e4" exitCode=0 Mar 12 21:08:59.246565 master-0 kubenswrapper[31456]: I0312 21:08:59.246497 31456 generic.go:334] "Generic (PLEG): container finished" podID="567a9a33-1a82-4c48-b541-7e0eaae11f57" containerID="5b959eb86868abbb3911c6888fbbe4637dd94eb120d52558a304ceb3cf5d43e3" exitCode=0 Mar 12 21:08:59.248117 master-0 kubenswrapper[31456]: I0312 21:08:59.248098 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-9j7rx_a3bebf49-1d92-4353-b84c-91ed86b7bb94/authentication-operator/1.log" Mar 12 21:08:59.248251 master-0 kubenswrapper[31456]: I0312 21:08:59.248233 31456 generic.go:334] "Generic (PLEG): container finished" podID="a3bebf49-1d92-4353-b84c-91ed86b7bb94" containerID="65753e4931b3081b10e537c0401b4155fdbc512202e120631ec6b784c53ee11c" exitCode=255 Mar 12 21:08:59.250616 master-0 kubenswrapper[31456]: I0312 21:08:59.250597 31456 generic.go:334] "Generic (PLEG): container finished" podID="5d919d0a-f152-43da-aec3-080812c0d2d6" containerID="607e25a8dd52c1bd5d656d7e56ad63215f5d6ac7b9578ad98c15a18a5607da53" exitCode=0 Mar 12 21:08:59.253776 master-0 kubenswrapper[31456]: I0312 21:08:59.253752 31456 generic.go:334] "Generic (PLEG): container finished" podID="0c6afe7e-de9d-41d3-8e34-9523a46da697" containerID="99189d1662670a8accfafb7d98b62dd2bd3324bd586c75f160c786893e14a45b" exitCode=0 Mar 12 21:08:59.259316 master-0 kubenswrapper[31456]: I0312 21:08:59.259284 31456 generic.go:334] "Generic (PLEG): container finished" podID="7623a5c6-47a9-4b75-bda8-c0a2d7c67272" 
containerID="d768bc84b40192023bb465579879b2b58033844ecac405b3a22bcb789eb76d17" exitCode=0 Mar 12 21:08:59.266191 master-0 kubenswrapper[31456]: I0312 21:08:59.266154 31456 generic.go:334] "Generic (PLEG): container finished" podID="96bd86df-2101-47f5-844b-1332261c66f1" containerID="249a7dffa361592f6c3fc3dfb8d871762e2347411c14fdf281e698f89aa84b04" exitCode=0 Mar 12 21:08:59.268012 master-0 kubenswrapper[31456]: I0312 21:08:59.267946 31456 generic.go:334] "Generic (PLEG): container finished" podID="36bd483b-292e-4e82-99d6-daa612cd385a" containerID="267a64486f8cbc2e49d6948157350cf49703f8760c6b07509071b5afa54518d3" exitCode=0 Mar 12 21:08:59.268411 master-0 kubenswrapper[31456]: E0312 21:08:59.268395 31456 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 12 21:08:59.272424 master-0 kubenswrapper[31456]: I0312 21:08:59.272391 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_869e3d2a-1b5c-426f-945a-ddd44a9a5033/installer/0.log" Mar 12 21:08:59.272512 master-0 kubenswrapper[31456]: I0312 21:08:59.272430 31456 generic.go:334] "Generic (PLEG): container finished" podID="869e3d2a-1b5c-426f-945a-ddd44a9a5033" containerID="36bfe1f3ee1124371de60181a0f2b9f61930c3b4af0a3a9413b95d937717a871" exitCode=1 Mar 12 21:08:59.276859 master-0 kubenswrapper[31456]: I0312 21:08:59.276795 31456 generic.go:334] "Generic (PLEG): container finished" podID="237e5a97-fb81-4609-8538-c55a8e2db411" containerID="9635b8a1063656701a872bccc0f8a9cd07d562ac36399e3e09153a9c74ff44b7" exitCode=0 Mar 12 21:08:59.278523 master-0 kubenswrapper[31456]: I0312 21:08:59.278429 31456 generic.go:334] "Generic (PLEG): container finished" podID="135ec6f3-fbc0-4840-a4b1-c1124c705161" containerID="46ded837719c01c62e0a027c72064dacb46bd2417ff8fe1a0f12a339ce0c296a" exitCode=0 Mar 12 21:08:59.281662 master-0 kubenswrapper[31456]: I0312 21:08:59.281638 31456 generic.go:334] "Generic (PLEG): container 
finished" podID="077dd10388b9e3e48a07382126e86621" containerID="52f8cc40b0daf7f102ea6364b20a287ac9f811651bcaf6ef7554a793bf5238c2" exitCode=0 Mar 12 21:08:59.283497 master-0 kubenswrapper[31456]: I0312 21:08:59.283459 31456 generic.go:334] "Generic (PLEG): container finished" podID="900228dd-2d21-4759-87da-b027b0134ad8" containerID="1746524fbf252ae2860d518e4df6a02c7aaf28a067d9493a2d0daedd8741f97f" exitCode=0 Mar 12 21:08:59.285150 master-0 kubenswrapper[31456]: I0312 21:08:59.285115 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log" Mar 12 21:08:59.285483 master-0 kubenswrapper[31456]: I0312 21:08:59.285458 31456 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="faa71480f217fad716866bc98bd8270b2f07bd2a29f5aa069d90b575671a024e" exitCode=1 Mar 12 21:08:59.285483 master-0 kubenswrapper[31456]: I0312 21:08:59.285476 31456 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="5aa72aa1d101c59af48adafd81202e715494ce655baaeb5ca917a23de1012db8" exitCode=0 Mar 12 21:08:59.287485 master-0 kubenswrapper[31456]: I0312 21:08:59.287451 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-sh67s_67e68ff0-f54d-4973-bbe7-ed43ce542bc0/machine-api-operator/0.log" Mar 12 21:08:59.287870 master-0 kubenswrapper[31456]: I0312 21:08:59.287840 31456 generic.go:334] "Generic (PLEG): container finished" podID="67e68ff0-f54d-4973-bbe7-ed43ce542bc0" containerID="b7d1be82f9f49361682b3eacda43c7c489bc2b5e8762684eea2266a906f1e97a" exitCode=255 Mar 12 21:08:59.289739 master-0 kubenswrapper[31456]: I0312 21:08:59.289702 31456 generic.go:334] "Generic (PLEG): container finished" podID="fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6" 
containerID="72fca1fe5edaa514a27832ab602fe41af2b798cb5366c953a186e585a0605c57" exitCode=0 Mar 12 21:08:59.295831 master-0 kubenswrapper[31456]: I0312 21:08:59.295768 31456 generic.go:334] "Generic (PLEG): container finished" podID="a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d" containerID="083e8e2171f84572bdd5f30426ffba317f16817f3ae58d7c00019c197700b69d" exitCode=0 Mar 12 21:08:59.301011 master-0 kubenswrapper[31456]: I0312 21:08:59.300957 31456 generic.go:334] "Generic (PLEG): container finished" podID="7f3afe47-c537-420c-b5be-1cad612e119d" containerID="36e67678697aff60b4f84c6384733c369857b33eb259f71b1dbb059fc06204fb" exitCode=0 Mar 12 21:08:59.319836 master-0 kubenswrapper[31456]: I0312 21:08:59.319097 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-qpf68_2b71f537-1cc2-4645-8e50-23941635457c/ingress-operator/4.log" Mar 12 21:08:59.320073 master-0 kubenswrapper[31456]: I0312 21:08:59.320028 31456 generic.go:334] "Generic (PLEG): container finished" podID="2b71f537-1cc2-4645-8e50-23941635457c" containerID="4c4d56e2fde6c2410a3aa723a3533a20727be585533619aed7037adf0a4a8960" exitCode=1 Mar 12 21:08:59.337105 master-0 kubenswrapper[31456]: I0312 21:08:59.337044 31456 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="9d4f8c64eddb4e3b0d519c870ca47049e39126a8c78d8b9d4e92971fdcedf0ce" exitCode=0 Mar 12 21:08:59.337105 master-0 kubenswrapper[31456]: I0312 21:08:59.337081 31456 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="e15e3282e5b40a84b8a52ea1ba64dbbfb71a2f40822a028fb5e47eb69a3af82b" exitCode=0 Mar 12 21:08:59.337105 master-0 kubenswrapper[31456]: I0312 21:08:59.337090 31456 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="6505ef13a4bc86d0ecb1621927f731e78b211dc76a1d482556926db3756019bd" exitCode=0 Mar 12 21:08:59.340113 master-0 
kubenswrapper[31456]: E0312 21:08:59.340054 31456 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 21:08:59.340113 master-0 kubenswrapper[31456]: I0312 21:08:59.340092 31456 generic.go:334] "Generic (PLEG): container finished" podID="b50a6106-1112-4a4b-b4ae-933879e12110" containerID="8dc00850a2298439a85382d76a3ffd123f490ec7c79324ad9a9c72fd9448c30b" exitCode=0 Mar 12 21:08:59.345392 master-0 kubenswrapper[31456]: I0312 21:08:59.345338 31456 generic.go:334] "Generic (PLEG): container finished" podID="c3daeefa-7842-464c-a6c9-01b44ebea477" containerID="29a66354284f4876d7830823c349cadde817f41becb6c2b46ab19ae09fa84f0c" exitCode=0 Mar 12 21:08:59.348260 master-0 kubenswrapper[31456]: I0312 21:08:59.348213 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-btpxl_f8467055-c9c9-4485-bb60-9a79e8b91268/config-sync-controllers/0.log" Mar 12 21:08:59.349082 master-0 kubenswrapper[31456]: I0312 21:08:59.349038 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-btpxl_f8467055-c9c9-4485-bb60-9a79e8b91268/cluster-cloud-controller-manager/0.log" Mar 12 21:08:59.349244 master-0 kubenswrapper[31456]: I0312 21:08:59.349108 31456 generic.go:334] "Generic (PLEG): container finished" podID="f8467055-c9c9-4485-bb60-9a79e8b91268" containerID="18344b8e4a33f4c35bb70a4b908fe016ad02097c53ac346b4a920c21a96bb7bc" exitCode=1 Mar 12 21:08:59.349244 master-0 kubenswrapper[31456]: I0312 21:08:59.349136 31456 generic.go:334] "Generic (PLEG): container finished" podID="f8467055-c9c9-4485-bb60-9a79e8b91268" containerID="35a48c44f0a4c7fdef814d1fdd69f5e797632637da5b33039378ae2cc0e1e688" exitCode=1 Mar 12 21:08:59.351005 master-0 kubenswrapper[31456]: I0312 21:08:59.350967 31456 generic.go:334] "Generic (PLEG): container 
finished" podID="e624e623-6d59-444d-b548-165fa5fd2581" containerID="39d3c428744e31947d0aba2cc71c1c8335e2ced3049d8e6b24468cee1c398ffb" exitCode=0 Mar 12 21:08:59.355950 master-0 kubenswrapper[31456]: I0312 21:08:59.355923 31456 generic.go:334] "Generic (PLEG): container finished" podID="a2545a80-0f00-4b19-ab3b-a9aa4bff98e8" containerID="dff388636097d32c6363bd0b2483f1d9c5210a858615e76eaa57853e4405a2b0" exitCode=0 Mar 12 21:08:59.355950 master-0 kubenswrapper[31456]: I0312 21:08:59.355944 31456 generic.go:334] "Generic (PLEG): container finished" podID="a2545a80-0f00-4b19-ab3b-a9aa4bff98e8" containerID="583c873e3d835c6e05c94172cd7043791e47625e0cc941a8a498c15d7dcde4e3" exitCode=0 Mar 12 21:08:59.356071 master-0 kubenswrapper[31456]: I0312 21:08:59.355953 31456 generic.go:334] "Generic (PLEG): container finished" podID="a2545a80-0f00-4b19-ab3b-a9aa4bff98e8" containerID="ba582835d70280ab686cd92c06c36d3f8c1b51d4a50b6f6d872889ebb52af604" exitCode=0 Mar 12 21:08:59.356071 master-0 kubenswrapper[31456]: I0312 21:08:59.355977 31456 generic.go:334] "Generic (PLEG): container finished" podID="a2545a80-0f00-4b19-ab3b-a9aa4bff98e8" containerID="f5be33e5e1cb19154b4137bf5e307d01b21c816569a4f493dfb02ba284a02c43" exitCode=0 Mar 12 21:08:59.356071 master-0 kubenswrapper[31456]: I0312 21:08:59.355984 31456 generic.go:334] "Generic (PLEG): container finished" podID="a2545a80-0f00-4b19-ab3b-a9aa4bff98e8" containerID="4ffd6f14ac61ffabe5bcfc6578f791f07638af2dede3fe79398a339525e37d25" exitCode=0 Mar 12 21:08:59.356071 master-0 kubenswrapper[31456]: I0312 21:08:59.355990 31456 generic.go:334] "Generic (PLEG): container finished" podID="a2545a80-0f00-4b19-ab3b-a9aa4bff98e8" containerID="f1489aa28f1df9edd0eec54c9b66a8a7e1d73e8d6be27d02b6cab3f145aeea26" exitCode=0 Mar 12 21:08:59.368774 master-0 kubenswrapper[31456]: I0312 21:08:59.368736 31456 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-69rp9_981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9/cluster-node-tuning-operator/1.log" Mar 12 21:08:59.368880 master-0 kubenswrapper[31456]: I0312 21:08:59.368788 31456 generic.go:334] "Generic (PLEG): container finished" podID="981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9" containerID="1152dcaad32a43ba9e378941f51d853a2e7fc508d86ad05335f3c348f68fdd30" exitCode=1 Mar 12 21:08:59.372244 master-0 kubenswrapper[31456]: I0312 21:08:59.372196 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-fnxjc_17d2bb40-74e2-4894-a884-7018952bdf71/cluster-baremetal-operator/1.log" Mar 12 21:08:59.372562 master-0 kubenswrapper[31456]: I0312 21:08:59.372533 31456 generic.go:334] "Generic (PLEG): container finished" podID="17d2bb40-74e2-4894-a884-7018952bdf71" containerID="57afad4e3efc3237af416deb66bd4d026f0ff91e709bfe7cc68bb56bee784fe7" exitCode=1 Mar 12 21:08:59.374723 master-0 kubenswrapper[31456]: I0312 21:08:59.374703 31456 generic.go:334] "Generic (PLEG): container finished" podID="508cb83e-6f25-4235-8c56-b25b762ebcad" containerID="b9da34034a4775625020d205d9436694d65b54d0723190096309ce81aab32e93" exitCode=0 Mar 12 21:08:59.377267 master-0 kubenswrapper[31456]: I0312 21:08:59.377235 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/cluster-policy-controller/5.log" Mar 12 21:08:59.378049 master-0 kubenswrapper[31456]: I0312 21:08:59.378013 31456 generic.go:334] "Generic (PLEG): container finished" podID="7678a2e61b792fe3be55b1c6f67b2aa2" containerID="a2a7bc20f4e9a2a1af8d7434056bb400f5ad20ab8f0474397aa76de25d0db770" exitCode=255 Mar 12 21:08:59.382203 master-0 kubenswrapper[31456]: I0312 21:08:59.382166 31456 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-hdd4n_8b96dd10-18a0-49f8-b488-63fc2b23da39/manager/1.log" Mar 12 21:08:59.382610 master-0 kubenswrapper[31456]: I0312 21:08:59.382573 31456 generic.go:334] "Generic (PLEG): container finished" podID="8b96dd10-18a0-49f8-b488-63fc2b23da39" containerID="41630d24dfd109bc636aa9398130da834c84ba29e895cfce030b4e66d9af23d1" exitCode=1 Mar 12 21:08:59.384251 master-0 kubenswrapper[31456]: I0312 21:08:59.384221 31456 generic.go:334] "Generic (PLEG): container finished" podID="90f0e4da-71d4-4c4e-a2fc-9ef588daaf72" containerID="abe372f4a5201ee9f2be20bd5b5a3dc0db95881ce3285f6e1c8475b0ef9714a6" exitCode=0 Mar 12 21:08:59.385847 master-0 kubenswrapper[31456]: I0312 21:08:59.385797 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-zgjqw_cf33c432-db42-4c6d-8ee4-f089e5bf8203/manager/1.log" Mar 12 21:08:59.386090 master-0 kubenswrapper[31456]: I0312 21:08:59.386054 31456 generic.go:334] "Generic (PLEG): container finished" podID="cf33c432-db42-4c6d-8ee4-f089e5bf8203" containerID="56254e13e7b801a5fa972ca401568f81e069fab8d80a9daa794e70d67c31681f" exitCode=1 Mar 12 21:08:59.390423 master-0 kubenswrapper[31456]: I0312 21:08:59.390388 31456 generic.go:334] "Generic (PLEG): container finished" podID="4a67ecf3-823d-4948-a5cb-8bd1eb9f259c" containerID="1d13c664a16a834bb594ce779624d3af44ce1b13763cae9c9fac074c11de4252" exitCode=0 Mar 12 21:08:59.392062 master-0 kubenswrapper[31456]: I0312 21:08:59.392024 31456 generic.go:334] "Generic (PLEG): container finished" podID="2604b035-853c-42b7-a562-07d46178868a" containerID="4c1c1c1b8851a87caaa47906af218c648432043d5537dde4d7c6aa9df599a39a" exitCode=0 Mar 12 21:08:59.397243 master-0 kubenswrapper[31456]: I0312 21:08:59.397206 31456 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-7f65c457f5-qfbrj_07542516-49c8-4e20-9b97-798fbff850a5/kube-storage-version-migrator-operator/1.log" Mar 12 21:08:59.397243 master-0 kubenswrapper[31456]: I0312 21:08:59.397246 31456 generic.go:334] "Generic (PLEG): container finished" podID="07542516-49c8-4e20-9b97-798fbff850a5" containerID="ded70f8c305f91b4cd97482dbdf153ec9254b0cfdc370f5b14f5e7f5ee654d15" exitCode=255 Mar 12 21:08:59.399505 master-0 kubenswrapper[31456]: I0312 21:08:59.399483 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1453f6461bf5d599ad65a4656343ee91/kube-scheduler/0.log" Mar 12 21:08:59.399895 master-0 kubenswrapper[31456]: I0312 21:08:59.399868 31456 generic.go:334] "Generic (PLEG): container finished" podID="1453f6461bf5d599ad65a4656343ee91" containerID="30bd0d1ae984ab9c16e404ca61f305cdc008b61e24e3fa41bdfaeaa497182321" exitCode=1 Mar 12 21:08:59.399895 master-0 kubenswrapper[31456]: I0312 21:08:59.399892 31456 generic.go:334] "Generic (PLEG): container finished" podID="1453f6461bf5d599ad65a4656343ee91" containerID="960bfa0d0eebfdde5dda543dfe04a76816e7b84b67e487e2787a47f72cbbf5a5" exitCode=0 Mar 12 21:08:59.401943 master-0 kubenswrapper[31456]: I0312 21:08:59.401920 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-r6rcq_b71376ea-e248-48fc-b2c4-1de7236ddd31/cluster-autoscaler-operator/0.log" Mar 12 21:08:59.402303 master-0 kubenswrapper[31456]: I0312 21:08:59.402276 31456 generic.go:334] "Generic (PLEG): container finished" podID="b71376ea-e248-48fc-b2c4-1de7236ddd31" containerID="1174e3de7390f133d9714b1c4e07a2aef601c6b39a42d38f1fea541e106e1fb1" exitCode=255 Mar 12 21:08:59.404635 master-0 kubenswrapper[31456]: I0312 21:08:59.404608 31456 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-8fk8w_d4a162d4-8086-4bcf-854d-7e6cd37fd4c7/snapshot-controller/4.log" Mar 12 21:08:59.404706 master-0 kubenswrapper[31456]: I0312 21:08:59.404638 31456 generic.go:334] "Generic (PLEG): container finished" podID="d4a162d4-8086-4bcf-854d-7e6cd37fd4c7" containerID="b4eac54179aa0f6fee4bb1e73d72504459ad2137a7bd3a9e3938754da7f51c6d" exitCode=1 Mar 12 21:08:59.407701 master-0 kubenswrapper[31456]: I0312 21:08:59.407668 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-xzwfp_e03d34d0-f7c1-4dcf-8b84-89ad647cc10f/control-plane-machine-set-operator/0.log" Mar 12 21:08:59.407787 master-0 kubenswrapper[31456]: I0312 21:08:59.407710 31456 generic.go:334] "Generic (PLEG): container finished" podID="e03d34d0-f7c1-4dcf-8b84-89ad647cc10f" containerID="5dd1e415f7dea320798ed071f084a01d7f961a59cb235657d89f90c5a715804d" exitCode=1 Mar 12 21:08:59.410520 master-0 kubenswrapper[31456]: I0312 21:08:59.410486 31456 generic.go:334] "Generic (PLEG): container finished" podID="70baf3e2-83bb-4156-afb3-30ca8e3d1d9d" containerID="63062433342e426f59b2ec0520cb717a967985a843175b969c1cc95d8f71e8d3" exitCode=0 Mar 12 21:08:59.413268 master-0 kubenswrapper[31456]: I0312 21:08:59.413243 31456 generic.go:334] "Generic (PLEG): container finished" podID="784599a3-a2ac-46ac-a4b7-9439704646cc" containerID="ab706de1955bf19700e84d8f799385030b60c4a92c4860f12c06db2b3816fd99" exitCode=0 Mar 12 21:08:59.416072 master-0 kubenswrapper[31456]: I0312 21:08:59.416027 31456 generic.go:334] "Generic (PLEG): container finished" podID="d87b7a20-047e-4521-996c-9b11d81e9bd0" containerID="2782822a08b1aa7b74a8813bdda6c24b76842bfecde841229b05dc04dcc388f3" exitCode=0 Mar 12 21:08:59.417304 master-0 kubenswrapper[31456]: I0312 21:08:59.417283 31456 generic.go:334] "Generic (PLEG): container finished" podID="367123ca-5a21-415c-8ac2-6d875696536b" 
containerID="73ffa716ed0ceb1f05c1ae94138aa9510898a766a0ea47f5fb2644e437ab8da6" exitCode=0 Mar 12 21:08:59.419363 master-0 kubenswrapper[31456]: I0312 21:08:59.419326 31456 generic.go:334] "Generic (PLEG): container finished" podID="d6eace9f-a52d-4570-a932-959538e1f2bc" containerID="37559cb1fc26e8f71d249fd47dc58f59a02dee845bd19ab0e20cc4ad87f91c1a" exitCode=0 Mar 12 21:08:59.419363 master-0 kubenswrapper[31456]: I0312 21:08:59.419352 31456 generic.go:334] "Generic (PLEG): container finished" podID="d6eace9f-a52d-4570-a932-959538e1f2bc" containerID="3f6a1c2c30754eda79aab1b24bbae4763c9876f50ed1598101e4f927c245331b" exitCode=0 Mar 12 21:08:59.424800 master-0 kubenswrapper[31456]: I0312 21:08:59.424764 31456 generic.go:334] "Generic (PLEG): container finished" podID="4d69687f-b8a5-4643-8268-ce30df5db3bc" containerID="53a1a855e95809da5db41ddc57b03bad15e98992f9948ca3ac283e20c3052783" exitCode=0 Mar 12 21:08:59.426723 master-0 kubenswrapper[31456]: I0312 21:08:59.426695 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-754bdc9f9d-hj9bb_400a13b5-c489-4beb-af33-94e635b86148/machine-approver-controller/0.log" Mar 12 21:08:59.427029 master-0 kubenswrapper[31456]: I0312 21:08:59.427001 31456 generic.go:334] "Generic (PLEG): container finished" podID="400a13b5-c489-4beb-af33-94e635b86148" containerID="0a5780f6022da4e29888a4248f2002849d195cb3f0bde73988863a5f5ecbe533" exitCode=255 Mar 12 21:08:59.440316 master-0 kubenswrapper[31456]: E0312 21:08:59.440259 31456 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 21:08:59.468782 master-0 kubenswrapper[31456]: E0312 21:08:59.468732 31456 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 12 21:08:59.542159 master-0 kubenswrapper[31456]: E0312 21:08:59.540900 31456 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"master-0\" not found" Mar 12 21:08:59.641094 master-0 kubenswrapper[31456]: E0312 21:08:59.641028 31456 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 21:08:59.690082 master-0 kubenswrapper[31456]: I0312 21:08:59.690047 31456 manager.go:324] Recovery completed Mar 12 21:08:59.747290 master-0 kubenswrapper[31456]: E0312 21:08:59.742940 31456 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 21:08:59.825721 master-0 kubenswrapper[31456]: I0312 21:08:59.825689 31456 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 21:08:59.829160 master-0 kubenswrapper[31456]: I0312 21:08:59.828892 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 21:08:59.829160 master-0 kubenswrapper[31456]: I0312 21:08:59.828943 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 21:08:59.829160 master-0 kubenswrapper[31456]: I0312 21:08:59.828960 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 21:08:59.833479 master-0 kubenswrapper[31456]: I0312 21:08:59.833430 31456 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 12 21:08:59.833479 master-0 kubenswrapper[31456]: I0312 21:08:59.833469 31456 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 12 21:08:59.833583 master-0 kubenswrapper[31456]: I0312 21:08:59.833523 31456 state_mem.go:36] "Initialized new in-memory state store" Mar 12 21:08:59.833789 master-0 kubenswrapper[31456]: I0312 21:08:59.833761 31456 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 12 21:08:59.833850 master-0 kubenswrapper[31456]: I0312 21:08:59.833777 31456 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 12 21:08:59.833850 
master-0 kubenswrapper[31456]: I0312 21:08:59.833799 31456 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint" Mar 12 21:08:59.833850 master-0 kubenswrapper[31456]: I0312 21:08:59.833829 31456 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Mar 12 21:08:59.833850 master-0 kubenswrapper[31456]: I0312 21:08:59.833836 31456 policy_none.go:49] "None policy: Start" Mar 12 21:08:59.838200 master-0 kubenswrapper[31456]: I0312 21:08:59.838168 31456 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 12 21:08:59.838264 master-0 kubenswrapper[31456]: I0312 21:08:59.838214 31456 state_mem.go:35] "Initializing new in-memory state store" Mar 12 21:08:59.838534 master-0 kubenswrapper[31456]: I0312 21:08:59.838517 31456 state_mem.go:75] "Updated machine memory state" Mar 12 21:08:59.838585 master-0 kubenswrapper[31456]: I0312 21:08:59.838539 31456 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Mar 12 21:08:59.843383 master-0 kubenswrapper[31456]: E0312 21:08:59.843326 31456 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 12 21:08:59.861473 master-0 kubenswrapper[31456]: I0312 21:08:59.861420 31456 manager.go:334] "Starting Device Plugin manager" Mar 12 21:08:59.861622 master-0 kubenswrapper[31456]: I0312 21:08:59.861569 31456 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 12 21:08:59.861622 master-0 kubenswrapper[31456]: I0312 21:08:59.861585 31456 server.go:79] "Starting device plugin registration server" Mar 12 21:08:59.862101 master-0 kubenswrapper[31456]: I0312 21:08:59.862081 31456 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 12 21:08:59.862150 master-0 kubenswrapper[31456]: I0312 21:08:59.862103 31456 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 
21:08:59.862610 master-0 kubenswrapper[31456]: I0312 21:08:59.862396 31456 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 12 21:08:59.862610 master-0 kubenswrapper[31456]: I0312 21:08:59.862485 31456 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 12 21:08:59.862610 master-0 kubenswrapper[31456]: I0312 21:08:59.862495 31456 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 12 21:08:59.869196 master-0 kubenswrapper[31456]: I0312 21:08:59.869134 31456 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0","openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 12 21:08:59.869314 master-0 kubenswrapper[31456]: I0312 21:08:59.869231 31456 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 21:08:59.871870 master-0 kubenswrapper[31456]: I0312 21:08:59.871836 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 21:08:59.871870 master-0 kubenswrapper[31456]: I0312 21:08:59.871869 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 21:08:59.872012 master-0 kubenswrapper[31456]: I0312 21:08:59.871880 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 21:08:59.872012 master-0 kubenswrapper[31456]: I0312 21:08:59.871978 31456 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 21:08:59.872453 master-0 kubenswrapper[31456]: I0312 21:08:59.872430 31456 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach" Mar 12 21:08:59.872640 master-0 kubenswrapper[31456]: E0312 21:08:59.872614 31456 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 12 21:08:59.874662 master-0 kubenswrapper[31456]: I0312 21:08:59.874631 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 21:08:59.874731 master-0 kubenswrapper[31456]: I0312 21:08:59.874661 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 21:08:59.874731 master-0 kubenswrapper[31456]: I0312 21:08:59.874673 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 21:08:59.875157 master-0 kubenswrapper[31456]: I0312 21:08:59.875077 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 21:08:59.875157 master-0 kubenswrapper[31456]: I0312 21:08:59.875117 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 21:08:59.875157 master-0 kubenswrapper[31456]: I0312 21:08:59.875136 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 21:08:59.875473 master-0 kubenswrapper[31456]: I0312 21:08:59.875360 31456 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 21:08:59.875621 master-0 kubenswrapper[31456]: I0312 21:08:59.875585 31456 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 21:08:59.878656 master-0 kubenswrapper[31456]: I0312 21:08:59.878593 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 21:08:59.878860 
master-0 kubenswrapper[31456]: I0312 21:08:59.878835 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 21:08:59.878921 master-0 kubenswrapper[31456]: I0312 21:08:59.878865 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 21:08:59.878921 master-0 kubenswrapper[31456]: I0312 21:08:59.878887 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 21:08:59.879015 master-0 kubenswrapper[31456]: I0312 21:08:59.878901 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 21:08:59.879158 master-0 kubenswrapper[31456]: I0312 21:08:59.878932 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 21:08:59.879592 master-0 kubenswrapper[31456]: I0312 21:08:59.879547 31456 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 21:08:59.879732 master-0 kubenswrapper[31456]: I0312 21:08:59.879707 31456 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 21:08:59.883427 master-0 kubenswrapper[31456]: I0312 21:08:59.883398 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 21:08:59.883491 master-0 kubenswrapper[31456]: I0312 21:08:59.883426 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 21:08:59.883491 master-0 kubenswrapper[31456]: I0312 21:08:59.883431 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 21:08:59.883491 master-0 kubenswrapper[31456]: I0312 21:08:59.883446 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 21:08:59.883491 master-0 kubenswrapper[31456]: I0312 21:08:59.883453 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 21:08:59.883491 master-0 kubenswrapper[31456]: I0312 21:08:59.883458 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 21:08:59.883707 master-0 kubenswrapper[31456]: I0312 21:08:59.883570 31456 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 21:08:59.883707 master-0 kubenswrapper[31456]: I0312 21:08:59.883696 31456 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 21:08:59.886333 master-0 kubenswrapper[31456]: I0312 21:08:59.886313 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 21:08:59.886459 master-0 kubenswrapper[31456]: I0312 21:08:59.886444 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 21:08:59.886563 master-0 kubenswrapper[31456]: I0312 21:08:59.886548 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 21:08:59.886900 master-0 kubenswrapper[31456]: I0312 21:08:59.886879 31456 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 21:08:59.887054 master-0 kubenswrapper[31456]: I0312 21:08:59.887005 31456 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 21:08:59.887797 master-0 kubenswrapper[31456]: I0312 21:08:59.887745 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 21:08:59.887797 master-0 kubenswrapper[31456]: I0312 21:08:59.887779 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 21:08:59.887797 master-0 kubenswrapper[31456]: I0312 21:08:59.887787 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 21:08:59.891118 master-0 kubenswrapper[31456]: I0312 21:08:59.891039 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 21:08:59.891118 master-0 kubenswrapper[31456]: I0312 21:08:59.891087 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 21:08:59.891118 master-0 kubenswrapper[31456]: I0312 21:08:59.891107 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 21:08:59.898077 master-0 kubenswrapper[31456]: I0312 21:08:59.898043 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 21:08:59.898188 master-0 kubenswrapper[31456]: I0312 21:08:59.898090 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 21:08:59.898188 master-0 kubenswrapper[31456]: I0312 21:08:59.898111 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 21:08:59.898295 master-0 kubenswrapper[31456]: I0312 21:08:59.898275 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53ca9cb8afb78daa40b60fb8598538d996992c55bbb55bf6668f862728b14188"
Mar 12 21:08:59.898345 master-0 kubenswrapper[31456]: I0312 21:08:59.898303 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cd4ab457c36b4a666cc4b9eccf84f6ef45f43cd01a0b7df77a1a58dcfa9aeee"
Mar 12 21:08:59.898490 master-0 kubenswrapper[31456]: I0312 21:08:59.898463 31456 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 21:08:59.898621 master-0 kubenswrapper[31456]: I0312 21:08:59.898602 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae91d361ecd061c9426dd23452fb232725e7fad18fb34be8d38d0dd0d590d9fe"
Mar 12 21:08:59.898683 master-0 kubenswrapper[31456]: I0312 21:08:59.898673 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ebefd5475e972825bea2703209db4a6c19fbc87674636be31770baa8cd7873b"
Mar 12 21:08:59.898748 master-0 kubenswrapper[31456]: I0312 21:08:59.898738 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28c9b7d298a5e9f87b7b79f9bc1b7d09be186a38e9c6487e815fa087b10965ba"
Mar 12 21:08:59.898911 master-0 kubenswrapper[31456]: I0312 21:08:59.898895 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57edb20a691b07071028f2edb064ac37f76c164057bb37d7d87a25a08a74d8a6"
Mar 12 21:08:59.898999 master-0 kubenswrapper[31456]: I0312 21:08:59.898987 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3eb5ded3b742edb3299ed1f6753980b1fd1f4f50b6f5c825c2828acef79cb23f"
Mar 12 21:08:59.899142 master-0 kubenswrapper[31456]: I0312 21:08:59.899064 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"1867cbd1eea641a204f5d8db13d19bc48d06f54cf7a7cbc0d8d91fbb925b3a69"}
Mar 12 21:08:59.899216 master-0 kubenswrapper[31456]: I0312 21:08:59.899202 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"ddc570d95acec84b08471105156342249118106b435695f1badc9f7a2232d339"}
Mar 12 21:08:59.899277 master-0 kubenswrapper[31456]: I0312 21:08:59.899266 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"0845e7aef44f13460897c051d69b9fc344426906701d1496cc6673dd26243447"}
Mar 12 21:08:59.899344 master-0 kubenswrapper[31456]: I0312 21:08:59.899331 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"04597b2715ae95f58af55df14000ea14c61393b1e3b42149a8be2f89e6b9f26e"}
Mar 12 21:08:59.899403 master-0 kubenswrapper[31456]: I0312 21:08:59.899391 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"78d6b166dcab5df7019e2a3ab78a2ffecd20c5ee5d9fbeedec93a5d8114e7e50"}
Mar 12 21:08:59.899460 master-0 kubenswrapper[31456]: I0312 21:08:59.899449 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerDied","Data":"52f8cc40b0daf7f102ea6364b20a287ac9f811651bcaf6ef7554a793bf5238c2"}
Mar 12 21:08:59.899519 master-0 kubenswrapper[31456]: I0312 21:08:59.899508 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"305e45867f0f5c512d8dca3c39de15088c17eab90b2969aafd739643c4b112ce"}
Mar 12 21:08:59.899582 master-0 kubenswrapper[31456]: I0312 21:08:59.899571 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"6f5c19a3178e0ac81f6a0a19cf655238a7d3c02526a49af4ee450188873df923"}
Mar 12 21:08:59.899644 master-0 kubenswrapper[31456]: I0312 21:08:59.899633 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"faa71480f217fad716866bc98bd8270b2f07bd2a29f5aa069d90b575671a024e"}
Mar 12 21:08:59.899703 master-0 kubenswrapper[31456]: I0312 21:08:59.899692 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"5aa72aa1d101c59af48adafd81202e715494ce655baaeb5ca917a23de1012db8"}
Mar 12 21:08:59.899763 master-0 kubenswrapper[31456]: I0312 21:08:59.899750 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"565b353628a1ea63b479d26fa571cd76b79a30c51d66ca013ff8e18be2cee52e"}
Mar 12 21:08:59.899872 master-0 kubenswrapper[31456]: I0312 21:08:59.899858 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"5dabe459737d88ce0a8534bf402fd762e6432002a626a37ebf731dead719fc05"}
Mar 12 21:08:59.899939 master-0 kubenswrapper[31456]: I0312 21:08:59.899927 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"5194be401cfedf1aa9a9ba57a34137d50e6645b8ccc15b839c616a43fc6af7a9"}
Mar 12 21:08:59.899999 master-0 kubenswrapper[31456]: I0312 21:08:59.899989 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"4a3be27297fda6b8121c5fd145a33a08f85b4f6d139551bd4d8fd9681ff6723c"}
Mar 12 21:08:59.900060 master-0 kubenswrapper[31456]: I0312 21:08:59.900049 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"6af4e71895ff4fe118c23997aeb93f4e84c0f4154b54aa19f8abbc54a8539be2"}
Mar 12 21:08:59.900270 master-0 kubenswrapper[31456]: I0312 21:08:59.900107 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"c526dbf7ac382686d170fe998cb948c25a4b677046ba65421a6b20f7b8069320"}
Mar 12 21:08:59.900328 master-0 kubenswrapper[31456]: I0312 21:08:59.900318 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"9d4f8c64eddb4e3b0d519c870ca47049e39126a8c78d8b9d4e92971fdcedf0ce"}
Mar 12 21:08:59.900387 master-0 kubenswrapper[31456]: I0312 21:08:59.900376 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"e15e3282e5b40a84b8a52ea1ba64dbbfb71a2f40822a028fb5e47eb69a3af82b"}
Mar 12 21:08:59.900446 master-0 kubenswrapper[31456]: I0312 21:08:59.900436 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"6505ef13a4bc86d0ecb1621927f731e78b211dc76a1d482556926db3756019bd"}
Mar 12 21:08:59.900508 master-0 kubenswrapper[31456]: I0312 21:08:59.900497 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"6b1f470bfc702853e69b48b7d0f79deb1d8d72a0d84adbdf6326a6040a96126e"}
Mar 12 21:08:59.900596 master-0 kubenswrapper[31456]: I0312 21:08:59.900584 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"899242a15b2bdf3b4a04fb323647ca94","Type":"ContainerStarted","Data":"2856d5840548c1bc6c65248c16a64600f315dc0e994bef020e791573a50dc5ec"}
Mar 12 21:08:59.900656 master-0 kubenswrapper[31456]: I0312 21:08:59.900645 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"899242a15b2bdf3b4a04fb323647ca94","Type":"ContainerStarted","Data":"873fdfa9ac893a2fcdda2a0631dc6e4eee04d1b74ee51efc77199a0762ee41f6"}
Mar 12 21:08:59.900760 master-0 kubenswrapper[31456]: I0312 21:08:59.900749 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7678a2e61b792fe3be55b1c6f67b2aa2","Type":"ContainerStarted","Data":"b626b2974550fdcabce6b08a32cc3b1da47078dee2fd1671f52a14cd3557b052"}
Mar 12 21:08:59.900841 master-0 kubenswrapper[31456]: I0312 21:08:59.900829 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7678a2e61b792fe3be55b1c6f67b2aa2","Type":"ContainerDied","Data":"a2a7bc20f4e9a2a1af8d7434056bb400f5ad20ab8f0474397aa76de25d0db770"}
Mar 12 21:08:59.900936 master-0 kubenswrapper[31456]: I0312 21:08:59.900923 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7678a2e61b792fe3be55b1c6f67b2aa2","Type":"ContainerStarted","Data":"aadc37b9873c997339d04dc5e3aaeecb47d5f57228484f7cca80ac879f4002d2"}
Mar 12 21:08:59.901019 master-0 kubenswrapper[31456]: I0312 21:08:59.901007 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7678a2e61b792fe3be55b1c6f67b2aa2","Type":"ContainerStarted","Data":"1d02987cfd443da7225f0df6b3ab9f45e0b88c2171ab5627f4e3845fc50178ec"}
Mar 12 21:08:59.901104 master-0 kubenswrapper[31456]: I0312 21:08:59.901091 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7678a2e61b792fe3be55b1c6f67b2aa2","Type":"ContainerStarted","Data":"d3c7faffe68717f40a0072b4ab6a64ec7cccad22e04a4674b15d395e19ec5ebe"}
Mar 12 21:08:59.901195 master-0 kubenswrapper[31456]: I0312 21:08:59.901183 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7678a2e61b792fe3be55b1c6f67b2aa2","Type":"ContainerStarted","Data":"bf1fca480b54d4cfe929b5e83abff120bff7b90a008395758afbaeaea08fe4d6"}
Mar 12 21:08:59.901363 master-0 kubenswrapper[31456]: I0312 21:08:59.901126 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 21:08:59.901432 master-0 kubenswrapper[31456]: I0312 21:08:59.901367 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 21:08:59.901432 master-0 kubenswrapper[31456]: I0312 21:08:59.901379 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 21:08:59.901432 master-0 kubenswrapper[31456]: I0312 21:08:59.901341 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"cca1a31a16c786b4a0358e88dbe17ead89f8ea362282d9e8446c5bfcda9a2898"}
Mar 12 21:08:59.901944 master-0 kubenswrapper[31456]: I0312 21:08:59.901482 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"a96c0be5068b40870e476008e5515f8b602a69ab55e721b1f3a3f75a76b3a98f"}
Mar 12 21:08:59.901944 master-0 kubenswrapper[31456]: I0312 21:08:59.901525 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"fd67aa7de049fcfa1b2eebc98d90103ccc7e8a5a9b9e08168649d625c912f99e"}
Mar 12 21:08:59.901944 master-0 kubenswrapper[31456]: I0312 21:08:59.901541 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerDied","Data":"30bd0d1ae984ab9c16e404ca61f305cdc008b61e24e3fa41bdfaeaa497182321"}
Mar 12 21:08:59.901944 master-0 kubenswrapper[31456]: I0312 21:08:59.901571 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerDied","Data":"960bfa0d0eebfdde5dda543dfe04a76816e7b84b67e487e2787a47f72cbbf5a5"}
Mar 12 21:08:59.901944 master-0 kubenswrapper[31456]: I0312 21:08:59.901588 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"6353db57cf3b1f293a822286253318b9d39e924d2e8facf90ba120b1780e8395"}
Mar 12 21:08:59.901944 master-0 kubenswrapper[31456]: I0312 21:08:59.901657 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f50107dedd1c9152a5e5a3ba57f0fbbfdfa748f7e7733cd6fddf45dabf0eb60d"
Mar 12 21:08:59.901944 master-0 kubenswrapper[31456]: I0312 21:08:59.901681 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37fc84c4a8eee335ea22dc095e587b155c6991b713fe7ec213d1940d68351e07"
Mar 12 21:08:59.901944 master-0 kubenswrapper[31456]: I0312 21:08:59.901715 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="052a8ea937b1e18a23a6811afe7fcef8bdf2f48672ff3e7a1ee17b5ba2abf923"
Mar 12 21:08:59.967827 master-0 kubenswrapper[31456]: I0312 21:08:59.962279 31456 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 21:08:59.979839 master-0 kubenswrapper[31456]: I0312 21:08:59.979696 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 21:08:59.979839 master-0 kubenswrapper[31456]: I0312 21:08:59.979742 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 21:08:59.979839 master-0 kubenswrapper[31456]: I0312 21:08:59.979758 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 21:08:59.979839 master-0 kubenswrapper[31456]: I0312 21:08:59.979791 31456 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 12 21:08:59.991476 master-0 kubenswrapper[31456]: E0312 21:08:59.984076 31456 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0"
Mar 12 21:09:00.185495 master-0 kubenswrapper[31456]: I0312 21:09:00.185344 31456 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 21:09:00.189636 master-0 kubenswrapper[31456]: I0312 21:09:00.189573 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 21:09:00.189636 master-0 kubenswrapper[31456]: I0312 21:09:00.189632 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 21:09:00.189847 master-0 kubenswrapper[31456]: I0312 21:09:00.189650 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 21:09:00.189847 master-0 kubenswrapper[31456]: I0312 21:09:00.189681 31456 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 12 21:09:00.194387 master-0 kubenswrapper[31456]: E0312 21:09:00.194200 31456 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0"
Mar 12 21:09:00.595370 master-0 kubenswrapper[31456]: I0312 21:09:00.595320 31456 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 21:09:00.599797 master-0 kubenswrapper[31456]: I0312 21:09:00.599764 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 21:09:00.600089 master-0 kubenswrapper[31456]: I0312 21:09:00.600064 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 21:09:00.600243 master-0 kubenswrapper[31456]: I0312 21:09:00.600221 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 21:09:00.600401 master-0 kubenswrapper[31456]: I0312 21:09:00.600380 31456 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 12 21:09:00.604351 master-0 kubenswrapper[31456]: E0312 21:09:00.604317 31456 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0"
Mar 12 21:09:01.404754 master-0 kubenswrapper[31456]: I0312 21:09:01.404658 31456 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 21:09:01.408509 master-0 kubenswrapper[31456]: I0312 21:09:01.408436 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 21:09:01.408693 master-0 kubenswrapper[31456]: I0312 21:09:01.408533 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 21:09:01.408693 master-0 kubenswrapper[31456]: I0312 21:09:01.408562 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 21:09:01.408693 master-0 kubenswrapper[31456]: I0312 21:09:01.408631 31456 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 12 21:09:01.413094 master-0 kubenswrapper[31456]: E0312 21:09:01.413032 31456 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0"
Mar 12 21:09:02.629755 master-0 kubenswrapper[31456]: E0312 21:09:02.629635 31456 resource_metrics.go:161] "Error getting summary for resourceMetric prometheus endpoint" err="failed to get node info: node \"master-0\" not found"
Mar 12 21:09:03.014096 master-0 kubenswrapper[31456]: I0312 21:09:03.013905 31456 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 21:09:03.017714 master-0 kubenswrapper[31456]: I0312 21:09:03.017653 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 12 21:09:03.017714 master-0 kubenswrapper[31456]: I0312 21:09:03.017707 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 12 21:09:03.017714 master-0 kubenswrapper[31456]: I0312 21:09:03.017718 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 12 21:09:03.018143 master-0 kubenswrapper[31456]: I0312 21:09:03.017743 31456 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 12 21:09:03.022884 master-0 kubenswrapper[31456]: E0312 21:09:03.022817 31456 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0"
Mar 12 21:09:04.129557 master-0 kubenswrapper[31456]: I0312 21:09:04.129491 31456 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 12 21:09:04.130184 master-0 kubenswrapper[31456]: I0312 21:09:04.129692 31456 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 12 21:09:04.143138 master-0 kubenswrapper[31456]: I0312 21:09:04.143074 31456 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 12 21:09:04.160299 master-0 kubenswrapper[31456]: I0312 21:09:04.160243 31456 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Mar 12 21:09:04.172491 master-0 kubenswrapper[31456]: I0312 21:09:04.172432 31456 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 12 21:09:04.260995 master-0 kubenswrapper[31456]: I0312 21:09:04.260938 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 21:09:04.260995 master-0 kubenswrapper[31456]: I0312 21:09:04.260987 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 21:09:04.261518 master-0 kubenswrapper[31456]: I0312 21:09:04.261017 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7678a2e61b792fe3be55b1c6f67b2aa2-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"7678a2e61b792fe3be55b1c6f67b2aa2\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 21:09:04.261518 master-0 kubenswrapper[31456]: I0312 21:09:04.261034 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7678a2e61b792fe3be55b1c6f67b2aa2-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"7678a2e61b792fe3be55b1c6f67b2aa2\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 21:09:04.261518 master-0 kubenswrapper[31456]: I0312 21:09:04.261060 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 21:09:04.261518 master-0 kubenswrapper[31456]: I0312 21:09:04.261084 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 12 21:09:04.261518 master-0 kubenswrapper[31456]: I0312 21:09:04.261154 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 12 21:09:04.261518 master-0 kubenswrapper[31456]: I0312 21:09:04.261399 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 12 21:09:04.261518 master-0 kubenswrapper[31456]: I0312 21:09:04.261423 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 21:09:04.261518 master-0 kubenswrapper[31456]: I0312 21:09:04.261448 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 12 21:09:04.261518 master-0 kubenswrapper[31456]: I0312 21:09:04.261470 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 21:09:04.261518 master-0 kubenswrapper[31456]: I0312 21:09:04.261493 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 21:09:04.261518 master-0 kubenswrapper[31456]: I0312 21:09:04.261518 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 21:09:04.262084 master-0 kubenswrapper[31456]: I0312 21:09:04.261552 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 21:09:04.262084 master-0 kubenswrapper[31456]: I0312 21:09:04.261611 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 12 21:09:04.262084 master-0 kubenswrapper[31456]: I0312 21:09:04.261714 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 21:09:04.262084 master-0 kubenswrapper[31456]: I0312 21:09:04.261777 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 21:09:04.262084 master-0 kubenswrapper[31456]: I0312 21:09:04.261861 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 21:09:04.262084 master-0 kubenswrapper[31456]: I0312 21:09:04.261899 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 12 21:09:04.262084 master-0 kubenswrapper[31456]: I0312 21:09:04.261968 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 21:09:04.362626 master-0 kubenswrapper[31456]: I0312 21:09:04.362561 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 21:09:04.362867 master-0 kubenswrapper[31456]: I0312 21:09:04.362604 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 21:09:04.362867 master-0 kubenswrapper[31456]: I0312 21:09:04.362766 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 12 21:09:04.362867 master-0 kubenswrapper[31456]: I0312 21:09:04.362852 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 21:09:04.363071 master-0 kubenswrapper[31456]: I0312 21:09:04.362899 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 21:09:04.363071 master-0 kubenswrapper[31456]: I0312 21:09:04.362945 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 21:09:04.363071 master-0 kubenswrapper[31456]: I0312 21:09:04.362987 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7678a2e61b792fe3be55b1c6f67b2aa2-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"7678a2e61b792fe3be55b1c6f67b2aa2\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 21:09:04.363071 master-0 kubenswrapper[31456]: I0312 21:09:04.363030 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7678a2e61b792fe3be55b1c6f67b2aa2-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"7678a2e61b792fe3be55b1c6f67b2aa2\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 21:09:04.363340 master-0 kubenswrapper[31456]: I0312 21:09:04.363084 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 12 21:09:04.363340 master-0 kubenswrapper[31456]: I0312 21:09:04.363202 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7678a2e61b792fe3be55b1c6f67b2aa2-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"7678a2e61b792fe3be55b1c6f67b2aa2\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 21:09:04.363340 master-0 kubenswrapper[31456]: I0312 21:09:04.363273 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " 
pod="openshift-etcd/etcd-master-0" Mar 12 21:09:04.363340 master-0 kubenswrapper[31456]: I0312 21:09:04.363326 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 21:09:04.363572 master-0 kubenswrapper[31456]: I0312 21:09:04.363359 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 12 21:09:04.363572 master-0 kubenswrapper[31456]: I0312 21:09:04.363398 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 21:09:04.363572 master-0 kubenswrapper[31456]: I0312 21:09:04.363439 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 21:09:04.363572 master-0 kubenswrapper[31456]: I0312 21:09:04.363372 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 21:09:04.363572 master-0 kubenswrapper[31456]: I0312 21:09:04.363450 31456 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 21:09:04.363873 master-0 kubenswrapper[31456]: I0312 21:09:04.363568 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7678a2e61b792fe3be55b1c6f67b2aa2-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"7678a2e61b792fe3be55b1c6f67b2aa2\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:09:04.363873 master-0 kubenswrapper[31456]: I0312 21:09:04.363650 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 21:09:04.363873 master-0 kubenswrapper[31456]: I0312 21:09:04.363724 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:09:04.363873 master-0 kubenswrapper[31456]: I0312 21:09:04.363784 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:09:04.364084 master-0 kubenswrapper[31456]: I0312 21:09:04.363900 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:09:04.364084 master-0 kubenswrapper[31456]: I0312 21:09:04.363956 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 12 21:09:04.364084 master-0 kubenswrapper[31456]: I0312 21:09:04.363992 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 21:09:04.364084 master-0 kubenswrapper[31456]: I0312 21:09:04.364031 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 21:09:04.364084 master-0 kubenswrapper[31456]: I0312 21:09:04.364054 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 12 21:09:04.364084 master-0 kubenswrapper[31456]: I0312 21:09:04.364053 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:09:04.364312 master-0 kubenswrapper[31456]: I0312 21:09:04.364098 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:09:04.364312 master-0 kubenswrapper[31456]: I0312 21:09:04.364102 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:09:04.364312 master-0 kubenswrapper[31456]: I0312 21:09:04.364135 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:09:04.364312 master-0 kubenswrapper[31456]: I0312 21:09:04.364157 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:09:04.364312 master-0 kubenswrapper[31456]: I0312 21:09:04.364187 31456 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 21:09:04.364312 master-0 kubenswrapper[31456]: I0312 21:09:04.364207 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 21:09:04.364312 master-0 kubenswrapper[31456]: I0312 21:09:04.364249 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 21:09:04.364312 master-0 kubenswrapper[31456]: I0312 21:09:04.364250 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:09:04.364312 master-0 kubenswrapper[31456]: I0312 21:09:04.364275 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:09:04.364312 master-0 kubenswrapper[31456]: I0312 21:09:04.364306 31456 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 12 21:09:04.364584 master-0 kubenswrapper[31456]: I0312 21:09:04.364325 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:09:04.364584 master-0 kubenswrapper[31456]: I0312 21:09:04.364305 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:09:04.364584 master-0 kubenswrapper[31456]: I0312 21:09:04.364354 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:09:04.380689 master-0 kubenswrapper[31456]: I0312 21:09:04.380577 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 12 21:09:04.384628 master-0 kubenswrapper[31456]: I0312 21:09:04.384575 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:09:04.384799 master-0 kubenswrapper[31456]: I0312 21:09:04.384755 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:09:04.388873 master-0 kubenswrapper[31456]: I0312 21:09:04.388831 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:09:04.393962 master-0 kubenswrapper[31456]: I0312 21:09:04.393923 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 12 21:09:04.491127 master-0 kubenswrapper[31456]: E0312 21:09:04.491069 31456 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"openshift-kube-scheduler-master-0\" already exists" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 12 21:09:04.499194 master-0 kubenswrapper[31456]: E0312 21:09:04.499148 31456 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-startup-monitor-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:09:04.499695 master-0 kubenswrapper[31456]: E0312 21:09:04.499655 31456 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Mar 12 21:09:04.501545 master-0 kubenswrapper[31456]: E0312 21:09:04.501513 31456 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-master-0\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:09:05.121948 master-0 kubenswrapper[31456]: I0312 21:09:05.121897 31456 apiserver.go:52] "Watching apiserver" Mar 12 21:09:05.159526 master-0 kubenswrapper[31456]: I0312 21:09:05.159469 31456 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 12 21:09:05.166309 master-0 kubenswrapper[31456]: I0312 21:09:05.166234 31456 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-operator/network-operator-7c649bf6d4-62t2f","openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n","openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-scheduler/installer-5-master-0","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-qfbrj","openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc","openshift-machine-api/control-plane-machine-set-operator-6686554ddc-xzwfp","openshift-marketplace/community-operators-jblsg","openshift-cluster-node-tuning-operator/tuned-btxk2","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-ovn-kubernetes/ovnkube-node-nhrpd","openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx","openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-269gt","openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt","openshift-oauth-apiserver/apiserver-7946996f87-nzb7c","openshift-kube-scheduler/installer-4-master-0","openshift-kube-storage-version-migrator/migrator-57ccdf9b5-jd4pv","assisted-installer/assisted-installer-controller-jffs8","openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wjpf9","openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w","openshift-etcd/installer-1-master-0","openshift-ingress-canary/ingress-canary-67vs7","openshift-insights/insights-operator-8f89dfddd-lc7jk","openshift-machine-config-operator/machine-config-server-mz2sr","openshift-config-operator/openshift-config-operator-64488f9d78-zsd76","openshift-ingress-operator/ingress-operator-677db989d6-qpf68","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-marketplace/redhat-operators-gxjmz","openshift-multus/network-metrics-daemon-brdcd","openshift-service-ca-operator/service-ca-operator-69b6fc6b88-f62j6","openshift-apiserver-opera
tor/openshift-apiserver-operator-799b6db4d7-jwthf","openshift-apiserver/apiserver-84fb785f4-kl52q","openshift-cluster-machine-approver/machine-approver-754bdc9f9d-hj9bb","openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-ftxzs","openshift-dns-operator/dns-operator-589895fbb7-tvrxp","openshift-dns/dns-default-pp258","openshift-monitoring/node-exporter-lkmd7","openshift-monitoring/prometheus-operator-5ff8674d55-8fpdl","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-kf949","openshift-kube-apiserver/installer-1-master-0","openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-f2kg4","openshift-kube-controller-manager/installer-2-master-0","openshift-machine-config-operator/machine-config-controller-ff46b7bdf-c7jz8","openshift-machine-config-operator/machine-config-daemon-n5wh9","openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmtk","openshift-network-diagnostics/network-check-source-7c67b67d47-bv4x6","openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4","openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-vp2hs","openshift-kube-controller-manager/installer-3-master-0","openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw","openshift-ingress/router-default-79f8cd6fdd-hsv57","openshift-monitoring/metrics-server-5bbfd655db-2tsb8","openshift-network-operator/iptables-alerter-krpjj","openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9","openshift-machine-config-operator/machine-config-operator-fdb5c78b5-7p8w8","openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr","openshift-network-diagnostics/network-check-target-h26wj","openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8","openshift-network-node-identity/network-node-identity-48hk7","openshift-cloud-controller-manager-operator/clust
er-cloud-controller-manager-operator-7c8df9b496-btpxl","openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-j79ht","openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-56nzk","openshift-kube-apiserver/installer-3-master-0","openshift-machine-api/cluster-autoscaler-operator-69576476f7-r6rcq","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw","openshift-cluster-version/cluster-version-operator-8c9c967c7-g4bkd","openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9","openshift-etcd/etcd-master-0","openshift-multus/multus-gnmmm","openshift-service-ca/service-ca-84bfdbbb7f-4zjqp","openshift-controller-manager/controller-manager-759579d7c9-wjl25","openshift-dns/node-resolver-9t4hh","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg","openshift-etcd/installer-2-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-marketplace/certified-operators-94rll","openshift-marketplace/redhat-marketplace-66qvj","openshift-monitoring/openshift-state-metrics-74cc79fd76-bdmlf","openshift-operator-lifecycle-manager/packageserver-659d778978-djtms","openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5","openshift-machine-api/machine-api-operator-84bf6db4f9-sh67s","openshift-multus/multus-additional-cni-plugins-trlxw","openshift-multus/multus-admission-controller-7769569c45-tgbjx","openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk"] Mar 12 21:09:05.168346 master-0 kubenswrapper[31456]: I0312 21:09:05.167308 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-jffs8" Mar 12 21:09:05.173420 master-0 kubenswrapper[31456]: I0312 21:09:05.171080 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 12 21:09:05.173420 master-0 kubenswrapper[31456]: I0312 21:09:05.171135 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 12 21:09:05.173420 master-0 kubenswrapper[31456]: I0312 21:09:05.171136 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 12 21:09:05.173420 master-0 kubenswrapper[31456]: I0312 21:09:05.171164 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 12 21:09:05.173420 master-0 kubenswrapper[31456]: I0312 21:09:05.171425 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 12 21:09:05.173420 master-0 kubenswrapper[31456]: I0312 21:09:05.171445 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 12 21:09:05.173420 master-0 kubenswrapper[31456]: I0312 21:09:05.171484 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 12 21:09:05.173420 master-0 kubenswrapper[31456]: I0312 21:09:05.171500 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 12 21:09:05.173420 master-0 kubenswrapper[31456]: I0312 21:09:05.171839 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 12 21:09:05.173420 master-0 kubenswrapper[31456]: I0312 
21:09:05.172963 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 12 21:09:05.173757 master-0 kubenswrapper[31456]: I0312 21:09:05.173439 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 12 21:09:05.173757 master-0 kubenswrapper[31456]: I0312 21:09:05.173527 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 12 21:09:05.173757 master-0 kubenswrapper[31456]: I0312 21:09:05.173658 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 12 21:09:05.173757 master-0 kubenswrapper[31456]: I0312 21:09:05.173697 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 12 21:09:05.181625 master-0 kubenswrapper[31456]: I0312 21:09:05.178090 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 12 21:09:05.181625 master-0 kubenswrapper[31456]: I0312 21:09:05.179035 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 12 21:09:05.181625 master-0 kubenswrapper[31456]: I0312 21:09:05.179128 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 12 21:09:05.181625 master-0 kubenswrapper[31456]: I0312 21:09:05.179152 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 12 21:09:05.181625 master-0 kubenswrapper[31456]: I0312 21:09:05.179214 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 12 21:09:05.181625 master-0 
kubenswrapper[31456]: I0312 21:09:05.179496 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 12 21:09:05.181940 master-0 kubenswrapper[31456]: I0312 21:09:05.181754 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 12 21:09:05.185738 master-0 kubenswrapper[31456]: I0312 21:09:05.185608 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 12 21:09:05.194547 master-0 kubenswrapper[31456]: I0312 21:09:05.186370 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 12 21:09:05.194547 master-0 kubenswrapper[31456]: I0312 21:09:05.187039 31456 kubelet.go:2566] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="33cdd0bf-9c54-42b1-a5a4-7c5725708df2" Mar 12 21:09:05.194547 master-0 kubenswrapper[31456]: I0312 21:09:05.188492 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 12 21:09:05.194547 master-0 kubenswrapper[31456]: I0312 21:09:05.188568 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 12 21:09:05.204920 master-0 kubenswrapper[31456]: I0312 21:09:05.204495 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Mar 12 21:09:05.211862 master-0 kubenswrapper[31456]: I0312 21:09:05.211559 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 12 21:09:05.212075 master-0 kubenswrapper[31456]: I0312 21:09:05.212035 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Mar 12 21:09:05.214260 master-0 kubenswrapper[31456]: I0312 21:09:05.214222 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 12 21:09:05.214976 master-0 kubenswrapper[31456]: I0312 21:09:05.214941 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Mar 12 21:09:05.216072 master-0 kubenswrapper[31456]: I0312 21:09:05.216040 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Mar 12 21:09:05.216348 master-0 kubenswrapper[31456]: I0312 21:09:05.216320 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Mar 12 21:09:05.216555 master-0 kubenswrapper[31456]: I0312 21:09:05.216538 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 12 21:09:05.217883 master-0 kubenswrapper[31456]: I0312 21:09:05.217191 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 12 21:09:05.217883 master-0 kubenswrapper[31456]: I0312 21:09:05.217238 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 12 21:09:05.217883 master-0 kubenswrapper[31456]: I0312 21:09:05.217541 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 12 21:09:05.217883 master-0 kubenswrapper[31456]: I0312 21:09:05.217546 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 12 21:09:05.217883 master-0 kubenswrapper[31456]: I0312 21:09:05.217659 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 12 21:09:05.217883 master-0 kubenswrapper[31456]: I0312 21:09:05.217682 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 12 21:09:05.217883 master-0 kubenswrapper[31456]: I0312 21:09:05.217746 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 12 21:09:05.217883 master-0 kubenswrapper[31456]: I0312 21:09:05.217792 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 12 21:09:05.217883 master-0 kubenswrapper[31456]: I0312 21:09:05.217851 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Mar 12 21:09:05.217883 master-0 kubenswrapper[31456]: I0312 21:09:05.217856 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 12 21:09:05.217883 master-0 kubenswrapper[31456]: I0312 21:09:05.217798 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.217953 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.217964 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.218026 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.218058 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.218103 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.218154 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.218235 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.218276 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.218421 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.218456 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.218513 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.218592 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.218623 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.218647 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.218687 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.218709 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.218722 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.218760 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.218777 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.218788 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.218826 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.218916 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.218938 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.218951 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.218991 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.218709 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.219032 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.218281 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.218943 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.218562 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.219032 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.218567 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.219116 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.218545 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.219110 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.219162 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.219207 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.218599 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.218457 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.219315 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.219939 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.220130 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.220156 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.220160 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.220240 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.220487 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.220592 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.220657 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.220741 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.220774 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.220600 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.220870 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.220962 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.220743 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.220912 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.221051 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.221428 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 12 21:09:05.222401 master-0 kubenswrapper[31456]: I0312 21:09:05.221649 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 12 21:09:05.225721 master-0 kubenswrapper[31456]: I0312 21:09:05.224273 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 12 21:09:05.227522 master-0 kubenswrapper[31456]: I0312 21:09:05.227498 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 12 21:09:05.228712 master-0 kubenswrapper[31456]: I0312 21:09:05.228667 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 12 21:09:05.231212 master-0 kubenswrapper[31456]: I0312 21:09:05.231178 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 12 21:09:05.239874 master-0 kubenswrapper[31456]: I0312 21:09:05.239824 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 12 21:09:05.258981 master-0 kubenswrapper[31456]: I0312 21:09:05.258929 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 12 21:09:05.260286 master-0 kubenswrapper[31456]: I0312 21:09:05.260256 31456 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 12 21:09:05.260513 master-0 kubenswrapper[31456]: I0312 21:09:05.260472 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f77c8e18b751d90bc0dfe2d4e304050" path="/var/lib/kubelet/pods/5f77c8e18b751d90bc0dfe2d4e304050/volumes"
Mar 12 21:09:05.260968 master-0 kubenswrapper[31456]: I0312 21:09:05.260940 31456 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID=""
Mar 12 21:09:05.269201 master-0 kubenswrapper[31456]: I0312 21:09:05.269160 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Mar 12 21:09:05.271433 master-0 kubenswrapper[31456]: I0312 21:09:05.271396 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/784599a3-a2ac-46ac-a4b7-9439704646cc-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-56nzk\" (UID: \"784599a3-a2ac-46ac-a4b7-9439704646cc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-56nzk"
Mar 12 21:09:05.271470 master-0 kubenswrapper[31456]: I0312 21:09:05.271437 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rjm8\" (UniqueName: \"kubernetes.io/projected/426efd5c-69e1-43e5-835a-6e1c4ef85720-kube-api-access-8rjm8\") pod \"network-node-identity-48hk7\" (UID: \"426efd5c-69e1-43e5-835a-6e1c4ef85720\") " pod="openshift-network-node-identity/network-node-identity-48hk7"
Mar 12 21:09:05.271470 master-0 kubenswrapper[31456]: I0312 21:09:05.271462 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/f8467055-c9c9-4485-bb60-9a79e8b91268-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-btpxl\" (UID: \"f8467055-c9c9-4485-bb60-9a79e8b91268\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl"
Mar 12 21:09:05.271535 master-0 kubenswrapper[31456]: I0312 21:09:05.271484 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kng9\" (UniqueName: \"kubernetes.io/projected/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6-kube-api-access-2kng9\") pod \"network-operator-7c649bf6d4-62t2f\" (UID: \"fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6\") " pod="openshift-network-operator/network-operator-7c649bf6d4-62t2f"
Mar 12 21:09:05.271702 master-0 kubenswrapper[31456]: I0312 21:09:05.271671 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc-snapshots\") pod \"insights-operator-8f89dfddd-lc7jk\" (UID: \"a5d1e064-c12b-4c1d-b499-4e301ca8a8dc\") " pod="openshift-insights/insights-operator-8f89dfddd-lc7jk"
Mar 12 21:09:05.271738 master-0 kubenswrapper[31456]: I0312 21:09:05.271715 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmcxd\" (UniqueName: \"kubernetes.io/projected/36bd483b-292e-4e82-99d6-daa612cd385a-kube-api-access-fmcxd\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c"
Mar 12 21:09:05.271767 master-0 kubenswrapper[31456]: I0312 21:09:05.271745 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-kubernetes\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2"
Mar 12 21:09:05.271767 master-0 kubenswrapper[31456]: I0312 21:09:05.271758 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc-snapshots\") pod \"insights-operator-8f89dfddd-lc7jk\" (UID: \"a5d1e064-c12b-4c1d-b499-4e301ca8a8dc\") " pod="openshift-insights/insights-operator-8f89dfddd-lc7jk"
Mar 12 21:09:05.271843 master-0 kubenswrapper[31456]: I0312 21:09:05.271764 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-systemd\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2"
Mar 12 21:09:05.271843 master-0 kubenswrapper[31456]: I0312 21:09:05.271835 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-client-ca\") pod \"controller-manager-759579d7c9-wjl25\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25"
Mar 12 21:09:05.271906 master-0 kubenswrapper[31456]: I0312 21:09:05.271855 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/222b53b1-7e5c-49d5-9795-fec4d0547398-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"222b53b1-7e5c-49d5-9795-fec4d0547398\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 12 21:09:05.271906 master-0 kubenswrapper[31456]: I0312 21:09:05.271871 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-var-lib-kubelet\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm"
Mar 12 21:09:05.271906 master-0 kubenswrapper[31456]: I0312 21:09:05.271887 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkvxh\" (UniqueName: \"kubernetes.io/projected/a5d6705e-e564-4774-94b4-ef11956c67b2-kube-api-access-dkvxh\") pod \"machine-config-server-mz2sr\" (UID: \"a5d6705e-e564-4774-94b4-ef11956c67b2\") " pod="openshift-machine-config-operator/machine-config-server-mz2sr"
Mar 12 21:09:05.271906 master-0 kubenswrapper[31456]: I0312 21:09:05.271904 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-sys\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2"
Mar 12 21:09:05.272018 master-0 kubenswrapper[31456]: I0312 21:09:05.271946 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b50a6106-1112-4a4b-b4ae-933879e12110-serving-cert\") pod \"controller-manager-759579d7c9-wjl25\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25"
Mar 12 21:09:05.272052 master-0 kubenswrapper[31456]: I0312 21:09:05.272032 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7667a111-e744-47b2-9603-3864347dc738-sys\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7"
Mar 12 21:09:05.272081 master-0 kubenswrapper[31456]: I0312 21:09:05.272051 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/96bd86df-2101-47f5-844b-1332261c66f1-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-f2kg4\" (UID: \"96bd86df-2101-47f5-844b-1332261c66f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-f2kg4"
Mar 12 21:09:05.272150 master-0 kubenswrapper[31456]: I0312 21:09:05.272119 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-run-systemd\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 21:09:05.272183 master-0 kubenswrapper[31456]: I0312 21:09:05.272153 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2r2r\" (UniqueName: \"kubernetes.io/projected/617f0f9c-50d5-4214-b30f-5110fd4399ec-kube-api-access-f2r2r\") pod \"iptables-alerter-krpjj\" (UID: \"617f0f9c-50d5-4214-b30f-5110fd4399ec\") " pod="openshift-network-operator/iptables-alerter-krpjj"
Mar 12 21:09:05.272183 master-0 kubenswrapper[31456]: I0312 21:09:05.272174 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/90f16d8c-25b6-4827-85d9-0995e4e1ab15-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-dfmtk\" (UID: \"90f16d8c-25b6-4827-85d9-0995e4e1ab15\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmtk"
Mar 12 21:09:05.272238 master-0 kubenswrapper[31456]: I0312 21:09:05.272196 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5"
Mar 12 21:09:05.272238 master-0 kubenswrapper[31456]: I0312 21:09:05.272214 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rfn6\" (UniqueName: \"kubernetes.io/projected/90f0e4da-71d4-4c4e-a2fc-9ef588daaf72-kube-api-access-2rfn6\") pod \"machine-config-controller-ff46b7bdf-c7jz8\" (UID: \"90f0e4da-71d4-4c4e-a2fc-9ef588daaf72\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-c7jz8"
Mar 12 21:09:05.272238 master-0 kubenswrapper[31456]: I0312 21:09:05.272235 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a3828a1d-8180-4c7b-b423-4488f7fc0b76-metrics-certs\") pod \"router-default-79f8cd6fdd-hsv57\" (UID: \"a3828a1d-8180-4c7b-b423-4488f7fc0b76\") " pod="openshift-ingress/router-default-79f8cd6fdd-hsv57"
Mar 12 21:09:05.272336 master-0 kubenswrapper[31456]: I0312 21:09:05.272252 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/135ec6f3-fbc0-4840-a4b1-c1124c705161-signing-key\") pod \"service-ca-84bfdbbb7f-4zjqp\" (UID: \"135ec6f3-fbc0-4840-a4b1-c1124c705161\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-4zjqp"
Mar 12 21:09:05.272336 master-0 kubenswrapper[31456]: I0312 21:09:05.272270 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbbc5\" (UniqueName: \"kubernetes.io/projected/15ebfbd8-0782-431a-88a3-83af328498d2-kube-api-access-mbbc5\") pod \"openshift-apiserver-operator-799b6db4d7-jwthf\" (UID: \"15ebfbd8-0782-431a-88a3-83af328498d2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-jwthf"
Mar 12 21:09:05.272336 master-0 kubenswrapper[31456]: I0312 21:09:05.272286 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/52839a08-0871-44d3-9d22-b2f6b4383b99-tmp\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2"
Mar 12 21:09:05.272336 master-0 kubenswrapper[31456]: I0312 21:09:05.272330 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/05fd1378-3935-4caf-96c5-17cf7e29417f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-j79ht\" (UID: \"05fd1378-3935-4caf-96c5-17cf7e29417f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-j79ht"
Mar 12 21:09:05.272508 master-0 kubenswrapper[31456]: I0312 21:09:05.272478 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/52839a08-0871-44d3-9d22-b2f6b4383b99-tmp\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2"
Mar 12 21:09:05.272585 master-0 kubenswrapper[31456]: I0312 21:09:05.272557 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/135ec6f3-fbc0-4840-a4b1-c1124c705161-signing-key\") pod \"service-ca-84bfdbbb7f-4zjqp\" (UID: \"135ec6f3-fbc0-4840-a4b1-c1124c705161\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-4zjqp"
Mar 12 21:09:05.272704 master-0 kubenswrapper[31456]: I0312 21:09:05.272671 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-577p4\" (UniqueName: \"kubernetes.io/projected/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d-kube-api-access-577p4\") pod \"service-ca-operator-69b6fc6b88-f62j6\" (UID: \"a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-f62j6"
Mar 12 21:09:05.272748 master-0 kubenswrapper[31456]: I0312 21:09:05.272704 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9txs\" (UniqueName: \"kubernetes.io/projected/d9152bd6-f203-469b-97fa-db274e43b40c-kube-api-access-q9txs\") pod \"machine-config-daemon-n5wh9\" (UID: \"d9152bd6-f203-469b-97fa-db274e43b40c\") " pod="openshift-machine-config-operator/machine-config-daemon-n5wh9"
Mar 12 21:09:05.272748 master-0 kubenswrapper[31456]: I0312 21:09:05.272723 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/17d2bb40-74e2-4894-a884-7018952bdf71-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-fnxjc\" (UID: \"17d2bb40-74e2-4894-a884-7018952bdf71\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc"
Mar 12 21:09:05.272748 master-0 kubenswrapper[31456]: I0312 21:09:05.272740 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/226cb3a1-984f-4410-96e6-c007131dc074-operand-assets\") pod \"cluster-olm-operator-77899cf6d-kbwlh\" (UID: \"226cb3a1-984f-4410-96e6-c007131dc074\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh"
Mar 12 21:09:05.272855 master-0 kubenswrapper[31456]: I0312 21:09:05.272756 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2bmh\" (UniqueName: \"kubernetes.io/projected/31747c5d-7e29-4a74-b8d5-3d8efa5e900b-kube-api-access-l2bmh\") pod \"dns-default-pp258\" (UID: \"31747c5d-7e29-4a74-b8d5-3d8efa5e900b\") " pod="openshift-dns/dns-default-pp258"
Mar 12 21:09:05.272855 master-0 kubenswrapper[31456]: I0312 21:09:05.272773 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-269gt\" (UID: \"4a67ecf3-823d-4948-a5cb-8bd1eb9f259c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-269gt"
Mar 12 21:09:05.272855 master-0 kubenswrapper[31456]: I0312 21:09:05.272801 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hvwg\" (UniqueName: \"kubernetes.io/projected/ed1c4da2-564b-4354-a4ec-27b801098aa5-kube-api-access-2hvwg\") pod \"openshift-state-metrics-74cc79fd76-bdmlf\" (UID: \"ed1c4da2-564b-4354-a4ec-27b801098aa5\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-bdmlf"
Mar 12 21:09:05.272855 master-0 kubenswrapper[31456]: I0312 21:09:05.272843 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/067fdca7-c61d-470c-8421-73e0b62df3e4-apiservice-cert\") pod \"packageserver-659d778978-djtms\" (UID: \"067fdca7-c61d-470c-8421-73e0b62df3e4\") " pod="openshift-operator-lifecycle-manager/packageserver-659d778978-djtms"
Mar 12 21:09:05.272969 master-0 kubenswrapper[31456]: I0312 21:09:05.272858 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d850d441-7505-4e81-b4cf-6e7a9911ae35-config\") pod \"route-controller-manager-8467b998d8-l9fvg\" (UID: \"d850d441-7505-4e81-b4cf-6e7a9911ae35\") " pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg"
Mar 12 21:09:05.272969 master-0 kubenswrapper[31456]: I0312 21:09:05.272874 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzwrw\" (UniqueName: \"kubernetes.io/projected/54184647-6e9a-43f7-90b1-5d8815f8b1ab-kube-api-access-kzwrw\") pod \"package-server-manager-854648ff6d-cdcc8\" (UID: \"54184647-6e9a-43f7-90b1-5d8815f8b1ab\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8"
Mar 12 21:09:05.272969 master-0 kubenswrapper[31456]: I0312 21:09:05.272892 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/8b96dd10-18a0-49f8-b488-63fc2b23da39-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-hdd4n\" (UID: \"8b96dd10-18a0-49f8-b488-63fc2b23da39\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n"
Mar 12 21:09:05.272969 master-0 kubenswrapper[31456]: I0312 21:09:05.272909 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-system-cni-dir\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm"
Mar 12 21:09:05.272969 master-0 kubenswrapper[31456]: I0312 21:09:05.272925 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b71376ea-e248-48fc-b2c4-1de7236ddd31-cert\") pod \"cluster-autoscaler-operator-69576476f7-r6rcq\" (UID: \"b71376ea-e248-48fc-b2c4-1de7236ddd31\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-r6rcq"
Mar 12 21:09:05.272969 master-0 kubenswrapper[31456]: I0312 21:09:05.272942 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/508cb83e-6f25-4235-8c56-b25b762ebcad-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-7p8w8\" (UID: \"508cb83e-6f25-4235-8c56-b25b762ebcad\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-7p8w8"
Mar 12 21:09:05.272969 master-0 kubenswrapper[31456]: I0312 21:09:05.272958 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c3daeefa-7842-464c-a6c9-01b44ebea477-env-overrides\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 21:09:05.273166 master-0 kubenswrapper[31456]: I0312 21:09:05.272977 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ed1c4da2-564b-4354-a4ec-27b801098aa5-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-bdmlf\" (UID: \"ed1c4da2-564b-4354-a4ec-27b801098aa5\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-bdmlf"
Mar 12 21:09:05.273166 master-0 kubenswrapper[31456]: I0312 21:09:05.272996 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7rrv\" (UniqueName: \"kubernetes.io/projected/5471994f-769e-4124-b7d0-01f5358fc18f-kube-api-access-f7rrv\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9"
Mar 12 21:09:05.273166 master-0 kubenswrapper[31456]: I0312 21:09:05.273019 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhhdz\" (UniqueName: \"kubernetes.io/projected/8b96dd10-18a0-49f8-b488-63fc2b23da39-kube-api-access-nhhdz\") pod \"operator-controller-controller-manager-6598bfb6c4-hdd4n\" (UID: \"8b96dd10-18a0-49f8-b488-63fc2b23da39\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n"
Mar 12 21:09:05.273166 master-0 kubenswrapper[31456]: I0312 21:09:05.273040 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a5d6705e-e564-4774-94b4-ef11956c67b2-node-bootstrap-token\") pod \"machine-config-server-mz2sr\" (UID: \"a5d6705e-e564-4774-94b4-ef11956c67b2\") " pod="openshift-machine-config-operator/machine-config-server-mz2sr"
Mar 12 21:09:05.273166 master-0 kubenswrapper[31456]: I0312 21:09:05.273064 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-run-ovn-kubernetes\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 21:09:05.273166 master-0 kubenswrapper[31456]: I0312 21:09:05.273081 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqhhz\" (UniqueName: \"kubernetes.io/projected/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-kube-api-access-qqhhz\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q"
Mar 12 21:09:05.273166 master-0 kubenswrapper[31456]: I0312 21:09:05.273098 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzn6t\" (UniqueName: \"kubernetes.io/projected/567a9a33-1a82-4c48-b541-7e0eaae11f57-kube-api-access-nzn6t\") pod \"community-operators-jblsg\" (UID: \"567a9a33-1a82-4c48-b541-7e0eaae11f57\") " pod="openshift-marketplace/community-operators-jblsg"
Mar 12 21:09:05.273166 master-0 kubenswrapper[31456]: I0312 21:09:05.273114 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfsvw\" (UniqueName: \"kubernetes.io/projected/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-kube-api-access-mfsvw\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm"
Mar 12 21:09:05.273166 master-0 kubenswrapper[31456]: I0312 21:09:05.273130 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 21:09:05.273166 master-0 kubenswrapper[31456]: I0312 21:09:05.273147 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/900228dd-2d21-4759-87da-b027b0134ad8-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5"
Mar 12 21:09:05.273166 master-0 kubenswrapper[31456]: I0312 21:09:05.273163 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-whereabouts-configmap\") pod
\"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 21:09:05.273530 master-0 kubenswrapper[31456]: I0312 21:09:05.273179 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q78vj\" (UniqueName: \"kubernetes.io/projected/7623a5c6-47a9-4b75-bda8-c0a2d7c67272-kube-api-access-q78vj\") pod \"openshift-controller-manager-operator-8565d84698-vp2hs\" (UID: \"7623a5c6-47a9-4b75-bda8-c0a2d7c67272\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-vp2hs" Mar 12 21:09:05.273953 master-0 kubenswrapper[31456]: I0312 21:09:05.273919 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-whereabouts-configmap\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 21:09:05.274081 master-0 kubenswrapper[31456]: I0312 21:09:05.274044 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/226cb3a1-984f-4410-96e6-c007131dc074-operand-assets\") pod \"cluster-olm-operator-77899cf6d-kbwlh\" (UID: \"226cb3a1-984f-4410-96e6-c007131dc074\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh" Mar 12 21:09:05.274132 master-0 kubenswrapper[31456]: I0312 21:09:05.274114 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f8467055-c9c9-4485-bb60-9a79e8b91268-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-btpxl\" (UID: \"f8467055-c9c9-4485-bb60-9a79e8b91268\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" Mar 12 
21:09:05.274169 master-0 kubenswrapper[31456]: I0312 21:09:05.274137 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c3daeefa-7842-464c-a6c9-01b44ebea477-env-overrides\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.274169 master-0 kubenswrapper[31456]: I0312 21:09:05.274140 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-lib-modules\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 21:09:05.274228 master-0 kubenswrapper[31456]: I0312 21:09:05.274191 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xth7s\" (UniqueName: \"kubernetes.io/projected/a539e1c7-3799-4d43-8f2f-d5e5c0ffd918-kube-api-access-xth7s\") pod \"ingress-canary-67vs7\" (UID: \"a539e1c7-3799-4d43-8f2f-d5e5c0ffd918\") " pod="openshift-ingress-canary/ingress-canary-67vs7" Mar 12 21:09:05.274260 master-0 kubenswrapper[31456]: I0312 21:09:05.274243 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5471994f-769e-4124-b7d0-01f5358fc18f-etcd-client\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" Mar 12 21:09:05.274325 master-0 kubenswrapper[31456]: I0312 21:09:05.274274 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/8b96dd10-18a0-49f8-b488-63fc2b23da39-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-hdd4n\" (UID: 
\"8b96dd10-18a0-49f8-b488-63fc2b23da39\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" Mar 12 21:09:05.274325 master-0 kubenswrapper[31456]: I0312 21:09:05.274304 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-cni-binary-copy\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.274412 master-0 kubenswrapper[31456]: I0312 21:09:05.274370 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b71376ea-e248-48fc-b2c4-1de7236ddd31-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-r6rcq\" (UID: \"b71376ea-e248-48fc-b2c4-1de7236ddd31\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-r6rcq" Mar 12 21:09:05.274412 master-0 kubenswrapper[31456]: I0312 21:09:05.274396 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-config\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 21:09:05.274499 master-0 kubenswrapper[31456]: I0312 21:09:05.274441 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6eace9f-a52d-4570-a932-959538e1f2bc-catalog-content\") pod \"redhat-marketplace-66qvj\" (UID: \"d6eace9f-a52d-4570-a932-959538e1f2bc\") " pod="openshift-marketplace/redhat-marketplace-66qvj" Mar 12 21:09:05.274499 master-0 kubenswrapper[31456]: I0312 21:09:05.274468 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/5471994f-769e-4124-b7d0-01f5358fc18f-etcd-client\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" Mar 12 21:09:05.274585 master-0 kubenswrapper[31456]: I0312 21:09:05.274556 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-node-log\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.274649 master-0 kubenswrapper[31456]: I0312 21:09:05.274603 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-cni-binary-copy\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.274649 master-0 kubenswrapper[31456]: I0312 21:09:05.274583 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/cf33c432-db42-4c6d-8ee4-f089e5bf8203-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-zgjqw\" (UID: \"cf33c432-db42-4c6d-8ee4-f089e5bf8203\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 21:09:05.274782 master-0 kubenswrapper[31456]: I0312 21:09:05.274652 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ed1c4da2-564b-4354-a4ec-27b801098aa5-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-bdmlf\" (UID: \"ed1c4da2-564b-4354-a4ec-27b801098aa5\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-bdmlf" Mar 12 21:09:05.274782 master-0 kubenswrapper[31456]: I0312 21:09:05.274682 31456 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hlt7h\" (UniqueName: \"kubernetes.io/projected/52839a08-0871-44d3-9d22-b2f6b4383b99-kube-api-access-hlt7h\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 21:09:05.274782 master-0 kubenswrapper[31456]: I0312 21:09:05.274727 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-var-lib-cni-bin\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.276692 master-0 kubenswrapper[31456]: I0312 21:09:05.276658 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" Mar 12 21:09:05.276752 master-0 kubenswrapper[31456]: I0312 21:09:05.276694 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9z6l\" (UniqueName: \"kubernetes.io/projected/226cb3a1-984f-4410-96e6-c007131dc074-kube-api-access-b9z6l\") pod \"cluster-olm-operator-77899cf6d-kbwlh\" (UID: \"226cb3a1-984f-4410-96e6-c007131dc074\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh" Mar 12 21:09:05.276752 master-0 kubenswrapper[31456]: I0312 21:09:05.276714 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-kubelet\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.276752 master-0 kubenswrapper[31456]: I0312 21:09:05.276730 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/617f0f9c-50d5-4214-b30f-5110fd4399ec-host-slash\") pod \"iptables-alerter-krpjj\" (UID: \"617f0f9c-50d5-4214-b30f-5110fd4399ec\") " pod="openshift-network-operator/iptables-alerter-krpjj" Mar 12 21:09:05.276862 master-0 kubenswrapper[31456]: I0312 21:09:05.276777 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-tuned\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 21:09:05.276862 master-0 kubenswrapper[31456]: I0312 21:09:05.276797 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/426efd5c-69e1-43e5-835a-6e1c4ef85720-env-overrides\") pod \"network-node-identity-48hk7\" (UID: \"426efd5c-69e1-43e5-835a-6e1c4ef85720\") " pod="openshift-network-node-identity/network-node-identity-48hk7" Mar 12 21:09:05.276862 master-0 kubenswrapper[31456]: I0312 21:09:05.276831 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a3828a1d-8180-4c7b-b423-4488f7fc0b76-default-certificate\") pod \"router-default-79f8cd6fdd-hsv57\" (UID: \"a3828a1d-8180-4c7b-b423-4488f7fc0b76\") " pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" Mar 12 21:09:05.276862 master-0 kubenswrapper[31456]: I0312 21:09:05.276851 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrm2z\" (UniqueName: \"kubernetes.io/projected/17d2bb40-74e2-4894-a884-7018952bdf71-kube-api-access-lrm2z\") pod 
\"cluster-baremetal-operator-5cdb4c5598-fnxjc\" (UID: \"17d2bb40-74e2-4894-a884-7018952bdf71\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc" Mar 12 21:09:05.276976 master-0 kubenswrapper[31456]: I0312 21:09:05.276868 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc-service-ca-bundle\") pod \"insights-operator-8f89dfddd-lc7jk\" (UID: \"a5d1e064-c12b-4c1d-b499-4e301ca8a8dc\") " pod="openshift-insights/insights-operator-8f89dfddd-lc7jk" Mar 12 21:09:05.276976 master-0 kubenswrapper[31456]: I0312 21:09:05.276886 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbnbs\" (UniqueName: \"kubernetes.io/projected/32050f14-1939-41bf-a824-22016b90c189-kube-api-access-pbnbs\") pod \"cluster-samples-operator-664cb58b85-wjpf9\" (UID: \"32050f14-1939-41bf-a824-22016b90c189\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wjpf9" Mar 12 21:09:05.277087 master-0 kubenswrapper[31456]: I0312 21:09:05.277028 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6eace9f-a52d-4570-a932-959538e1f2bc-catalog-content\") pod \"redhat-marketplace-66qvj\" (UID: \"d6eace9f-a52d-4570-a932-959538e1f2bc\") " pod="openshift-marketplace/redhat-marketplace-66qvj" Mar 12 21:09:05.277199 master-0 kubenswrapper[31456]: I0312 21:09:05.277173 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/426efd5c-69e1-43e5-835a-6e1c4ef85720-env-overrides\") pod \"network-node-identity-48hk7\" (UID: \"426efd5c-69e1-43e5-835a-6e1c4ef85720\") " pod="openshift-network-node-identity/network-node-identity-48hk7" Mar 12 21:09:05.277287 master-0 kubenswrapper[31456]: I0312 21:09:05.277263 31456 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-tuned\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 21:09:05.277345 master-0 kubenswrapper[31456]: I0312 21:09:05.277315 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" Mar 12 21:09:05.277393 master-0 kubenswrapper[31456]: I0312 21:09:05.277375 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-cnibin\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 21:09:05.277424 master-0 kubenswrapper[31456]: I0312 21:09:05.277402 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/222b53b1-7e5c-49d5-9795-fec4d0547398-var-lock\") pod \"installer-3-master-0\" (UID: \"222b53b1-7e5c-49d5-9795-fec4d0547398\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 21:09:05.277452 master-0 kubenswrapper[31456]: I0312 21:09:05.277422 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-log-socket\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.277452 master-0 
kubenswrapper[31456]: I0312 21:09:05.277446 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31747c5d-7e29-4a74-b8d5-3d8efa5e900b-config-volume\") pod \"dns-default-pp258\" (UID: \"31747c5d-7e29-4a74-b8d5-3d8efa5e900b\") " pod="openshift-dns/dns-default-pp258" Mar 12 21:09:05.277515 master-0 kubenswrapper[31456]: I0312 21:09:05.277466 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/7667a111-e744-47b2-9603-3864347dc738-node-exporter-wtmp\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7" Mar 12 21:09:05.277515 master-0 kubenswrapper[31456]: I0312 21:09:05.277500 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3bebf49-1d92-4353-b84c-91ed86b7bb94-config\") pod \"authentication-operator-7c6989d6c4-9j7rx\" (UID: \"a3bebf49-1d92-4353-b84c-91ed86b7bb94\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" Mar 12 21:09:05.277719 master-0 kubenswrapper[31456]: I0312 21:09:05.277692 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3bebf49-1d92-4353-b84c-91ed86b7bb94-config\") pod \"authentication-operator-7c6989d6c4-9j7rx\" (UID: \"a3bebf49-1d92-4353-b84c-91ed86b7bb94\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" Mar 12 21:09:05.277762 master-0 kubenswrapper[31456]: I0312 21:09:05.277748 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3828a1d-8180-4c7b-b423-4488f7fc0b76-service-ca-bundle\") pod \"router-default-79f8cd6fdd-hsv57\" (UID: 
\"a3828a1d-8180-4c7b-b423-4488f7fc0b76\") " pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" Mar 12 21:09:05.277797 master-0 kubenswrapper[31456]: I0312 21:09:05.277782 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/cf33c432-db42-4c6d-8ee4-f089e5bf8203-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-zgjqw\" (UID: \"cf33c432-db42-4c6d-8ee4-f089e5bf8203\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 21:09:05.277852 master-0 kubenswrapper[31456]: I0312 21:09:05.277799 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/617f0f9c-50d5-4214-b30f-5110fd4399ec-iptables-alerter-script\") pod \"iptables-alerter-krpjj\" (UID: \"617f0f9c-50d5-4214-b30f-5110fd4399ec\") " pod="openshift-network-operator/iptables-alerter-krpjj" Mar 12 21:09:05.277886 master-0 kubenswrapper[31456]: I0312 21:09:05.277865 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-audit\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 21:09:05.277915 master-0 kubenswrapper[31456]: I0312 21:09:05.277884 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f3afe47-c537-420c-b5be-1cad612e119d-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-ftxzs\" (UID: \"7f3afe47-c537-420c-b5be-1cad612e119d\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-ftxzs" Mar 12 21:09:05.277915 master-0 kubenswrapper[31456]: I0312 21:09:05.277902 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-mfspc\" (UniqueName: \"kubernetes.io/projected/d4a162d4-8086-4bcf-854d-7e6cd37fd4c7-kube-api-access-mfspc\") pod \"csi-snapshot-controller-7577d6f48-8fk8w\" (UID: \"d4a162d4-8086-4bcf-854d-7e6cd37fd4c7\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w" Mar 12 21:09:05.278171 master-0 kubenswrapper[31456]: I0312 21:09:05.277920 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/7667a111-e744-47b2-9603-3864347dc738-root\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7" Mar 12 21:09:05.278228 master-0 kubenswrapper[31456]: I0312 21:09:05.278176 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8b96dd10-18a0-49f8-b488-63fc2b23da39-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-hdd4n\" (UID: \"8b96dd10-18a0-49f8-b488-63fc2b23da39\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" Mar 12 21:09:05.278228 master-0 kubenswrapper[31456]: I0312 21:09:05.278197 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-4tfmr\" (UID: \"4ebc9ee1-3913-4112-bb3f-c79f2c08032b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr" Mar 12 21:09:05.278228 master-0 kubenswrapper[31456]: I0312 21:09:05.278212 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/617f0f9c-50d5-4214-b30f-5110fd4399ec-iptables-alerter-script\") pod \"iptables-alerter-krpjj\" (UID: \"617f0f9c-50d5-4214-b30f-5110fd4399ec\") " 
pod="openshift-network-operator/iptables-alerter-krpjj" Mar 12 21:09:05.278228 master-0 kubenswrapper[31456]: I0312 21:09:05.278091 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/cf33c432-db42-4c6d-8ee4-f089e5bf8203-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-zgjqw\" (UID: \"cf33c432-db42-4c6d-8ee4-f089e5bf8203\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 21:09:05.278370 master-0 kubenswrapper[31456]: I0312 21:09:05.278216 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-audit-dir\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 21:09:05.278370 master-0 kubenswrapper[31456]: I0312 21:09:05.278254 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/067fdca7-c61d-470c-8421-73e0b62df3e4-webhook-cert\") pod \"packageserver-659d778978-djtms\" (UID: \"067fdca7-c61d-470c-8421-73e0b62df3e4\") " pod="openshift-operator-lifecycle-manager/packageserver-659d778978-djtms" Mar 12 21:09:05.278370 master-0 kubenswrapper[31456]: I0312 21:09:05.278294 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3bebf49-1d92-4353-b84c-91ed86b7bb94-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-9j7rx\" (UID: \"a3bebf49-1d92-4353-b84c-91ed86b7bb94\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" Mar 12 21:09:05.278370 master-0 kubenswrapper[31456]: I0312 21:09:05.278312 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/5471994f-769e-4124-b7d0-01f5358fc18f-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" Mar 12 21:09:05.278370 master-0 kubenswrapper[31456]: I0312 21:09:05.278331 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access\") pod \"installer-3-master-0\" (UID: \"222b53b1-7e5c-49d5-9795-fec4d0547398\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 21:09:05.278370 master-0 kubenswrapper[31456]: I0312 21:09:05.278354 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-multus-daemon-config\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.278370 master-0 kubenswrapper[31456]: I0312 21:09:05.278374 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-4tfmr\" (UID: \"4ebc9ee1-3913-4112-bb3f-c79f2c08032b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr" Mar 12 21:09:05.278750 master-0 kubenswrapper[31456]: I0312 21:09:05.278395 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c3daeefa-7842-464c-a6c9-01b44ebea477-ovnkube-config\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.278750 master-0 kubenswrapper[31456]: I0312 21:09:05.278415 
31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/32050f14-1939-41bf-a824-22016b90c189-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-wjpf9\" (UID: \"32050f14-1939-41bf-a824-22016b90c189\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wjpf9" Mar 12 21:09:05.278750 master-0 kubenswrapper[31456]: I0312 21:09:05.278445 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp84p\" (UniqueName: \"kubernetes.io/projected/7667a111-e744-47b2-9603-3864347dc738-kube-api-access-mp84p\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7" Mar 12 21:09:05.278750 master-0 kubenswrapper[31456]: I0312 21:09:05.278446 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3bebf49-1d92-4353-b84c-91ed86b7bb94-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-9j7rx\" (UID: \"a3bebf49-1d92-4353-b84c-91ed86b7bb94\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" Mar 12 21:09:05.278750 master-0 kubenswrapper[31456]: I0312 21:09:05.278579 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-multus-daemon-config\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.278750 master-0 kubenswrapper[31456]: I0312 21:09:05.278606 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17d2bb40-74e2-4894-a884-7018952bdf71-config\") pod \"cluster-baremetal-operator-5cdb4c5598-fnxjc\" (UID: \"17d2bb40-74e2-4894-a884-7018952bdf71\") " 
pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc" Mar 12 21:09:05.278750 master-0 kubenswrapper[31456]: I0312 21:09:05.278623 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-node-pullsecrets\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 21:09:05.278750 master-0 kubenswrapper[31456]: I0312 21:09:05.278641 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-269gt\" (UID: \"4a67ecf3-823d-4948-a5cb-8bd1eb9f259c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-269gt" Mar 12 21:09:05.278750 master-0 kubenswrapper[31456]: I0312 21:09:05.278642 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c3daeefa-7842-464c-a6c9-01b44ebea477-ovnkube-config\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.278750 master-0 kubenswrapper[31456]: I0312 21:09:05.278657 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg2ph\" (UniqueName: \"kubernetes.io/projected/da40e787-dd75-4f4f-b09e-a8dab590f260-kube-api-access-xg2ph\") pod \"migrator-57ccdf9b5-jd4pv\" (UID: \"da40e787-dd75-4f4f-b09e-a8dab590f260\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-jd4pv" Mar 12 21:09:05.278750 master-0 kubenswrapper[31456]: I0312 21:09:05.278674 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-trusted-ca-bundle\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 21:09:05.278750 master-0 kubenswrapper[31456]: I0312 21:09:05.278691 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/36bd483b-292e-4e82-99d6-daa612cd385a-etcd-serving-ca\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 21:09:05.278750 master-0 kubenswrapper[31456]: I0312 21:09:05.278709 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-var-lib-kubelet\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 21:09:05.279578 master-0 kubenswrapper[31456]: I0312 21:09:05.279056 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-host\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 21:09:05.279578 master-0 kubenswrapper[31456]: I0312 21:09:05.279079 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-8fpdl\" (UID: \"ea339fe1-c013-4c4b-90c9-aaaa7eb40d99\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-8fpdl" Mar 12 21:09:05.279578 master-0 kubenswrapper[31456]: I0312 21:09:05.279095 31456 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tm7d5\" (UniqueName: \"kubernetes.io/projected/067fdca7-c61d-470c-8421-73e0b62df3e4-kube-api-access-tm7d5\") pod \"packageserver-659d778978-djtms\" (UID: \"067fdca7-c61d-470c-8421-73e0b62df3e4\") " pod="openshift-operator-lifecycle-manager/packageserver-659d778978-djtms" Mar 12 21:09:05.279578 master-0 kubenswrapper[31456]: I0312 21:09:05.279111 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2mk7\" (UniqueName: \"kubernetes.io/projected/d850d441-7505-4e81-b4cf-6e7a9911ae35-kube-api-access-f2mk7\") pod \"route-controller-manager-8467b998d8-l9fvg\" (UID: \"d850d441-7505-4e81-b4cf-6e7a9911ae35\") " pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg" Mar 12 21:09:05.279578 master-0 kubenswrapper[31456]: I0312 21:09:05.279131 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf28c\" (UniqueName: \"kubernetes.io/projected/a3828a1d-8180-4c7b-b423-4488f7fc0b76-kube-api-access-lf28c\") pod \"router-default-79f8cd6fdd-hsv57\" (UID: \"a3828a1d-8180-4c7b-b423-4488f7fc0b76\") " pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" Mar 12 21:09:05.279578 master-0 kubenswrapper[31456]: I0312 21:09:05.279155 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67e68ff0-f54d-4973-bbe7-ed43ce542bc0-config\") pod \"machine-api-operator-84bf6db4f9-sh67s\" (UID: \"67e68ff0-f54d-4973-bbe7-ed43ce542bc0\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-sh67s" Mar 12 21:09:05.279578 master-0 kubenswrapper[31456]: I0312 21:09:05.279172 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6-metrics-tls\") pod \"network-operator-7c649bf6d4-62t2f\" (UID: 
\"fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6\") " pod="openshift-network-operator/network-operator-7c649bf6d4-62t2f" Mar 12 21:09:05.279578 master-0 kubenswrapper[31456]: I0312 21:09:05.279187 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-client-ca-bundle\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 21:09:05.279578 master-0 kubenswrapper[31456]: I0312 21:09:05.279204 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-cni-binary-copy\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 21:09:05.279578 master-0 kubenswrapper[31456]: I0312 21:09:05.279220 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/567a9a33-1a82-4c48-b541-7e0eaae11f57-catalog-content\") pod \"community-operators-jblsg\" (UID: \"567a9a33-1a82-4c48-b541-7e0eaae11f57\") " pod="openshift-marketplace/community-operators-jblsg" Mar 12 21:09:05.279578 master-0 kubenswrapper[31456]: I0312 21:09:05.279235 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3bebf49-1d92-4353-b84c-91ed86b7bb94-serving-cert\") pod \"authentication-operator-7c6989d6c4-9j7rx\" (UID: \"a3bebf49-1d92-4353-b84c-91ed86b7bb94\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" Mar 12 21:09:05.279944 master-0 kubenswrapper[31456]: I0312 21:09:05.278874 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/5471994f-769e-4124-b7d0-01f5358fc18f-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" Mar 12 21:09:05.279944 master-0 kubenswrapper[31456]: I0312 21:09:05.279024 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-269gt\" (UID: \"4a67ecf3-823d-4948-a5cb-8bd1eb9f259c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-269gt" Mar 12 21:09:05.280109 master-0 kubenswrapper[31456]: I0312 21:09:05.280089 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6-metrics-tls\") pod \"network-operator-7c649bf6d4-62t2f\" (UID: \"fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6\") " pod="openshift-network-operator/network-operator-7c649bf6d4-62t2f" Mar 12 21:09:05.280176 master-0 kubenswrapper[31456]: I0312 21:09:05.280160 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/567a9a33-1a82-4c48-b541-7e0eaae11f57-catalog-content\") pod \"community-operators-jblsg\" (UID: \"567a9a33-1a82-4c48-b541-7e0eaae11f57\") " pod="openshift-marketplace/community-operators-jblsg" Mar 12 21:09:05.280319 master-0 kubenswrapper[31456]: I0312 21:09:05.280301 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-cni-binary-copy\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 21:09:05.280496 master-0 kubenswrapper[31456]: I0312 21:09:05.280479 31456 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3bebf49-1d92-4353-b84c-91ed86b7bb94-serving-cert\") pod \"authentication-operator-7c6989d6c4-9j7rx\" (UID: \"a3bebf49-1d92-4353-b84c-91ed86b7bb94\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" Mar 12 21:09:05.280533 master-0 kubenswrapper[31456]: I0312 21:09:05.280513 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-hxqgw\" (UID: \"e624e623-6d59-444d-b548-165fa5fd2581\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" Mar 12 21:09:05.280609 master-0 kubenswrapper[31456]: I0312 21:09:05.280536 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/83368183-0368-44b1-9387-eed32b211988-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-g4bkd\" (UID: \"83368183-0368-44b1-9387-eed32b211988\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-g4bkd" Mar 12 21:09:05.280609 master-0 kubenswrapper[31456]: I0312 21:09:05.280569 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07542516-49c8-4e20-9b97-798fbff850a5-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-qfbrj\" (UID: \"07542516-49c8-4e20-9b97-798fbff850a5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-qfbrj" Mar 12 21:09:05.280609 master-0 kubenswrapper[31456]: I0312 21:09:05.280586 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: 
\"kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-secret-metrics-server-tls\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 21:09:05.280609 master-0 kubenswrapper[31456]: I0312 21:09:05.280602 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-etcd-client\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 21:09:05.280992 master-0 kubenswrapper[31456]: I0312 21:09:05.280619 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vvf6\" (UniqueName: \"kubernetes.io/projected/2b71f537-1cc2-4645-8e50-23941635457c-kube-api-access-8vvf6\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" Mar 12 21:09:05.280992 master-0 kubenswrapper[31456]: I0312 21:09:05.280638 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-sysctl-conf\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 21:09:05.280992 master-0 kubenswrapper[31456]: I0312 21:09:05.280669 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d850d441-7505-4e81-b4cf-6e7a9911ae35-serving-cert\") pod \"route-controller-manager-8467b998d8-l9fvg\" (UID: \"d850d441-7505-4e81-b4cf-6e7a9911ae35\") " pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg" Mar 12 21:09:05.280992 master-0 kubenswrapper[31456]: 
I0312 21:09:05.280689 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/67e68ff0-f54d-4973-bbe7-ed43ce542bc0-images\") pod \"machine-api-operator-84bf6db4f9-sh67s\" (UID: \"67e68ff0-f54d-4973-bbe7-ed43ce542bc0\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-sh67s" Mar 12 21:09:05.280992 master-0 kubenswrapper[31456]: I0312 21:09:05.280706 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-image-import-ca\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 21:09:05.280992 master-0 kubenswrapper[31456]: I0312 21:09:05.280725 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/36bd483b-292e-4e82-99d6-daa612cd385a-etcd-client\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 21:09:05.280992 master-0 kubenswrapper[31456]: I0312 21:09:05.280740 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/784599a3-a2ac-46ac-a4b7-9439704646cc-serving-cert\") pod \"kube-apiserver-operator-68bd585b-56nzk\" (UID: \"784599a3-a2ac-46ac-a4b7-9439704646cc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-56nzk" Mar 12 21:09:05.280992 master-0 kubenswrapper[31456]: I0312 21:09:05.280757 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/90f0e4da-71d4-4c4e-a2fc-9ef588daaf72-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-c7jz8\" (UID: 
\"90f0e4da-71d4-4c4e-a2fc-9ef588daaf72\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-c7jz8" Mar 12 21:09:05.280992 master-0 kubenswrapper[31456]: I0312 21:09:05.280777 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-4tfmr\" (UID: \"4ebc9ee1-3913-4112-bb3f-c79f2c08032b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr" Mar 12 21:09:05.280992 master-0 kubenswrapper[31456]: I0312 21:09:05.280800 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/508cb83e-6f25-4235-8c56-b25b762ebcad-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-7p8w8\" (UID: \"508cb83e-6f25-4235-8c56-b25b762ebcad\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-7p8w8" Mar 12 21:09:05.280992 master-0 kubenswrapper[31456]: I0312 21:09:05.280842 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/980191fe-c62c-4b9e-879c-38fa8ce0a58b-available-featuregates\") pod \"openshift-config-operator-64488f9d78-zsd76\" (UID: \"980191fe-c62c-4b9e-879c-38fa8ce0a58b\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" Mar 12 21:09:05.280992 master-0 kubenswrapper[31456]: I0312 21:09:05.280863 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6eace9f-a52d-4570-a932-959538e1f2bc-utilities\") pod \"redhat-marketplace-66qvj\" (UID: \"d6eace9f-a52d-4570-a932-959538e1f2bc\") " pod="openshift-marketplace/redhat-marketplace-66qvj" Mar 12 21:09:05.280992 master-0 kubenswrapper[31456]: I0312 21:09:05.280879 31456 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7bk7q\" (UniqueName: \"kubernetes.io/projected/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-kube-api-access-7bk7q\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 21:09:05.280992 master-0 kubenswrapper[31456]: I0312 21:09:05.280895 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8b96dd10-18a0-49f8-b488-63fc2b23da39-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-hdd4n\" (UID: \"8b96dd10-18a0-49f8-b488-63fc2b23da39\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" Mar 12 21:09:05.280992 master-0 kubenswrapper[31456]: I0312 21:09:05.280911 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/67e68ff0-f54d-4973-bbe7-ed43ce542bc0-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-sh67s\" (UID: \"67e68ff0-f54d-4973-bbe7-ed43ce542bc0\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-sh67s" Mar 12 21:09:05.280992 master-0 kubenswrapper[31456]: I0312 21:09:05.280928 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-os-release\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.280992 master-0 kubenswrapper[31456]: I0312 21:09:05.280945 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-run-multus-certs\") pod \"multus-gnmmm\" (UID: 
\"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.280992 master-0 kubenswrapper[31456]: I0312 21:09:05.280962 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" Mar 12 21:09:05.280992 master-0 kubenswrapper[31456]: I0312 21:09:05.280980 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp4mt\" (UniqueName: \"kubernetes.io/projected/f8467055-c9c9-4485-bb60-9a79e8b91268-kube-api-access-gp4mt\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-btpxl\" (UID: \"f8467055-c9c9-4485-bb60-9a79e8b91268\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" Mar 12 21:09:05.280992 master-0 kubenswrapper[31456]: I0312 21:09:05.280998 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a3828a1d-8180-4c7b-b423-4488f7fc0b76-stats-auth\") pod \"router-default-79f8cd6fdd-hsv57\" (UID: \"a3828a1d-8180-4c7b-b423-4488f7fc0b76\") " pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281015 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96bd86df-2101-47f5-844b-1332261c66f1-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-f2kg4\" (UID: \"96bd86df-2101-47f5-844b-1332261c66f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-f2kg4" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281033 
31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281051 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d850d441-7505-4e81-b4cf-6e7a9911ae35-client-ca\") pod \"route-controller-manager-8467b998d8-l9fvg\" (UID: \"d850d441-7505-4e81-b4cf-6e7a9911ae35\") " pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281066 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-multus-socket-dir-parent\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281082 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhcsd\" (UniqueName: \"kubernetes.io/projected/07330030-487d-4fa6-b5c3-67607355bbba-kube-api-access-bhcsd\") pod \"olm-operator-d64cfc9db-q9hnk\" (UID: \"07330030-487d-4fa6-b5c3-67607355bbba\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281099 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d862a346-ec4d-46f6-a3e2-ea8759ea0111-env-overrides\") pod 
\"ovnkube-control-plane-66b55d57d-vq95t\" (UID: \"d862a346-ec4d-46f6-a3e2-ea8759ea0111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281115 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/7667a111-e744-47b2-9603-3864347dc738-node-exporter-textfile\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281132 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-8fpdl\" (UID: \"ea339fe1-c013-4c4b-90c9-aaaa7eb40d99\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-8fpdl" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281150 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c589179-0df4-4fe8-bfdd-965c3e7652c5-catalog-content\") pod \"certified-operators-94rll\" (UID: \"4c589179-0df4-4fe8-bfdd-965c3e7652c5\") " pod="openshift-marketplace/certified-operators-94rll" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281167 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsprq\" (UniqueName: \"kubernetes.io/projected/135ec6f3-fbc0-4840-a4b1-c1124c705161-kube-api-access-wsprq\") pod \"service-ca-84bfdbbb7f-4zjqp\" (UID: \"135ec6f3-fbc0-4840-a4b1-c1124c705161\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-4zjqp" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281182 31456 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-run-k8s-cni-cncf-io\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281198 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/cf33c432-db42-4c6d-8ee4-f089e5bf8203-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-zgjqw\" (UID: \"cf33c432-db42-4c6d-8ee4-f089e5bf8203\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281218 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/400a13b5-c489-4beb-af33-94e635b86148-config\") pod \"machine-approver-754bdc9f9d-hj9bb\" (UID: \"400a13b5-c489-4beb-af33-94e635b86148\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-hj9bb" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281235 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clmjl\" (UniqueName: \"kubernetes.io/projected/33beea0b-f77b-4388-a9c8-5710f084f961-kube-api-access-clmjl\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281252 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-multus-cni-dir\") pod \"multus-gnmmm\" (UID: 
\"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281268 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwqbt\" (UniqueName: \"kubernetes.io/projected/cc7b96ab-01af-442a-8eda-fc59e665a367-kube-api-access-vwqbt\") pod \"network-check-source-7c67b67d47-bv4x6\" (UID: \"cc7b96ab-01af-442a-8eda-fc59e665a367\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-bv4x6" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281284 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/83368183-0368-44b1-9387-eed32b211988-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-g4bkd\" (UID: \"83368183-0368-44b1-9387-eed32b211988\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-g4bkd" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281301 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/400a13b5-c489-4beb-af33-94e635b86148-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-hj9bb\" (UID: \"400a13b5-c489-4beb-af33-94e635b86148\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-hj9bb" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281318 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wt5q\" (UniqueName: \"kubernetes.io/projected/980191fe-c62c-4b9e-879c-38fa8ce0a58b-kube-api-access-2wt5q\") pod \"openshift-config-operator-64488f9d78-zsd76\" (UID: \"980191fe-c62c-4b9e-879c-38fa8ce0a58b\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281337 31456 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-proxy-ca-bundles\") pod \"controller-manager-759579d7c9-wjl25\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281352 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d9152bd6-f203-469b-97fa-db274e43b40c-rootfs\") pod \"machine-config-daemon-n5wh9\" (UID: \"d9152bd6-f203-469b-97fa-db274e43b40c\") " pod="openshift-machine-config-operator/machine-config-daemon-n5wh9" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281369 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlrzs\" (UniqueName: \"kubernetes.io/projected/b71376ea-e248-48fc-b2c4-1de7236ddd31-kube-api-access-nlrzs\") pod \"cluster-autoscaler-operator-69576476f7-r6rcq\" (UID: \"b71376ea-e248-48fc-b2c4-1de7236ddd31\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-r6rcq" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281385 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs\") pod \"network-metrics-daemon-brdcd\" (UID: \"c8660437-633f-4132-8a61-fe998abb493e\") " pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281401 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-etc-kubernetes\") pod \"multus-gnmmm\" (UID: 
\"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281419 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/f8467055-c9c9-4485-bb60-9a79e8b91268-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-btpxl\" (UID: \"f8467055-c9c9-4485-bb60-9a79e8b91268\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281437 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrk7w\" (UniqueName: \"kubernetes.io/projected/c3daeefa-7842-464c-a6c9-01b44ebea477-kube-api-access-jrk7w\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281453 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/cf33c432-db42-4c6d-8ee4-f089e5bf8203-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-zgjqw\" (UID: \"cf33c432-db42-4c6d-8ee4-f089e5bf8203\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281470 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/400a13b5-c489-4beb-af33-94e635b86148-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-hj9bb\" (UID: \"400a13b5-c489-4beb-af33-94e635b86148\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-hj9bb" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: 
I0312 21:09:05.281487 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-sysconfig\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281502 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c3daeefa-7842-464c-a6c9-01b44ebea477-ovn-node-metrics-cert\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281519 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7229c42-b6bc-4ea9-946c-71a4117f53e9-utilities\") pod \"redhat-operators-gxjmz\" (UID: \"b7229c42-b6bc-4ea9-946c-71a4117f53e9\") " pod="openshift-marketplace/redhat-operators-gxjmz" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281535 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5471994f-769e-4124-b7d0-01f5358fc18f-etcd-ca\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281567 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lltk\" (UniqueName: \"kubernetes.io/projected/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-kube-api-access-2lltk\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281587 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gg7v\" (UniqueName: \"kubernetes.io/projected/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-kube-api-access-7gg7v\") pod \"kube-state-metrics-68b88f8cb5-4tfmr\" (UID: \"4ebc9ee1-3913-4112-bb3f-c79f2c08032b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281604 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/cf33c432-db42-4c6d-8ee4-f089e5bf8203-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-zgjqw\" (UID: \"cf33c432-db42-4c6d-8ee4-f089e5bf8203\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281620 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/784599a3-a2ac-46ac-a4b7-9439704646cc-config\") pod \"kube-apiserver-operator-68bd585b-56nzk\" (UID: \"784599a3-a2ac-46ac-a4b7-9439704646cc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-56nzk" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281639 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/90f0e4da-71d4-4c4e-a2fc-9ef588daaf72-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-c7jz8\" (UID: \"90f0e4da-71d4-4c4e-a2fc-9ef588daaf72\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-c7jz8" Mar 12 21:09:05.281613 master-0 kubenswrapper[31456]: I0312 21:09:05.281656 31456 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a539e1c7-3799-4d43-8f2f-d5e5c0ffd918-cert\") pod \"ingress-canary-67vs7\" (UID: \"a539e1c7-3799-4d43-8f2f-d5e5c0ffd918\") " pod="openshift-ingress-canary/ingress-canary-67vs7" Mar 12 21:09:05.283843 master-0 kubenswrapper[31456]: I0312 21:09:05.281671 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/426efd5c-69e1-43e5-835a-6e1c4ef85720-webhook-cert\") pod \"network-node-identity-48hk7\" (UID: \"426efd5c-69e1-43e5-835a-6e1c4ef85720\") " pod="openshift-network-node-identity/network-node-identity-48hk7" Mar 12 21:09:05.283843 master-0 kubenswrapper[31456]: I0312 21:09:05.281689 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls\") pod \"dns-operator-589895fbb7-tvrxp\" (UID: \"855747e5-d9b4-4eef-8bc4-425d6a8e95c7\") " pod="openshift-dns-operator/dns-operator-589895fbb7-tvrxp" Mar 12 21:09:05.283843 master-0 kubenswrapper[31456]: I0312 21:09:05.281704 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-cnibin\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.283843 master-0 kubenswrapper[31456]: I0312 21:09:05.281721 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/83368183-0368-44b1-9387-eed32b211988-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-g4bkd\" (UID: \"83368183-0368-44b1-9387-eed32b211988\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-g4bkd" Mar 12 21:09:05.283843 master-0 kubenswrapper[31456]: I0312 21:09:05.281737 31456 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-run\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 21:09:05.283843 master-0 kubenswrapper[31456]: I0312 21:09:05.281827 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c589179-0df4-4fe8-bfdd-965c3e7652c5-catalog-content\") pod \"certified-operators-94rll\" (UID: \"4c589179-0df4-4fe8-bfdd-965c3e7652c5\") " pod="openshift-marketplace/certified-operators-94rll" Mar 12 21:09:05.283843 master-0 kubenswrapper[31456]: I0312 21:09:05.282154 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-hxqgw\" (UID: \"e624e623-6d59-444d-b548-165fa5fd2581\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" Mar 12 21:09:05.283843 master-0 kubenswrapper[31456]: I0312 21:09:05.282246 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-4tfmr\" (UID: \"4ebc9ee1-3913-4112-bb3f-c79f2c08032b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr" Mar 12 21:09:05.283843 master-0 kubenswrapper[31456]: I0312 21:09:05.282249 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8b96dd10-18a0-49f8-b488-63fc2b23da39-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-hdd4n\" (UID: \"8b96dd10-18a0-49f8-b488-63fc2b23da39\") " 
pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" Mar 12 21:09:05.283843 master-0 kubenswrapper[31456]: I0312 21:09:05.282371 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/980191fe-c62c-4b9e-879c-38fa8ce0a58b-available-featuregates\") pod \"openshift-config-operator-64488f9d78-zsd76\" (UID: \"980191fe-c62c-4b9e-879c-38fa8ce0a58b\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" Mar 12 21:09:05.283843 master-0 kubenswrapper[31456]: I0312 21:09:05.282506 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" Mar 12 21:09:05.283843 master-0 kubenswrapper[31456]: I0312 21:09:05.282688 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8660437-633f-4132-8a61-fe998abb493e-metrics-certs\") pod \"network-metrics-daemon-brdcd\" (UID: \"c8660437-633f-4132-8a61-fe998abb493e\") " pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 21:09:05.283843 master-0 kubenswrapper[31456]: I0312 21:09:05.282913 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" Mar 12 21:09:05.283843 master-0 kubenswrapper[31456]: I0312 21:09:05.282954 31456 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c3daeefa-7842-464c-a6c9-01b44ebea477-ovn-node-metrics-cert\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.283843 master-0 kubenswrapper[31456]: I0312 21:09:05.283006 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7229c42-b6bc-4ea9-946c-71a4117f53e9-utilities\") pod \"redhat-operators-gxjmz\" (UID: \"b7229c42-b6bc-4ea9-946c-71a4117f53e9\") " pod="openshift-marketplace/redhat-operators-gxjmz" Mar 12 21:09:05.283843 master-0 kubenswrapper[31456]: I0312 21:09:05.283076 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6eace9f-a52d-4570-a932-959538e1f2bc-utilities\") pod \"redhat-marketplace-66qvj\" (UID: \"d6eace9f-a52d-4570-a932-959538e1f2bc\") " pod="openshift-marketplace/redhat-marketplace-66qvj" Mar 12 21:09:05.283843 master-0 kubenswrapper[31456]: I0312 21:09:05.283230 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07542516-49c8-4e20-9b97-798fbff850a5-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-qfbrj\" (UID: \"07542516-49c8-4e20-9b97-798fbff850a5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-qfbrj" Mar 12 21:09:05.283843 master-0 kubenswrapper[31456]: I0312 21:09:05.283260 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5471994f-769e-4124-b7d0-01f5358fc18f-etcd-ca\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" Mar 12 21:09:05.283843 master-0 kubenswrapper[31456]: 
I0312 21:09:05.283456 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d862a346-ec4d-46f6-a3e2-ea8759ea0111-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-vq95t\" (UID: \"d862a346-ec4d-46f6-a3e2-ea8759ea0111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t" Mar 12 21:09:05.283843 master-0 kubenswrapper[31456]: I0312 21:09:05.283517 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/7667a111-e744-47b2-9603-3864347dc738-node-exporter-textfile\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7" Mar 12 21:09:05.283843 master-0 kubenswrapper[31456]: I0312 21:09:05.283520 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/784599a3-a2ac-46ac-a4b7-9439704646cc-serving-cert\") pod \"kube-apiserver-operator-68bd585b-56nzk\" (UID: \"784599a3-a2ac-46ac-a4b7-9439704646cc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-56nzk" Mar 12 21:09:05.283843 master-0 kubenswrapper[31456]: I0312 21:09:05.283556 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7667a111-e744-47b2-9603-3864347dc738-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7" Mar 12 21:09:05.283843 master-0 kubenswrapper[31456]: I0312 21:09:05.283623 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-kube-state-metrics-custom-resource-state-configmap\") pod 
\"kube-state-metrics-68b88f8cb5-4tfmr\" (UID: \"4ebc9ee1-3913-4112-bb3f-c79f2c08032b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr" Mar 12 21:09:05.283843 master-0 kubenswrapper[31456]: I0312 21:09:05.283729 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/426efd5c-69e1-43e5-835a-6e1c4ef85720-webhook-cert\") pod \"network-node-identity-48hk7\" (UID: \"426efd5c-69e1-43e5-835a-6e1c4ef85720\") " pod="openshift-network-node-identity/network-node-identity-48hk7" Mar 12 21:09:05.283843 master-0 kubenswrapper[31456]: I0312 21:09:05.283795 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5c6t\" (UniqueName: \"kubernetes.io/projected/e624e623-6d59-444d-b548-165fa5fd2581-kube-api-access-c5c6t\") pod \"marketplace-operator-64bf9778cb-hxqgw\" (UID: \"e624e623-6d59-444d-b548-165fa5fd2581\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" Mar 12 21:09:05.283843 master-0 kubenswrapper[31456]: I0312 21:09:05.283834 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96bd86df-2101-47f5-844b-1332261c66f1-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-f2kg4\" (UID: \"96bd86df-2101-47f5-844b-1332261c66f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-f2kg4" Mar 12 21:09:05.283843 master-0 kubenswrapper[31456]: I0312 21:09:05.283855 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07542516-49c8-4e20-9b97-798fbff850a5-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-qfbrj\" (UID: \"07542516-49c8-4e20-9b97-798fbff850a5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-qfbrj" Mar 12 21:09:05.283843 master-0 
kubenswrapper[31456]: I0312 21:09:05.283873 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.283899 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d9152bd6-f203-469b-97fa-db274e43b40c-mcd-auth-proxy-config\") pod \"machine-config-daemon-n5wh9\" (UID: \"d9152bd6-f203-469b-97fa-db274e43b40c\") " pod="openshift-machine-config-operator/machine-config-daemon-n5wh9" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.283917 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b8aa8296-ed9b-4b37-8ab4-791b1342140f-webhook-certs\") pod \"multus-admission-controller-7769569c45-tgbjx\" (UID: \"b8aa8296-ed9b-4b37-8ab4-791b1342140f\") " pod="openshift-multus/multus-admission-controller-7769569c45-tgbjx" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.283935 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j7lq\" (UniqueName: \"kubernetes.io/projected/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-kube-api-access-6j7lq\") pod \"dns-operator-589895fbb7-tvrxp\" (UID: \"855747e5-d9b4-4eef-8bc4-425d6a8e95c7\") " pod="openshift-dns-operator/dns-operator-589895fbb7-tvrxp" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.283953 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8hp5\" (UniqueName: \"kubernetes.io/projected/cf33c432-db42-4c6d-8ee4-f089e5bf8203-kube-api-access-x8hp5\") 
pod \"catalogd-controller-manager-7f8b8b6f4c-zgjqw\" (UID: \"cf33c432-db42-4c6d-8ee4-f089e5bf8203\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.283972 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33beea0b-f77b-4388-a9c8-5710f084f961-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.283991 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/02649264-040a-41a6-9a41-8bf6416c68ff-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-j9tpt\" (UID: \"02649264-040a-41a6-9a41-8bf6416c68ff\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.284012 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-os-release\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.284029 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-var-lib-openvswitch\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.284621 master-0 
kubenswrapper[31456]: I0312 21:09:05.284049 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d862a346-ec4d-46f6-a3e2-ea8759ea0111-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-vq95t\" (UID: \"d862a346-ec4d-46f6-a3e2-ea8759ea0111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.284067 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c589179-0df4-4fe8-bfdd-965c3e7652c5-utilities\") pod \"certified-operators-94rll\" (UID: \"4c589179-0df4-4fe8-bfdd-965c3e7652c5\") " pod="openshift-marketplace/certified-operators-94rll" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.284084 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/31747c5d-7e29-4a74-b8d5-3d8efa5e900b-metrics-tls\") pod \"dns-default-pp258\" (UID: \"31747c5d-7e29-4a74-b8d5-3d8efa5e900b\") " pod="openshift-dns/dns-default-pp258" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.284103 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-serving-cert\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.284125 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2b71f537-1cc2-4645-8e50-23941635457c-bound-sa-token\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " 
pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.284142 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36bd483b-292e-4e82-99d6-daa612cd385a-serving-cert\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.284160 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/7667a111-e744-47b2-9603-3864347dc738-node-exporter-tls\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.284202 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83368183-0368-44b1-9387-eed32b211988-service-ca\") pod \"cluster-version-operator-8c9c967c7-g4bkd\" (UID: \"83368183-0368-44b1-9387-eed32b211988\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-g4bkd" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.284220 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-encryption-config\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.284240 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05fd1378-3935-4caf-96c5-17cf7e29417f-cco-trusted-ca\") pod 
\"cloud-credential-operator-55d85b7b47-j79ht\" (UID: \"05fd1378-3935-4caf-96c5-17cf7e29417f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-j79ht" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.284259 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-258hz\" (UniqueName: \"kubernetes.io/projected/98d99166-c42a-4169-87e8-4209570aec50-kube-api-access-258hz\") pod \"catalog-operator-7d9c49f57b-tpvl4\" (UID: \"98d99166-c42a-4169-87e8-4209570aec50\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.284279 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-hxqgw\" (UID: \"e624e623-6d59-444d-b548-165fa5fd2581\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.284300 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-var-lib-cni-multus\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.284320 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7229c42-b6bc-4ea9-946c-71a4117f53e9-catalog-content\") pod \"redhat-operators-gxjmz\" (UID: \"b7229c42-b6bc-4ea9-946c-71a4117f53e9\") " pod="openshift-marketplace/redhat-operators-gxjmz" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.284337 31456 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/33beea0b-f77b-4388-a9c8-5710f084f961-audit-log\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.284354 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/067fdca7-c61d-470c-8421-73e0b62df3e4-tmpfs\") pod \"packageserver-659d778978-djtms\" (UID: \"067fdca7-c61d-470c-8421-73e0b62df3e4\") " pod="openshift-operator-lifecycle-manager/packageserver-659d778978-djtms" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.284371 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d862a346-ec4d-46f6-a3e2-ea8759ea0111-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-vq95t\" (UID: \"d862a346-ec4d-46f6-a3e2-ea8759ea0111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.284390 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/567a9a33-1a82-4c48-b541-7e0eaae11f57-utilities\") pod \"community-operators-jblsg\" (UID: \"567a9a33-1a82-4c48-b541-7e0eaae11f57\") " pod="openshift-marketplace/community-operators-jblsg" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.284407 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-config\") pod \"controller-manager-759579d7c9-wjl25\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" 
Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.284446 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbqfz\" (UniqueName: \"kubernetes.io/projected/4c589179-0df4-4fe8-bfdd-965c3e7652c5-kube-api-access-pbqfz\") pod \"certified-operators-94rll\" (UID: \"4c589179-0df4-4fe8-bfdd-965c3e7652c5\") " pod="openshift-marketplace/certified-operators-94rll" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.284466 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt627\" (UniqueName: \"kubernetes.io/projected/400a13b5-c489-4beb-af33-94e635b86148-kube-api-access-vt627\") pod \"machine-approver-754bdc9f9d-hj9bb\" (UID: \"400a13b5-c489-4beb-af33-94e635b86148\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-hj9bb" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.284487 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/980191fe-c62c-4b9e-879c-38fa8ce0a58b-serving-cert\") pod \"openshift-config-operator-64488f9d78-zsd76\" (UID: \"980191fe-c62c-4b9e-879c-38fa8ce0a58b\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.284505 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.284524 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpf99\" (UniqueName: 
\"kubernetes.io/projected/67e68ff0-f54d-4973-bbe7-ed43ce542bc0-kube-api-access-tpf99\") pod \"machine-api-operator-84bf6db4f9-sh67s\" (UID: \"67e68ff0-f54d-4973-bbe7-ed43ce542bc0\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-sh67s" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.284541 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-hostroot\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.284561 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15ebfbd8-0782-431a-88a3-83af328498d2-config\") pod \"openshift-apiserver-operator-799b6db4d7-jwthf\" (UID: \"15ebfbd8-0782-431a-88a3-83af328498d2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-jwthf" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.284579 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-cni-bin\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.284595 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-sysctl-d\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.284614 31456 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc-serving-cert\") pod \"insights-operator-8f89dfddd-lc7jk\" (UID: \"a5d1e064-c12b-4c1d-b499-4e301ca8a8dc\") " pod="openshift-insights/insights-operator-8f89dfddd-lc7jk" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.284633 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-269gt\" (UID: \"4a67ecf3-823d-4948-a5cb-8bd1eb9f259c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-269gt" Mar 12 21:09:05.284621 master-0 kubenswrapper[31456]: I0312 21:09:05.284651 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/33beea0b-f77b-4388-a9c8-5710f084f961-metrics-server-audit-profiles\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.284695 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-8fpdl\" (UID: \"ea339fe1-c013-4c4b-90c9-aaaa7eb40d99\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-8fpdl" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.284715 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlch7\" (UniqueName: \"kubernetes.io/projected/c8660437-633f-4132-8a61-fe998abb493e-kube-api-access-zlch7\") pod 
\"network-metrics-daemon-brdcd\" (UID: \"c8660437-633f-4132-8a61-fe998abb493e\") " pod="openshift-multus/network-metrics-daemon-brdcd" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.284734 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/226cb3a1-984f-4410-96e6-c007131dc074-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-kbwlh\" (UID: \"226cb3a1-984f-4410-96e6-c007131dc074\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.284752 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.284770 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d-serving-cert\") pod \"service-ca-operator-69b6fc6b88-f62j6\" (UID: \"a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-f62j6" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.284787 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-modprobe-d\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.284819 31456 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4l2sm\" (UniqueName: \"kubernetes.io/projected/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99-kube-api-access-4l2sm\") pod \"prometheus-operator-5ff8674d55-8fpdl\" (UID: \"ea339fe1-c013-4c4b-90c9-aaaa7eb40d99\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-8fpdl" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.284838 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/426efd5c-69e1-43e5-835a-6e1c4ef85720-ovnkube-identity-cm\") pod \"network-node-identity-48hk7\" (UID: \"426efd5c-69e1-43e5-835a-6e1c4ef85720\") " pod="openshift-network-node-identity/network-node-identity-48hk7" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.284856 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-run-ovn\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.284875 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l8qp\" (UniqueName: \"kubernetes.io/projected/d6eace9f-a52d-4570-a932-959538e1f2bc-kube-api-access-8l8qp\") pod \"redhat-marketplace-66qvj\" (UID: \"d6eace9f-a52d-4570-a932-959538e1f2bc\") " pod="openshift-marketplace/redhat-marketplace-66qvj" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.284893 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/25866ff2-ce38-4bc0-83d9-0d85b8c6b0ce-hosts-file\") pod \"node-resolver-9t4hh\" (UID: \"25866ff2-ce38-4bc0-83d9-0d85b8c6b0ce\") " pod="openshift-dns/node-resolver-9t4hh" Mar 12 
21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.284914 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8745n\" (UniqueName: \"kubernetes.io/projected/7f3afe47-c537-420c-b5be-1cad612e119d-kube-api-access-8745n\") pod \"cluster-storage-operator-6fbfc8dc8f-ftxzs\" (UID: \"7f3afe47-c537-420c-b5be-1cad612e119d\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-ftxzs" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.284931 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/36bd483b-292e-4e82-99d6-daa612cd385a-audit-policies\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.284951 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clp9l\" (UniqueName: \"kubernetes.io/projected/2604b035-853c-42b7-a562-07d46178868a-kube-api-access-clp9l\") pod \"csi-snapshot-controller-operator-5685fbc7d-kf949\" (UID: \"2604b035-853c-42b7-a562-07d46178868a\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-kf949" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.284969 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xxkr\" (UniqueName: \"kubernetes.io/projected/05fd1378-3935-4caf-96c5-17cf7e29417f-kube-api-access-8xxkr\") pod \"cloud-credential-operator-55d85b7b47-j79ht\" (UID: \"05fd1378-3935-4caf-96c5-17cf7e29417f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-j79ht" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.284987 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7667a111-e744-47b2-9603-3864347dc738-metrics-client-ca\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.285005 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-multus-conf-dir\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.285024 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e03d34d0-f7c1-4dcf-8b84-89ad647cc10f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-xzwfp\" (UID: \"e03d34d0-f7c1-4dcf-8b84-89ad647cc10f\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-xzwfp" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.285046 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4jzt\" (UniqueName: \"kubernetes.io/projected/508cb83e-6f25-4235-8c56-b25b762ebcad-kube-api-access-s4jzt\") pod \"machine-config-operator-fdb5c78b5-7p8w8\" (UID: \"508cb83e-6f25-4235-8c56-b25b762ebcad\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-7p8w8" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.285064 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-slash\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.285081 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-etc-openvswitch\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.285100 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83368183-0368-44b1-9387-eed32b211988-serving-cert\") pod \"cluster-version-operator-8c9c967c7-g4bkd\" (UID: \"83368183-0368-44b1-9387-eed32b211988\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-g4bkd" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.285117 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ed1c4da2-564b-4354-a4ec-27b801098aa5-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-bdmlf\" (UID: \"ed1c4da2-564b-4354-a4ec-27b801098aa5\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-bdmlf" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.285136 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-secret-metrics-client-certs\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.285152 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/2b71f537-1cc2-4645-8e50-23941635457c-trusted-ca\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.285170 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/36bd483b-292e-4e82-99d6-daa612cd385a-encryption-config\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.285191 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/17d2bb40-74e2-4894-a884-7018952bdf71-images\") pod \"cluster-baremetal-operator-5cdb4c5598-fnxjc\" (UID: \"17d2bb40-74e2-4894-a884-7018952bdf71\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.285208 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-run-netns\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.285226 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15ebfbd8-0782-431a-88a3-83af328498d2-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-jwthf\" (UID: \"15ebfbd8-0782-431a-88a3-83af328498d2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-jwthf" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 
21:09:05.285243 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a5d6705e-e564-4774-94b4-ef11956c67b2-certs\") pod \"machine-config-server-mz2sr\" (UID: \"a5d6705e-e564-4774-94b4-ef11956c67b2\") " pod="openshift-machine-config-operator/machine-config-server-mz2sr" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.285262 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ddw4\" (UniqueName: \"kubernetes.io/projected/e03d34d0-f7c1-4dcf-8b84-89ad647cc10f-kube-api-access-8ddw4\") pod \"control-plane-machine-set-operator-6686554ddc-xzwfp\" (UID: \"e03d34d0-f7c1-4dcf-8b84-89ad647cc10f\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-xzwfp" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.285279 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6-host-etc-kube\") pod \"network-operator-7c649bf6d4-62t2f\" (UID: \"fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6\") " pod="openshift-network-operator/network-operator-7c649bf6d4-62t2f" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.285296 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-cni-netd\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.285315 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-lc7jk\" 
(UID: \"a5d1e064-c12b-4c1d-b499-4e301ca8a8dc\") " pod="openshift-insights/insights-operator-8f89dfddd-lc7jk" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.285335 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-cdcc8\" (UID: \"54184647-6e9a-43f7-90b1-5d8815f8b1ab\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.285353 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/135ec6f3-fbc0-4840-a4b1-c1124c705161-signing-cabundle\") pod \"service-ca-84bfdbbb7f-4zjqp\" (UID: \"135ec6f3-fbc0-4840-a4b1-c1124c705161\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-4zjqp" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.285372 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9xld\" (UniqueName: \"kubernetes.io/projected/07542516-49c8-4e20-9b97-798fbff850a5-kube-api-access-z9xld\") pod \"kube-storage-version-migrator-operator-7f65c457f5-qfbrj\" (UID: \"07542516-49c8-4e20-9b97-798fbff850a5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-qfbrj" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.285393 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36bd483b-292e-4e82-99d6-daa612cd385a-trusted-ca-bundle\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 21:09:05.289905 master-0 
kubenswrapper[31456]: I0312 21:09:05.285410 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcmzz\" (UniqueName: \"kubernetes.io/projected/25866ff2-ce38-4bc0-83d9-0d85b8c6b0ce-kube-api-access-vcmzz\") pod \"node-resolver-9t4hh\" (UID: \"25866ff2-ce38-4bc0-83d9-0d85b8c6b0ce\") " pod="openshift-dns/node-resolver-9t4hh" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.285428 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcjsq\" (UniqueName: \"kubernetes.io/projected/b50a6106-1112-4a4b-b4ae-933879e12110-kube-api-access-bcjsq\") pod \"controller-manager-759579d7c9-wjl25\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.285445 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-run-netns\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.285464 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n555w\" (UniqueName: \"kubernetes.io/projected/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc-kube-api-access-n555w\") pod \"insights-operator-8f89dfddd-lc7jk\" (UID: \"a5d1e064-c12b-4c1d-b499-4e301ca8a8dc\") " pod="openshift-insights/insights-operator-8f89dfddd-lc7jk" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.285483 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-etcd-serving-ca\") pod \"apiserver-84fb785f4-kl52q\" (UID: 
\"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.285500 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/36bd483b-292e-4e82-99d6-daa612cd385a-audit-dir\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.285522 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvkp7\" (UniqueName: \"kubernetes.io/projected/900228dd-2d21-4759-87da-b027b0134ad8-kube-api-access-rvkp7\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.285541 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7623a5c6-47a9-4b75-bda8-c0a2d7c67272-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-vp2hs\" (UID: \"7623a5c6-47a9-4b75-bda8-c0a2d7c67272\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-vp2hs" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.285560 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7623a5c6-47a9-4b75-bda8-c0a2d7c67272-config\") pod \"openshift-controller-manager-operator-8565d84698-vp2hs\" (UID: \"7623a5c6-47a9-4b75-bda8-c0a2d7c67272\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-vp2hs" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 
21:09:05.285580 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f8467055-c9c9-4485-bb60-9a79e8b91268-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-btpxl\" (UID: \"f8467055-c9c9-4485-bb60-9a79e8b91268\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.285598 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/508cb83e-6f25-4235-8c56-b25b762ebcad-images\") pod \"machine-config-operator-fdb5c78b5-7p8w8\" (UID: \"508cb83e-6f25-4235-8c56-b25b762ebcad\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-7p8w8" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.285615 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c3daeefa-7842-464c-a6c9-01b44ebea477-ovnkube-script-lib\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.285637 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx5m2\" (UniqueName: \"kubernetes.io/projected/b7229c42-b6bc-4ea9-946c-71a4117f53e9-kube-api-access-xx5m2\") pod \"redhat-operators-gxjmz\" (UID: \"b7229c42-b6bc-4ea9-946c-71a4117f53e9\") " pod="openshift-marketplace/redhat-operators-gxjmz" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.285658 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3bebf49-1d92-4353-b84c-91ed86b7bb94-trusted-ca-bundle\") pod 
\"authentication-operator-7c6989d6c4-9j7rx\" (UID: \"a3bebf49-1d92-4353-b84c-91ed86b7bb94\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.286275 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d-serving-cert\") pod \"service-ca-operator-69b6fc6b88-f62j6\" (UID: \"a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-f62j6" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.286333 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3bebf49-1d92-4353-b84c-91ed86b7bb94-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-9j7rx\" (UID: \"a3bebf49-1d92-4353-b84c-91ed86b7bb94\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.286523 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07542516-49c8-4e20-9b97-798fbff850a5-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-qfbrj\" (UID: \"07542516-49c8-4e20-9b97-798fbff850a5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-qfbrj" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.286528 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/426efd5c-69e1-43e5-835a-6e1c4ef85720-ovnkube-identity-cm\") pod \"network-node-identity-48hk7\" (UID: \"426efd5c-69e1-43e5-835a-6e1c4ef85720\") " pod="openshift-network-node-identity/network-node-identity-48hk7" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: 
I0312 21:09:05.286560 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d9152bd6-f203-469b-97fa-db274e43b40c-proxy-tls\") pod \"machine-config-daemon-n5wh9\" (UID: \"d9152bd6-f203-469b-97fa-db274e43b40c\") " pod="openshift-machine-config-operator/machine-config-daemon-n5wh9" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.286581 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/17d2bb40-74e2-4894-a884-7018952bdf71-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-fnxjc\" (UID: \"17d2bb40-74e2-4894-a884-7018952bdf71\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.286600 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-4tfmr\" (UID: \"4ebc9ee1-3913-4112-bb3f-c79f2c08032b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.286620 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-systemd-units\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.286640 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert\") pod \"catalog-operator-7d9c49f57b-tpvl4\" (UID: \"98d99166-c42a-4169-87e8-4209570aec50\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.286671 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5471994f-769e-4124-b7d0-01f5358fc18f-config\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.286678 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96bd86df-2101-47f5-844b-1332261c66f1-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-f2kg4\" (UID: \"96bd86df-2101-47f5-844b-1332261c66f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-f2kg4" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.286691 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert\") pod \"olm-operator-d64cfc9db-q9hnk\" (UID: \"07330030-487d-4fa6-b5c3-67607355bbba\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.286713 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w68c\" (UniqueName: \"kubernetes.io/projected/a3bebf49-1d92-4353-b84c-91ed86b7bb94-kube-api-access-2w68c\") pod \"authentication-operator-7c6989d6c4-9j7rx\" (UID: \"a3bebf49-1d92-4353-b84c-91ed86b7bb94\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.286732 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-nbcts\" (UniqueName: \"kubernetes.io/projected/b8aa8296-ed9b-4b37-8ab4-791b1342140f-kube-api-access-nbcts\") pod \"multus-admission-controller-7769569c45-tgbjx\" (UID: \"b8aa8296-ed9b-4b37-8ab4-791b1342140f\") " pod="openshift-multus/multus-admission-controller-7769569c45-tgbjx" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.286751 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-run-openvswitch\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.286769 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-j9tpt\" (UID: \"02649264-040a-41a6-9a41-8bf6416c68ff\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.286790 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx64q\" (UniqueName: \"kubernetes.io/projected/d862a346-ec4d-46f6-a3e2-ea8759ea0111-kube-api-access-jx64q\") pod \"ovnkube-control-plane-66b55d57d-vq95t\" (UID: \"d862a346-ec4d-46f6-a3e2-ea8759ea0111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.286819 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-trlxw\" (UID: 
\"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.286824 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-system-cni-dir\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.286846 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csxwl\" (UniqueName: \"kubernetes.io/projected/5ad63582-bd60-41a1-9622-ee73ccf8a5e8-kube-api-access-csxwl\") pod \"network-check-target-h26wj\" (UID: \"5ad63582-bd60-41a1-9622-ee73ccf8a5e8\") " pod="openshift-network-diagnostics/network-check-target-h26wj" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.286867 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d-config\") pod \"service-ca-operator-69b6fc6b88-f62j6\" (UID: \"a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-f62j6" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.286888 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5v9f\" (UniqueName: \"kubernetes.io/projected/02649264-040a-41a6-9a41-8bf6416c68ff-kube-api-access-k5v9f\") pod \"cluster-monitoring-operator-674cbfbd9d-j9tpt\" (UID: \"02649264-040a-41a6-9a41-8bf6416c68ff\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.286910 31456 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/900228dd-2d21-4759-87da-b027b0134ad8-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.286928 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5471994f-769e-4124-b7d0-01f5358fc18f-serving-cert\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.286973 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-metrics-tls\") pod \"dns-operator-589895fbb7-tvrxp\" (UID: \"855747e5-d9b4-4eef-8bc4-425d6a8e95c7\") " pod="openshift-dns-operator/dns-operator-589895fbb7-tvrxp" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.287139 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/567a9a33-1a82-4c48-b541-7e0eaae11f57-utilities\") pod \"community-operators-jblsg\" (UID: \"567a9a33-1a82-4c48-b541-7e0eaae11f57\") " pod="openshift-marketplace/community-operators-jblsg" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.287217 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d862a346-ec4d-46f6-a3e2-ea8759ea0111-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-vq95t\" (UID: \"d862a346-ec4d-46f6-a3e2-ea8759ea0111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t" Mar 12 21:09:05.289905 
master-0 kubenswrapper[31456]: I0312 21:09:05.287292 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/54184647-6e9a-43f7-90b1-5d8815f8b1ab-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-cdcc8\" (UID: \"54184647-6e9a-43f7-90b1-5d8815f8b1ab\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.287522 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/135ec6f3-fbc0-4840-a4b1-c1124c705161-signing-cabundle\") pod \"service-ca-84bfdbbb7f-4zjqp\" (UID: \"135ec6f3-fbc0-4840-a4b1-c1124c705161\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-4zjqp" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.287521 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5471994f-769e-4124-b7d0-01f5358fc18f-config\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.287548 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/98d99166-c42a-4169-87e8-4209570aec50-srv-cert\") pod \"catalog-operator-7d9c49f57b-tpvl4\" (UID: \"98d99166-c42a-4169-87e8-4209570aec50\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.287617 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e624e623-6d59-444d-b548-165fa5fd2581-marketplace-trusted-ca\") pod 
\"marketplace-operator-64bf9778cb-hxqgw\" (UID: \"e624e623-6d59-444d-b548-165fa5fd2581\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.287717 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/07330030-487d-4fa6-b5c3-67607355bbba-srv-cert\") pod \"olm-operator-d64cfc9db-q9hnk\" (UID: \"07330030-487d-4fa6-b5c3-67607355bbba\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.287737 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7229c42-b6bc-4ea9-946c-71a4117f53e9-catalog-content\") pod \"redhat-operators-gxjmz\" (UID: \"b7229c42-b6bc-4ea9-946c-71a4117f53e9\") " pod="openshift-marketplace/redhat-operators-gxjmz" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.287923 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15ebfbd8-0782-431a-88a3-83af328498d2-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-jwthf\" (UID: \"15ebfbd8-0782-431a-88a3-83af328498d2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-jwthf" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.287968 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/33beea0b-f77b-4388-a9c8-5710f084f961-audit-log\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.287991 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-269gt\" (UID: \"4a67ecf3-823d-4948-a5cb-8bd1eb9f259c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-269gt" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.288140 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7623a5c6-47a9-4b75-bda8-c0a2d7c67272-config\") pod \"openshift-controller-manager-operator-8565d84698-vp2hs\" (UID: \"7623a5c6-47a9-4b75-bda8-c0a2d7c67272\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-vp2hs" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.288165 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7623a5c6-47a9-4b75-bda8-c0a2d7c67272-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-vp2hs\" (UID: \"7623a5c6-47a9-4b75-bda8-c0a2d7c67272\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-vp2hs" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.288429 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/02649264-040a-41a6-9a41-8bf6416c68ff-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-j9tpt\" (UID: \"02649264-040a-41a6-9a41-8bf6416c68ff\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.288486 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c3daeefa-7842-464c-a6c9-01b44ebea477-ovnkube-script-lib\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.288720 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/980191fe-c62c-4b9e-879c-38fa8ce0a58b-serving-cert\") pod \"openshift-config-operator-64488f9d78-zsd76\" (UID: \"980191fe-c62c-4b9e-879c-38fa8ce0a58b\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.288978 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b71f537-1cc2-4645-8e50-23941635457c-trusted-ca\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.289166 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/067fdca7-c61d-470c-8421-73e0b62df3e4-tmpfs\") pod \"packageserver-659d778978-djtms\" (UID: \"067fdca7-c61d-470c-8421-73e0b62df3e4\") " pod="openshift-operator-lifecycle-manager/packageserver-659d778978-djtms" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.289181 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d862a346-ec4d-46f6-a3e2-ea8759ea0111-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-vq95t\" (UID: \"d862a346-ec4d-46f6-a3e2-ea8759ea0111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.289328 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/2b71f537-1cc2-4645-8e50-23941635457c-metrics-tls\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.289469 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/784599a3-a2ac-46ac-a4b7-9439704646cc-config\") pod \"kube-apiserver-operator-68bd585b-56nzk\" (UID: \"784599a3-a2ac-46ac-a4b7-9439704646cc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-56nzk" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.289528 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15ebfbd8-0782-431a-88a3-83af328498d2-config\") pod \"openshift-apiserver-operator-799b6db4d7-jwthf\" (UID: \"15ebfbd8-0782-431a-88a3-83af328498d2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-jwthf" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.289581 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c589179-0df4-4fe8-bfdd-965c3e7652c5-utilities\") pod \"certified-operators-94rll\" (UID: \"4c589179-0df4-4fe8-bfdd-965c3e7652c5\") " pod="openshift-marketplace/certified-operators-94rll" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.289632 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/02649264-040a-41a6-9a41-8bf6416c68ff-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-j9tpt\" (UID: \"02649264-040a-41a6-9a41-8bf6416c68ff\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt" Mar 12 21:09:05.289905 master-0 
kubenswrapper[31456]: I0312 21:09:05.289860 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d-config\") pod \"service-ca-operator-69b6fc6b88-f62j6\" (UID: \"a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-f62j6" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.289899 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 12 21:09:05.289905 master-0 kubenswrapper[31456]: I0312 21:09:05.289904 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/900228dd-2d21-4759-87da-b027b0134ad8-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5" Mar 12 21:09:05.293607 master-0 kubenswrapper[31456]: I0312 21:09:05.290187 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5471994f-769e-4124-b7d0-01f5358fc18f-serving-cert\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" Mar 12 21:09:05.309143 master-0 kubenswrapper[31456]: I0312 21:09:05.309099 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 12 21:09:05.313585 master-0 kubenswrapper[31456]: I0312 21:09:05.313556 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-etcd-client\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" 
Mar 12 21:09:05.328908 master-0 kubenswrapper[31456]: I0312 21:09:05.328878 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 12 21:09:05.338569 master-0 kubenswrapper[31456]: I0312 21:09:05.338535 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-serving-cert\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 21:09:05.348533 master-0 kubenswrapper[31456]: I0312 21:09:05.348495 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 12 21:09:05.358099 master-0 kubenswrapper[31456]: I0312 21:09:05.358054 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-encryption-config\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 21:09:05.369582 master-0 kubenswrapper[31456]: I0312 21:09:05.369551 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 12 21:09:05.374860 master-0 kubenswrapper[31456]: I0312 21:09:05.374777 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-config\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.388673 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/cf33c432-db42-4c6d-8ee4-f089e5bf8203-etc-containers\") pod 
\"catalogd-controller-manager-7f8b8b6f4c-zgjqw\" (UID: \"cf33c432-db42-4c6d-8ee4-f089e5bf8203\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.388778 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-sysconfig\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.388793 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/cf33c432-db42-4c6d-8ee4-f089e5bf8203-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-zgjqw\" (UID: \"cf33c432-db42-4c6d-8ee4-f089e5bf8203\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.388867 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-etc-kubernetes\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.388901 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-etc-kubernetes\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.388991 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: 
\"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-sysconfig\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.389042 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-cnibin\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.389103 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-run\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.389271 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-var-lib-openvswitch\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.389306 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-os-release\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.389448 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-var-lib-cni-multus\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.389538 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-hostroot\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.389573 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-cni-bin\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.389608 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-sysctl-d\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.389642 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.389743 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: 
\"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-modprobe-d\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.389790 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-run-ovn\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.389866 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/25866ff2-ce38-4bc0-83d9-0d85b8c6b0ce-hosts-file\") pod \"node-resolver-9t4hh\" (UID: \"25866ff2-ce38-4bc0-83d9-0d85b8c6b0ce\") " pod="openshift-dns/node-resolver-9t4hh" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.389952 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-multus-conf-dir\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.390015 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-slash\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.390052 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-etc-openvswitch\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.390093 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-cni-bin\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.390130 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-etc-openvswitch\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.390201 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-cnibin\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.390254 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-run-netns\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.390279 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-sysctl-d\") pod \"tuned-btxk2\" 
(UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.390309 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-cni-netd\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.390345 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6-host-etc-kube\") pod \"network-operator-7c649bf6d4-62t2f\" (UID: \"fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6\") " pod="openshift-network-operator/network-operator-7c649bf6d4-62t2f" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.390349 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.390394 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6-host-etc-kube\") pod \"network-operator-7c649bf6d4-62t2f\" (UID: \"fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6\") " pod="openshift-network-operator/network-operator-7c649bf6d4-62t2f" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.390428 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-run-netns\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.390441 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-run\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.390477 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-var-lib-openvswitch\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.390512 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/36bd483b-292e-4e82-99d6-daa612cd385a-audit-dir\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.390593 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-modprobe-d\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.390614 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-systemd-units\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.390647 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-run-ovn\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.390653 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-run-openvswitch\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.390676 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-run-openvswitch\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.390727 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-var-lib-cni-multus\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.390755 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-system-cni-dir\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.390795 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-os-release\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.390824 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/f8467055-c9c9-4485-bb60-9a79e8b91268-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-btpxl\" (UID: \"f8467055-c9c9-4485-bb60-9a79e8b91268\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.390858 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/f8467055-c9c9-4485-bb60-9a79e8b91268-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-btpxl\" (UID: \"f8467055-c9c9-4485-bb60-9a79e8b91268\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.390892 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-kubernetes\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " 
pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.390899 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-hostroot\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.390930 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-run-netns\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.390933 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-systemd\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.390957 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-cni-netd\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.390968 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/222b53b1-7e5c-49d5-9795-fec4d0547398-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"222b53b1-7e5c-49d5-9795-fec4d0547398\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 21:09:05.391092 master-0 
kubenswrapper[31456]: I0312 21:09:05.390987 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/36bd483b-292e-4e82-99d6-daa612cd385a-audit-dir\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.391002 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-var-lib-kubelet\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.391020 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-systemd-units\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.391046 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-sys\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.391051 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-system-cni-dir\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 
21:09:05.391081 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-kubernetes\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.391132 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/25866ff2-ce38-4bc0-83d9-0d85b8c6b0ce-hosts-file\") pod \"node-resolver-9t4hh\" (UID: \"25866ff2-ce38-4bc0-83d9-0d85b8c6b0ce\") " pod="openshift-dns/node-resolver-9t4hh" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.391130 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-systemd\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 21:09:05.391092 master-0 kubenswrapper[31456]: I0312 21:09:05.391164 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7667a111-e744-47b2-9603-3864347dc738-sys\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7" Mar 12 21:09:05.392752 master-0 kubenswrapper[31456]: I0312 21:09:05.391177 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-multus-conf-dir\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.392752 master-0 kubenswrapper[31456]: I0312 21:09:05.391226 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-run-systemd\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.392752 master-0 kubenswrapper[31456]: I0312 21:09:05.391225 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/222b53b1-7e5c-49d5-9795-fec4d0547398-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"222b53b1-7e5c-49d5-9795-fec4d0547398\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 21:09:05.392752 master-0 kubenswrapper[31456]: I0312 21:09:05.391265 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-var-lib-kubelet\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.392752 master-0 kubenswrapper[31456]: I0312 21:09:05.391292 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7667a111-e744-47b2-9603-3864347dc738-sys\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7" Mar 12 21:09:05.392752 master-0 kubenswrapper[31456]: I0312 21:09:05.391302 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-run-netns\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.392752 master-0 kubenswrapper[31456]: I0312 21:09:05.391322 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-run-systemd\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.392752 master-0 kubenswrapper[31456]: I0312 21:09:05.391368 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-sys\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 21:09:05.392752 master-0 kubenswrapper[31456]: I0312 21:09:05.391268 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-slash\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.392752 master-0 kubenswrapper[31456]: I0312 21:09:05.391528 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/8b96dd10-18a0-49f8-b488-63fc2b23da39-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-hdd4n\" (UID: \"8b96dd10-18a0-49f8-b488-63fc2b23da39\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" Mar 12 21:09:05.392752 master-0 kubenswrapper[31456]: I0312 21:09:05.391581 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-system-cni-dir\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.392752 master-0 kubenswrapper[31456]: I0312 21:09:05.391693 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-run-ovn-kubernetes\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.392752 master-0 kubenswrapper[31456]: I0312 21:09:05.391761 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.392752 master-0 kubenswrapper[31456]: I0312 21:09:05.391862 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-lib-modules\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 21:09:05.392752 master-0 kubenswrapper[31456]: I0312 21:09:05.391909 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/8b96dd10-18a0-49f8-b488-63fc2b23da39-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-hdd4n\" (UID: \"8b96dd10-18a0-49f8-b488-63fc2b23da39\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" Mar 12 21:09:05.392752 master-0 kubenswrapper[31456]: I0312 21:09:05.391996 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-node-log\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.392752 master-0 kubenswrapper[31456]: I0312 
21:09:05.392033 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-var-lib-cni-bin\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.392752 master-0 kubenswrapper[31456]: I0312 21:09:05.392080 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-kubelet\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.392752 master-0 kubenswrapper[31456]: I0312 21:09:05.392113 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/617f0f9c-50d5-4214-b30f-5110fd4399ec-host-slash\") pod \"iptables-alerter-krpjj\" (UID: \"617f0f9c-50d5-4214-b30f-5110fd4399ec\") " pod="openshift-network-operator/iptables-alerter-krpjj" Mar 12 21:09:05.392752 master-0 kubenswrapper[31456]: I0312 21:09:05.392190 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/222b53b1-7e5c-49d5-9795-fec4d0547398-var-lock\") pod \"installer-3-master-0\" (UID: \"222b53b1-7e5c-49d5-9795-fec4d0547398\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 21:09:05.392752 master-0 kubenswrapper[31456]: I0312 21:09:05.392225 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-log-socket\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.392752 master-0 kubenswrapper[31456]: I0312 21:09:05.392417 31456 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-var-lib-cni-bin\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.392752 master-0 kubenswrapper[31456]: I0312 21:09:05.392420 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.392752 master-0 kubenswrapper[31456]: I0312 21:09:05.392454 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-node-log\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.392752 master-0 kubenswrapper[31456]: I0312 21:09:05.392538 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/8b96dd10-18a0-49f8-b488-63fc2b23da39-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-hdd4n\" (UID: \"8b96dd10-18a0-49f8-b488-63fc2b23da39\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" Mar 12 21:09:05.392752 master-0 kubenswrapper[31456]: I0312 21:09:05.392577 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-system-cni-dir\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.392752 master-0 
kubenswrapper[31456]: I0312 21:09:05.392585 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-lib-modules\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 21:09:05.392752 master-0 kubenswrapper[31456]: I0312 21:09:05.392614 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/617f0f9c-50d5-4214-b30f-5110fd4399ec-host-slash\") pod \"iptables-alerter-krpjj\" (UID: \"617f0f9c-50d5-4214-b30f-5110fd4399ec\") " pod="openshift-network-operator/iptables-alerter-krpjj" Mar 12 21:09:05.392752 master-0 kubenswrapper[31456]: I0312 21:09:05.392622 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-run-ovn-kubernetes\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.392752 master-0 kubenswrapper[31456]: I0312 21:09:05.392642 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-log-socket\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.392752 master-0 kubenswrapper[31456]: I0312 21:09:05.392655 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c3daeefa-7842-464c-a6c9-01b44ebea477-host-kubelet\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:05.392752 master-0 kubenswrapper[31456]: I0312 21:09:05.392714 
31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/222b53b1-7e5c-49d5-9795-fec4d0547398-var-lock\") pod \"installer-3-master-0\" (UID: \"222b53b1-7e5c-49d5-9795-fec4d0547398\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 21:09:05.392752 master-0 kubenswrapper[31456]: I0312 21:09:05.392725 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/8b96dd10-18a0-49f8-b488-63fc2b23da39-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-hdd4n\" (UID: \"8b96dd10-18a0-49f8-b488-63fc2b23da39\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" Mar 12 21:09:05.392752 master-0 kubenswrapper[31456]: I0312 21:09:05.392269 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-cnibin\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 21:09:05.392752 master-0 kubenswrapper[31456]: I0312 21:09:05.392785 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-cnibin\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 21:09:05.393684 master-0 kubenswrapper[31456]: I0312 21:09:05.392924 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/7667a111-e744-47b2-9603-3864347dc738-node-exporter-wtmp\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7" Mar 12 
21:09:05.393684 master-0 kubenswrapper[31456]: I0312 21:09:05.393016 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-audit-dir\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 21:09:05.393684 master-0 kubenswrapper[31456]: I0312 21:09:05.393042 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/7667a111-e744-47b2-9603-3864347dc738-node-exporter-wtmp\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7" Mar 12 21:09:05.393684 master-0 kubenswrapper[31456]: I0312 21:09:05.393062 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/7667a111-e744-47b2-9603-3864347dc738-root\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7" Mar 12 21:09:05.393684 master-0 kubenswrapper[31456]: I0312 21:09:05.393083 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-audit-dir\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 21:09:05.393684 master-0 kubenswrapper[31456]: I0312 21:09:05.393102 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access\") pod \"installer-3-master-0\" (UID: \"222b53b1-7e5c-49d5-9795-fec4d0547398\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 21:09:05.393684 master-0 
kubenswrapper[31456]: I0312 21:09:05.393112 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/7667a111-e744-47b2-9603-3864347dc738-root\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7" Mar 12 21:09:05.393684 master-0 kubenswrapper[31456]: I0312 21:09:05.393187 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-node-pullsecrets\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 21:09:05.393684 master-0 kubenswrapper[31456]: I0312 21:09:05.393259 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-var-lib-kubelet\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 21:09:05.393684 master-0 kubenswrapper[31456]: I0312 21:09:05.393299 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-node-pullsecrets\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 21:09:05.393684 master-0 kubenswrapper[31456]: I0312 21:09:05.393328 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-host\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 21:09:05.393684 master-0 kubenswrapper[31456]: I0312 
21:09:05.393346 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-var-lib-kubelet\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 21:09:05.393684 master-0 kubenswrapper[31456]: I0312 21:09:05.393387 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-host\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 21:09:05.393684 master-0 kubenswrapper[31456]: I0312 21:09:05.393484 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/83368183-0368-44b1-9387-eed32b211988-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-g4bkd\" (UID: \"83368183-0368-44b1-9387-eed32b211988\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-g4bkd" Mar 12 21:09:05.393684 master-0 kubenswrapper[31456]: I0312 21:09:05.393551 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-sysctl-conf\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 21:09:05.393684 master-0 kubenswrapper[31456]: I0312 21:09:05.393494 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/83368183-0368-44b1-9387-eed32b211988-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-g4bkd\" (UID: \"83368183-0368-44b1-9387-eed32b211988\") " 
pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-g4bkd" Mar 12 21:09:05.394256 master-0 kubenswrapper[31456]: I0312 21:09:05.393696 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-os-release\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.394256 master-0 kubenswrapper[31456]: I0312 21:09:05.393732 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-run-multus-certs\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.394256 master-0 kubenswrapper[31456]: I0312 21:09:05.393753 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/52839a08-0871-44d3-9d22-b2f6b4383b99-etc-sysctl-conf\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 21:09:05.394256 master-0 kubenswrapper[31456]: I0312 21:09:05.393824 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-os-release\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.394256 master-0 kubenswrapper[31456]: I0312 21:09:05.393871 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-run-multus-certs\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.394256 
master-0 kubenswrapper[31456]: I0312 21:09:05.394174 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 12 21:09:05.394256 master-0 kubenswrapper[31456]: I0312 21:09:05.394168 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-multus-socket-dir-parent\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.394439 master-0 kubenswrapper[31456]: I0312 21:09:05.394244 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-multus-socket-dir-parent\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.394439 master-0 kubenswrapper[31456]: I0312 21:09:05.394289 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-run-k8s-cni-cncf-io\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.394439 master-0 kubenswrapper[31456]: I0312 21:09:05.394312 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-host-run-k8s-cni-cncf-io\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.394523 master-0 kubenswrapper[31456]: I0312 21:09:05.394401 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/cf33c432-db42-4c6d-8ee4-f089e5bf8203-etc-docker\") pod 
\"catalogd-controller-manager-7f8b8b6f4c-zgjqw\" (UID: \"cf33c432-db42-4c6d-8ee4-f089e5bf8203\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 21:09:05.394523 master-0 kubenswrapper[31456]: I0312 21:09:05.394496 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-multus-cni-dir\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.394580 master-0 kubenswrapper[31456]: I0312 21:09:05.394538 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/cf33c432-db42-4c6d-8ee4-f089e5bf8203-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-zgjqw\" (UID: \"cf33c432-db42-4c6d-8ee4-f089e5bf8203\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 21:09:05.394580 master-0 kubenswrapper[31456]: I0312 21:09:05.394554 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/83368183-0368-44b1-9387-eed32b211988-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-g4bkd\" (UID: \"83368183-0368-44b1-9387-eed32b211988\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-g4bkd" Mar 12 21:09:05.394638 master-0 kubenswrapper[31456]: I0312 21:09:05.394590 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-multus-cni-dir\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:05.394728 master-0 kubenswrapper[31456]: I0312 21:09:05.394635 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/d9152bd6-f203-469b-97fa-db274e43b40c-rootfs\") pod \"machine-config-daemon-n5wh9\" (UID: \"d9152bd6-f203-469b-97fa-db274e43b40c\") " pod="openshift-machine-config-operator/machine-config-daemon-n5wh9" Mar 12 21:09:05.394728 master-0 kubenswrapper[31456]: I0312 21:09:05.394665 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/83368183-0368-44b1-9387-eed32b211988-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-g4bkd\" (UID: \"83368183-0368-44b1-9387-eed32b211988\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-g4bkd" Mar 12 21:09:05.394782 master-0 kubenswrapper[31456]: I0312 21:09:05.394747 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d9152bd6-f203-469b-97fa-db274e43b40c-rootfs\") pod \"machine-config-daemon-n5wh9\" (UID: \"d9152bd6-f203-469b-97fa-db274e43b40c\") " pod="openshift-machine-config-operator/machine-config-daemon-n5wh9" Mar 12 21:09:05.415839 master-0 kubenswrapper[31456]: I0312 21:09:05.404388 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-audit\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 21:09:05.415839 master-0 kubenswrapper[31456]: I0312 21:09:05.415777 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 12 21:09:05.421596 master-0 kubenswrapper[31456]: I0312 21:09:05.421542 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-etcd-serving-ca\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " 
pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 21:09:05.430129 master-0 kubenswrapper[31456]: I0312 21:09:05.430077 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 12 21:09:05.434587 master-0 kubenswrapper[31456]: I0312 21:09:05.434543 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-image-import-ca\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 21:09:05.462715 master-0 kubenswrapper[31456]: I0312 21:09:05.462665 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 12 21:09:05.468668 master-0 kubenswrapper[31456]: I0312 21:09:05.468578 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 12 21:09:05.472807 master-0 kubenswrapper[31456]: I0312 21:09:05.472761 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-trusted-ca-bundle\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 21:09:05.475655 master-0 kubenswrapper[31456]: I0312 21:09:05.475631 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_077dd10388b9e3e48a07382126e86621/kube-apiserver-check-endpoints/0.log" Mar 12 21:09:05.477315 master-0 kubenswrapper[31456]: I0312 21:09:05.477279 31456 generic.go:334] "Generic (PLEG): container finished" podID="077dd10388b9e3e48a07382126e86621" containerID="1867cbd1eea641a204f5d8db13d19bc48d06f54cf7a7cbc0d8d91fbb925b3a69" exitCode=255 Mar 12 21:09:05.477429 master-0 
kubenswrapper[31456]: I0312 21:09:05.477415 31456 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 21:09:05.477554 master-0 kubenswrapper[31456]: I0312 21:09:05.477539 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 21:09:05.482928 master-0 kubenswrapper[31456]: E0312 21:09:05.482895 31456 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 12 21:09:05.486034 master-0 kubenswrapper[31456]: I0312 21:09:05.486007 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 21:09:05.488533 master-0 kubenswrapper[31456]: I0312 21:09:05.488508 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 12 21:09:05.509110 master-0 kubenswrapper[31456]: I0312 21:09:05.509074 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 12 21:09:05.513665 master-0 kubenswrapper[31456]: I0312 21:09:05.513632 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/36bd483b-292e-4e82-99d6-daa612cd385a-etcd-client\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 21:09:05.529023 master-0 kubenswrapper[31456]: I0312 21:09:05.528994 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 12 21:09:05.539281 master-0 kubenswrapper[31456]: I0312 21:09:05.539236 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/36bd483b-292e-4e82-99d6-daa612cd385a-serving-cert\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 21:09:05.549259 master-0 kubenswrapper[31456]: I0312 21:09:05.549237 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 12 21:09:05.558419 master-0 kubenswrapper[31456]: I0312 21:09:05.558375 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/36bd483b-292e-4e82-99d6-daa612cd385a-encryption-config\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 21:09:05.569090 master-0 kubenswrapper[31456]: I0312 21:09:05.569047 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 12 21:09:05.578325 master-0 kubenswrapper[31456]: I0312 21:09:05.578280 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/36bd483b-292e-4e82-99d6-daa612cd385a-audit-policies\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 21:09:05.588698 master-0 kubenswrapper[31456]: I0312 21:09:05.588644 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 12 21:09:05.590097 master-0 kubenswrapper[31456]: I0312 21:09:05.590058 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/36bd483b-292e-4e82-99d6-daa612cd385a-etcd-serving-ca\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " 
pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 21:09:05.598126 master-0 kubenswrapper[31456]: I0312 21:09:05.598090 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/222b53b1-7e5c-49d5-9795-fec4d0547398-var-lock\") pod \"222b53b1-7e5c-49d5-9795-fec4d0547398\" (UID: \"222b53b1-7e5c-49d5-9795-fec4d0547398\") " Mar 12 21:09:05.598229 master-0 kubenswrapper[31456]: I0312 21:09:05.598148 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/222b53b1-7e5c-49d5-9795-fec4d0547398-var-lock" (OuterVolumeSpecName: "var-lock") pod "222b53b1-7e5c-49d5-9795-fec4d0547398" (UID: "222b53b1-7e5c-49d5-9795-fec4d0547398"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:09:05.598287 master-0 kubenswrapper[31456]: I0312 21:09:05.598264 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/222b53b1-7e5c-49d5-9795-fec4d0547398-kubelet-dir\") pod \"222b53b1-7e5c-49d5-9795-fec4d0547398\" (UID: \"222b53b1-7e5c-49d5-9795-fec4d0547398\") " Mar 12 21:09:05.598383 master-0 kubenswrapper[31456]: I0312 21:09:05.598363 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/222b53b1-7e5c-49d5-9795-fec4d0547398-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "222b53b1-7e5c-49d5-9795-fec4d0547398" (UID: "222b53b1-7e5c-49d5-9795-fec4d0547398"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:09:05.599772 master-0 kubenswrapper[31456]: I0312 21:09:05.599747 31456 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/222b53b1-7e5c-49d5-9795-fec4d0547398-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 21:09:05.599772 master-0 kubenswrapper[31456]: I0312 21:09:05.599768 31456 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/222b53b1-7e5c-49d5-9795-fec4d0547398-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 21:09:05.609328 master-0 kubenswrapper[31456]: I0312 21:09:05.609252 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 12 21:09:05.618301 master-0 kubenswrapper[31456]: I0312 21:09:05.618266 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36bd483b-292e-4e82-99d6-daa612cd385a-trusted-ca-bundle\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 21:09:05.628366 master-0 kubenswrapper[31456]: I0312 21:09:05.628289 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 12 21:09:05.649349 master-0 kubenswrapper[31456]: I0312 21:09:05.649298 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 12 21:09:05.658646 master-0 kubenswrapper[31456]: I0312 21:09:05.658616 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/226cb3a1-984f-4410-96e6-c007131dc074-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-kbwlh\" (UID: 
\"226cb3a1-984f-4410-96e6-c007131dc074\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh" Mar 12 21:09:05.670131 master-0 kubenswrapper[31456]: I0312 21:09:05.670077 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 12 21:09:05.678651 master-0 kubenswrapper[31456]: I0312 21:09:05.678600 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31747c5d-7e29-4a74-b8d5-3d8efa5e900b-config-volume\") pod \"dns-default-pp258\" (UID: \"31747c5d-7e29-4a74-b8d5-3d8efa5e900b\") " pod="openshift-dns/dns-default-pp258" Mar 12 21:09:05.689503 master-0 kubenswrapper[31456]: I0312 21:09:05.689460 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 12 21:09:05.699906 master-0 kubenswrapper[31456]: I0312 21:09:05.699869 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/31747c5d-7e29-4a74-b8d5-3d8efa5e900b-metrics-tls\") pod \"dns-default-pp258\" (UID: \"31747c5d-7e29-4a74-b8d5-3d8efa5e900b\") " pod="openshift-dns/dns-default-pp258" Mar 12 21:09:05.709414 master-0 kubenswrapper[31456]: I0312 21:09:05.709360 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 12 21:09:05.729186 master-0 kubenswrapper[31456]: I0312 21:09:05.729133 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 12 21:09:05.734519 master-0 kubenswrapper[31456]: I0312 21:09:05.734476 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/cf33c432-db42-4c6d-8ee4-f089e5bf8203-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-zgjqw\" (UID: \"cf33c432-db42-4c6d-8ee4-f089e5bf8203\") " 
pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 21:09:05.748990 master-0 kubenswrapper[31456]: I0312 21:09:05.748953 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 12 21:09:05.753967 master-0 kubenswrapper[31456]: I0312 21:09:05.753909 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96bd86df-2101-47f5-844b-1332261c66f1-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-f2kg4\" (UID: \"96bd86df-2101-47f5-844b-1332261c66f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-f2kg4" Mar 12 21:09:05.777381 master-0 kubenswrapper[31456]: I0312 21:09:05.777331 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 12 21:09:05.782781 master-0 kubenswrapper[31456]: I0312 21:09:05.782739 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/900228dd-2d21-4759-87da-b027b0134ad8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5" Mar 12 21:09:05.791299 master-0 kubenswrapper[31456]: I0312 21:09:05.791257 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 12 21:09:05.813854 master-0 kubenswrapper[31456]: I0312 21:09:05.813304 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 12 21:09:05.831949 master-0 kubenswrapper[31456]: I0312 21:09:05.831895 31456 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 12 21:09:05.850141 master-0 kubenswrapper[31456]: I0312 21:09:05.849989 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 12 21:09:05.870169 master-0 kubenswrapper[31456]: I0312 21:09:05.870089 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 12 21:09:05.873458 master-0 kubenswrapper[31456]: I0312 21:09:05.872753 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a3828a1d-8180-4c7b-b423-4488f7fc0b76-metrics-certs\") pod \"router-default-79f8cd6fdd-hsv57\" (UID: \"a3828a1d-8180-4c7b-b423-4488f7fc0b76\") " pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" Mar 12 21:09:05.894062 master-0 kubenswrapper[31456]: I0312 21:09:05.893934 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 12 21:09:05.909652 master-0 kubenswrapper[31456]: I0312 21:09:05.909580 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 12 21:09:05.919522 master-0 kubenswrapper[31456]: I0312 21:09:05.919471 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3828a1d-8180-4c7b-b423-4488f7fc0b76-service-ca-bundle\") pod \"router-default-79f8cd6fdd-hsv57\" (UID: \"a3828a1d-8180-4c7b-b423-4488f7fc0b76\") " pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" Mar 12 21:09:05.928970 master-0 kubenswrapper[31456]: I0312 21:09:05.928926 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 12 21:09:05.938286 master-0 kubenswrapper[31456]: I0312 21:09:05.937540 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"default-certificate\" (UniqueName: \"kubernetes.io/secret/a3828a1d-8180-4c7b-b423-4488f7fc0b76-default-certificate\") pod \"router-default-79f8cd6fdd-hsv57\" (UID: \"a3828a1d-8180-4c7b-b423-4488f7fc0b76\") " pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" Mar 12 21:09:05.949164 master-0 kubenswrapper[31456]: I0312 21:09:05.949103 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 12 21:09:05.980791 master-0 kubenswrapper[31456]: I0312 21:09:05.980746 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 12 21:09:05.988875 master-0 kubenswrapper[31456]: I0312 21:09:05.988800 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8b96dd10-18a0-49f8-b488-63fc2b23da39-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-hdd4n\" (UID: \"8b96dd10-18a0-49f8-b488-63fc2b23da39\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" Mar 12 21:09:05.989940 master-0 kubenswrapper[31456]: I0312 21:09:05.989907 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 12 21:09:06.009181 master-0 kubenswrapper[31456]: I0312 21:09:06.009129 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 12 21:09:06.029278 master-0 kubenswrapper[31456]: I0312 21:09:06.029237 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 12 21:09:06.035060 master-0 kubenswrapper[31456]: I0312 21:09:06.035028 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a3828a1d-8180-4c7b-b423-4488f7fc0b76-stats-auth\") pod 
\"router-default-79f8cd6fdd-hsv57\" (UID: \"a3828a1d-8180-4c7b-b423-4488f7fc0b76\") " pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" Mar 12 21:09:06.048976 master-0 kubenswrapper[31456]: I0312 21:09:06.048913 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 12 21:09:06.069103 master-0 kubenswrapper[31456]: I0312 21:09:06.069045 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 12 21:09:06.077953 master-0 kubenswrapper[31456]: I0312 21:09:06.077911 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83368183-0368-44b1-9387-eed32b211988-serving-cert\") pod \"cluster-version-operator-8c9c967c7-g4bkd\" (UID: \"83368183-0368-44b1-9387-eed32b211988\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-g4bkd" Mar 12 21:09:06.089301 master-0 kubenswrapper[31456]: I0312 21:09:06.089209 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 12 21:09:06.098144 master-0 kubenswrapper[31456]: I0312 21:09:06.098101 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83368183-0368-44b1-9387-eed32b211988-service-ca\") pod \"cluster-version-operator-8c9c967c7-g4bkd\" (UID: \"83368183-0368-44b1-9387-eed32b211988\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-g4bkd" Mar 12 21:09:06.108931 master-0 kubenswrapper[31456]: I0312 21:09:06.108887 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 12 21:09:06.115439 master-0 kubenswrapper[31456]: I0312 21:09:06.115379 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/067fdca7-c61d-470c-8421-73e0b62df3e4-apiservice-cert\") pod \"packageserver-659d778978-djtms\" (UID: \"067fdca7-c61d-470c-8421-73e0b62df3e4\") " pod="openshift-operator-lifecycle-manager/packageserver-659d778978-djtms" Mar 12 21:09:06.119301 master-0 kubenswrapper[31456]: I0312 21:09:06.119188 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/067fdca7-c61d-470c-8421-73e0b62df3e4-webhook-cert\") pod \"packageserver-659d778978-djtms\" (UID: \"067fdca7-c61d-470c-8421-73e0b62df3e4\") " pod="openshift-operator-lifecycle-manager/packageserver-659d778978-djtms" Mar 12 21:09:06.129019 master-0 kubenswrapper[31456]: I0312 21:09:06.128973 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 12 21:09:06.150422 master-0 kubenswrapper[31456]: I0312 21:09:06.150332 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-t5dxh" Mar 12 21:09:06.174787 master-0 kubenswrapper[31456]: I0312 21:09:06.173838 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 12 21:09:06.189074 master-0 kubenswrapper[31456]: I0312 21:09:06.189018 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-v7qw9" Mar 12 21:09:06.209801 master-0 kubenswrapper[31456]: I0312 21:09:06.209758 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 12 21:09:06.219341 master-0 kubenswrapper[31456]: I0312 21:09:06.219278 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e03d34d0-f7c1-4dcf-8b84-89ad647cc10f-control-plane-machine-set-operator-tls\") pod 
\"control-plane-machine-set-operator-6686554ddc-xzwfp\" (UID: \"e03d34d0-f7c1-4dcf-8b84-89ad647cc10f\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-xzwfp" Mar 12 21:09:06.223777 master-0 kubenswrapper[31456]: I0312 21:09:06.223544 31456 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 21:09:06.229536 master-0 kubenswrapper[31456]: I0312 21:09:06.226920 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 12 21:09:06.229536 master-0 kubenswrapper[31456]: I0312 21:09:06.226960 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 12 21:09:06.229536 master-0 kubenswrapper[31456]: I0312 21:09:06.226974 31456 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 12 21:09:06.229536 master-0 kubenswrapper[31456]: I0312 21:09:06.227344 31456 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 12 21:09:06.229536 master-0 kubenswrapper[31456]: I0312 21:09:06.227477 31456 request.go:700] Waited for 1.012590255s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&limit=500&resourceVersion=0 Mar 12 21:09:06.229536 master-0 kubenswrapper[31456]: I0312 21:09:06.229194 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Mar 12 21:09:06.238998 master-0 kubenswrapper[31456]: I0312 21:09:06.238953 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc-service-ca-bundle\") pod \"insights-operator-8f89dfddd-lc7jk\" (UID: \"a5d1e064-c12b-4c1d-b499-4e301ca8a8dc\") " 
pod="openshift-insights/insights-operator-8f89dfddd-lc7jk" Mar 12 21:09:06.249917 master-0 kubenswrapper[31456]: I0312 21:09:06.249876 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-7875j" Mar 12 21:09:06.268925 master-0 kubenswrapper[31456]: I0312 21:09:06.268882 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-cdrqx" Mar 12 21:09:06.272089 master-0 kubenswrapper[31456]: E0312 21:09:06.272052 31456 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.272153 master-0 kubenswrapper[31456]: E0312 21:09:06.272133 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-client-ca podName:b50a6106-1112-4a4b-b4ae-933879e12110 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.772111049 +0000 UTC m=+7.846716377 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-client-ca") pod "controller-manager-759579d7c9-wjl25" (UID: "b50a6106-1112-4a4b-b4ae-933879e12110") : failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.272288 master-0 kubenswrapper[31456]: E0312 21:09:06.272250 31456 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.272355 master-0 kubenswrapper[31456]: E0312 21:09:06.272335 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b50a6106-1112-4a4b-b4ae-933879e12110-serving-cert podName:b50a6106-1112-4a4b-b4ae-933879e12110 nodeName:}" failed. 
No retries permitted until 2026-03-12 21:09:06.772316413 +0000 UTC m=+7.846921771 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b50a6106-1112-4a4b-b4ae-933879e12110-serving-cert") pod "controller-manager-759579d7c9-wjl25" (UID: "b50a6106-1112-4a4b-b4ae-933879e12110") : failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.272417 master-0 kubenswrapper[31456]: E0312 21:09:06.272394 31456 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.272474 master-0 kubenswrapper[31456]: E0312 21:09:06.272461 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90f16d8c-25b6-4827-85d9-0995e4e1ab15-tls-certificates podName:90f16d8c-25b6-4827-85d9-0995e4e1ab15 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.772447636 +0000 UTC m=+7.847052984 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/90f16d8c-25b6-4827-85d9-0995e4e1ab15-tls-certificates") pod "prometheus-operator-admission-webhook-8464df8497-dfmtk" (UID: "90f16d8c-25b6-4827-85d9-0995e4e1ab15") : failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.273208 master-0 kubenswrapper[31456]: E0312 21:09:06.273169 31456 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.273303 master-0 kubenswrapper[31456]: E0312 21:09:06.273269 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05fd1378-3935-4caf-96c5-17cf7e29417f-cloud-credential-operator-serving-cert podName:05fd1378-3935-4caf-96c5-17cf7e29417f nodeName:}" failed. 
No retries permitted until 2026-03-12 21:09:06.773244266 +0000 UTC m=+7.847849604 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/05fd1378-3935-4caf-96c5-17cf7e29417f-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-55d85b7b47-j79ht" (UID: "05fd1378-3935-4caf-96c5-17cf7e29417f") : failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.273570 master-0 kubenswrapper[31456]: E0312 21:09:06.273545 31456 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.273630 master-0 kubenswrapper[31456]: E0312 21:09:06.273608 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b71376ea-e248-48fc-b2c4-1de7236ddd31-cert podName:b71376ea-e248-48fc-b2c4-1de7236ddd31 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.773591334 +0000 UTC m=+7.848196672 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b71376ea-e248-48fc-b2c4-1de7236ddd31-cert") pod "cluster-autoscaler-operator-69576476f7-r6rcq" (UID: "b71376ea-e248-48fc-b2c4-1de7236ddd31") : failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.274683 master-0 kubenswrapper[31456]: E0312 21:09:06.274654 31456 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.274749 master-0 kubenswrapper[31456]: E0312 21:09:06.274709 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d850d441-7505-4e81-b4cf-6e7a9911ae35-config podName:d850d441-7505-4e81-b4cf-6e7a9911ae35 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.774696341 +0000 UTC m=+7.849301689 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d850d441-7505-4e81-b4cf-6e7a9911ae35-config") pod "route-controller-manager-8467b998d8-l9fvg" (UID: "d850d441-7505-4e81-b4cf-6e7a9911ae35") : failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.274749 master-0 kubenswrapper[31456]: E0312 21:09:06.274714 31456 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.274749 master-0 kubenswrapper[31456]: E0312 21:09:06.274735 31456 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.274905 master-0 kubenswrapper[31456]: E0312 21:09:06.274769 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f8467055-c9c9-4485-bb60-9a79e8b91268-images podName:f8467055-c9c9-4485-bb60-9a79e8b91268 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.774757053 +0000 UTC m=+7.849362391 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/f8467055-c9c9-4485-bb60-9a79e8b91268-images") pod "cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" (UID: "f8467055-c9c9-4485-bb60-9a79e8b91268") : failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.274905 master-0 kubenswrapper[31456]: E0312 21:09:06.274783 31456 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.274905 master-0 kubenswrapper[31456]: E0312 21:09:06.274791 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed1c4da2-564b-4354-a4ec-27b801098aa5-openshift-state-metrics-kube-rbac-proxy-config podName:ed1c4da2-564b-4354-a4ec-27b801098aa5 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.774779793 +0000 UTC m=+7.849385131 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/ed1c4da2-564b-4354-a4ec-27b801098aa5-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-74cc79fd76-bdmlf" (UID: "ed1c4da2-564b-4354-a4ec-27b801098aa5") : failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.274905 master-0 kubenswrapper[31456]: E0312 21:09:06.274824 31456 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.274905 master-0 kubenswrapper[31456]: E0312 21:09:06.274864 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf33c432-db42-4c6d-8ee4-f089e5bf8203-ca-certs podName:cf33c432-db42-4c6d-8ee4-f089e5bf8203 nodeName:}" failed. 
No retries permitted until 2026-03-12 21:09:06.774851795 +0000 UTC m=+7.849457133 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/cf33c432-db42-4c6d-8ee4-f089e5bf8203-ca-certs") pod "catalogd-controller-manager-7f8b8b6f4c-zgjqw" (UID: "cf33c432-db42-4c6d-8ee4-f089e5bf8203") : failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.274905 master-0 kubenswrapper[31456]: E0312 21:09:06.274866 31456 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.274905 master-0 kubenswrapper[31456]: E0312 21:09:06.274886 31456 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.274905 master-0 kubenswrapper[31456]: E0312 21:09:06.274901 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b71376ea-e248-48fc-b2c4-1de7236ddd31-auth-proxy-config podName:b71376ea-e248-48fc-b2c4-1de7236ddd31 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.774893046 +0000 UTC m=+7.849498384 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/b71376ea-e248-48fc-b2c4-1de7236ddd31-auth-proxy-config") pod "cluster-autoscaler-operator-69576476f7-r6rcq" (UID: "b71376ea-e248-48fc-b2c4-1de7236ddd31") : failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.275218 master-0 kubenswrapper[31456]: E0312 21:09:06.274917 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/508cb83e-6f25-4235-8c56-b25b762ebcad-auth-proxy-config podName:508cb83e-6f25-4235-8c56-b25b762ebcad nodeName:}" failed. 
No retries permitted until 2026-03-12 21:09:06.774908346 +0000 UTC m=+7.849513694 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/508cb83e-6f25-4235-8c56-b25b762ebcad-auth-proxy-config") pod "machine-config-operator-fdb5c78b5-7p8w8" (UID: "508cb83e-6f25-4235-8c56-b25b762ebcad") : failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.275218 master-0 kubenswrapper[31456]: E0312 21:09:06.274934 31456 secret.go:189] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.275218 master-0 kubenswrapper[31456]: E0312 21:09:06.274936 31456 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.275218 master-0 kubenswrapper[31456]: E0312 21:09:06.274970 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5d6705e-e564-4774-94b4-ef11956c67b2-node-bootstrap-token podName:a5d6705e-e564-4774-94b4-ef11956c67b2 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.774960767 +0000 UTC m=+7.849566245 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/a5d6705e-e564-4774-94b4-ef11956c67b2-node-bootstrap-token") pod "machine-config-server-mz2sr" (UID: "a5d6705e-e564-4774-94b4-ef11956c67b2") : failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.275218 master-0 kubenswrapper[31456]: E0312 21:09:06.274993 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17d2bb40-74e2-4894-a884-7018952bdf71-cluster-baremetal-operator-tls podName:17d2bb40-74e2-4894-a884-7018952bdf71 nodeName:}" failed. 
No retries permitted until 2026-03-12 21:09:06.774982348 +0000 UTC m=+7.849587826 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/17d2bb40-74e2-4894-a884-7018952bdf71-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5cdb4c5598-fnxjc" (UID: "17d2bb40-74e2-4894-a884-7018952bdf71") : failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.277204 master-0 kubenswrapper[31456]: E0312 21:09:06.277169 31456 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.277264 master-0 kubenswrapper[31456]: E0312 21:09:06.277227 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed1c4da2-564b-4354-a4ec-27b801098aa5-metrics-client-ca podName:ed1c4da2-564b-4354-a4ec-27b801098aa5 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.777214472 +0000 UTC m=+7.851819810 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/ed1c4da2-564b-4354-a4ec-27b801098aa5-metrics-client-ca") pod "openshift-state-metrics-74cc79fd76-bdmlf" (UID: "ed1c4da2-564b-4354-a4ec-27b801098aa5") : failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.278432 master-0 kubenswrapper[31456]: E0312 21:09:06.278401 31456 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.278499 master-0 kubenswrapper[31456]: E0312 21:09:06.278467 31456 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.278562 master-0 kubenswrapper[31456]: E0312 21:09:06.278469 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f3afe47-c537-420c-b5be-1cad612e119d-cluster-storage-operator-serving-cert podName:7f3afe47-c537-420c-b5be-1cad612e119d nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.778454832 +0000 UTC m=+7.853060180 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/7f3afe47-c537-420c-b5be-1cad612e119d-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-6fbfc8dc8f-ftxzs" (UID: "7f3afe47-c537-420c-b5be-1cad612e119d") : failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.278605 master-0 kubenswrapper[31456]: E0312 21:09:06.278569 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-kube-state-metrics-tls podName:4ebc9ee1-3913-4112-bb3f-c79f2c08032b nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.778550624 +0000 UTC m=+7.853155982 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-kube-state-metrics-tls") pod "kube-state-metrics-68b88f8cb5-4tfmr" (UID: "4ebc9ee1-3913-4112-bb3f-c79f2c08032b") : failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.279555 master-0 kubenswrapper[31456]: E0312 21:09:06.278636 31456 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.279555 master-0 kubenswrapper[31456]: E0312 21:09:06.278684 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-kube-state-metrics-kube-rbac-proxy-config podName:4ebc9ee1-3913-4112-bb3f-c79f2c08032b nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.778673377 +0000 UTC m=+7.853278715 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-68b88f8cb5-4tfmr" (UID: "4ebc9ee1-3913-4112-bb3f-c79f2c08032b") : failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.279661 master-0 kubenswrapper[31456]: E0312 21:09:06.279550 31456 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.279661 master-0 kubenswrapper[31456]: E0312 21:09:06.279637 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17d2bb40-74e2-4894-a884-7018952bdf71-config podName:17d2bb40-74e2-4894-a884-7018952bdf71 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.779625551 +0000 UTC m=+7.854230889 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/17d2bb40-74e2-4894-a884-7018952bdf71-config") pod "cluster-baremetal-operator-5cdb4c5598-fnxjc" (UID: "17d2bb40-74e2-4894-a884-7018952bdf71") : failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.279661 master-0 kubenswrapper[31456]: E0312 21:09:06.279659 31456 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.279762 master-0 kubenswrapper[31456]: E0312 21:09:06.279730 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32050f14-1939-41bf-a824-22016b90c189-samples-operator-tls podName:32050f14-1939-41bf-a824-22016b90c189 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.779718873 +0000 UTC m=+7.854324211 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/32050f14-1939-41bf-a824-22016b90c189-samples-operator-tls") pod "cluster-samples-operator-664cb58b85-wjpf9" (UID: "32050f14-1939-41bf-a824-22016b90c189") : failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.279966 master-0 kubenswrapper[31456]: E0312 21:09:06.279930 31456 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-4jamj9cd05on6: failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.280026 master-0 kubenswrapper[31456]: E0312 21:09:06.279965 31456 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.280026 master-0 kubenswrapper[31456]: E0312 21:09:06.280013 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-client-ca-bundle podName:33beea0b-f77b-4388-a9c8-5710f084f961 nodeName:}" 
failed. No retries permitted until 2026-03-12 21:09:06.779997239 +0000 UTC m=+7.854602727 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-client-ca-bundle") pod "metrics-server-5bbfd655db-2tsb8" (UID: "33beea0b-f77b-4388-a9c8-5710f084f961") : failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.280112 master-0 kubenswrapper[31456]: E0312 21:09:06.280040 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/67e68ff0-f54d-4973-bbe7-ed43ce542bc0-config podName:67e68ff0-f54d-4973-bbe7-ed43ce542bc0 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.78002886 +0000 UTC m=+7.854634328 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/67e68ff0-f54d-4973-bbe7-ed43ce542bc0-config") pod "machine-api-operator-84bf6db4f9-sh67s" (UID: "67e68ff0-f54d-4973-bbe7-ed43ce542bc0") : failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.280112 master-0 kubenswrapper[31456]: E0312 21:09:06.279940 31456 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.280112 master-0 kubenswrapper[31456]: E0312 21:09:06.280097 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99-metrics-client-ca podName:ea339fe1-c013-4c4b-90c9-aaaa7eb40d99 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.780086032 +0000 UTC m=+7.854691470 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99-metrics-client-ca") pod "prometheus-operator-5ff8674d55-8fpdl" (UID: "ea339fe1-c013-4c4b-90c9-aaaa7eb40d99") : failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.282059 master-0 kubenswrapper[31456]: E0312 21:09:06.282032 31456 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.282117 master-0 kubenswrapper[31456]: E0312 21:09:06.282095 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99-prometheus-operator-kube-rbac-proxy-config podName:ea339fe1-c013-4c4b-90c9-aaaa7eb40d99 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.782081389 +0000 UTC m=+7.856686727 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-5ff8674d55-8fpdl" (UID: "ea339fe1-c013-4c4b-90c9-aaaa7eb40d99") : failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.283173 master-0 kubenswrapper[31456]: E0312 21:09:06.283126 31456 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.283248 master-0 kubenswrapper[31456]: E0312 21:09:06.283198 31456 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.283248 master-0 kubenswrapper[31456]: E0312 21:09:06.283232 31456 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/a539e1c7-3799-4d43-8f2f-d5e5c0ffd918-cert podName:a539e1c7-3799-4d43-8f2f-d5e5c0ffd918 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.783218797 +0000 UTC m=+7.857824125 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a539e1c7-3799-4d43-8f2f-d5e5c0ffd918-cert") pod "ingress-canary-67vs7" (UID: "a539e1c7-3799-4d43-8f2f-d5e5c0ffd918") : failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.283348 master-0 kubenswrapper[31456]: E0312 21:09:06.283253 31456 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.283348 master-0 kubenswrapper[31456]: E0312 21:09:06.283263 31456 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.283348 master-0 kubenswrapper[31456]: E0312 21:09:06.283279 31456 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.283348 master-0 kubenswrapper[31456]: E0312 21:09:06.283285 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/508cb83e-6f25-4235-8c56-b25b762ebcad-proxy-tls podName:508cb83e-6f25-4235-8c56-b25b762ebcad nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.783249368 +0000 UTC m=+7.857854806 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/508cb83e-6f25-4235-8c56-b25b762ebcad-proxy-tls") pod "machine-config-operator-fdb5c78b5-7p8w8" (UID: "508cb83e-6f25-4235-8c56-b25b762ebcad") : failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.283348 master-0 kubenswrapper[31456]: E0312 21:09:06.283301 31456 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.283348 master-0 kubenswrapper[31456]: E0312 21:09:06.283326 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8467055-c9c9-4485-bb60-9a79e8b91268-cloud-controller-manager-operator-tls podName:f8467055-c9c9-4485-bb60-9a79e8b91268 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.783311329 +0000 UTC m=+7.857916687 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/f8467055-c9c9-4485-bb60-9a79e8b91268-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" (UID: "f8467055-c9c9-4485-bb60-9a79e8b91268") : failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.283348 master-0 kubenswrapper[31456]: E0312 21:09:06.283184 31456 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.283624 master-0 kubenswrapper[31456]: E0312 21:09:06.283333 31456 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.283624 master-0 kubenswrapper[31456]: E0312 21:09:06.283366 31456 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/400a13b5-c489-4beb-af33-94e635b86148-auth-proxy-config podName:400a13b5-c489-4beb-af33-94e635b86148 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.78335378 +0000 UTC m=+7.857959128 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/400a13b5-c489-4beb-af33-94e635b86148-auth-proxy-config") pod "machine-approver-754bdc9f9d-hj9bb" (UID: "400a13b5-c489-4beb-af33-94e635b86148") : failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.283624 master-0 kubenswrapper[31456]: E0312 21:09:06.283390 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/400a13b5-c489-4beb-af33-94e635b86148-config podName:400a13b5-c489-4beb-af33-94e635b86148 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.783375611 +0000 UTC m=+7.857981019 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/400a13b5-c489-4beb-af33-94e635b86148-config") pod "machine-approver-754bdc9f9d-hj9bb" (UID: "400a13b5-c489-4beb-af33-94e635b86148") : failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.283624 master-0 kubenswrapper[31456]: E0312 21:09:06.283403 31456 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.283624 master-0 kubenswrapper[31456]: E0312 21:09:06.283410 31456 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.283624 master-0 kubenswrapper[31456]: E0312 21:09:06.283419 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-proxy-ca-bundles podName:b50a6106-1112-4a4b-b4ae-933879e12110 nodeName:}" 
failed. No retries permitted until 2026-03-12 21:09:06.783407711 +0000 UTC m=+7.858013149 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-proxy-ca-bundles") pod "controller-manager-759579d7c9-wjl25" (UID: "b50a6106-1112-4a4b-b4ae-933879e12110") : failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.283624 master-0 kubenswrapper[31456]: E0312 21:09:06.283421 31456 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.283624 master-0 kubenswrapper[31456]: E0312 21:09:06.283445 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/400a13b5-c489-4beb-af33-94e635b86148-machine-approver-tls podName:400a13b5-c489-4beb-af33-94e635b86148 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.783434442 +0000 UTC m=+7.858039850 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/400a13b5-c489-4beb-af33-94e635b86148-machine-approver-tls") pod "machine-approver-754bdc9f9d-hj9bb" (UID: "400a13b5-c489-4beb-af33-94e635b86148") : failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.283624 master-0 kubenswrapper[31456]: E0312 21:09:06.283466 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d850d441-7505-4e81-b4cf-6e7a9911ae35-client-ca podName:d850d441-7505-4e81-b4cf-6e7a9911ae35 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.783456363 +0000 UTC m=+7.858061811 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d850d441-7505-4e81-b4cf-6e7a9911ae35-client-ca") pod "route-controller-manager-8467b998d8-l9fvg" (UID: "d850d441-7505-4e81-b4cf-6e7a9911ae35") : failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.283624 master-0 kubenswrapper[31456]: E0312 21:09:06.283489 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/67e68ff0-f54d-4973-bbe7-ed43ce542bc0-images podName:67e68ff0-f54d-4973-bbe7-ed43ce542bc0 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.783478273 +0000 UTC m=+7.858083711 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/67e68ff0-f54d-4973-bbe7-ed43ce542bc0-images") pod "machine-api-operator-84bf6db4f9-sh67s" (UID: "67e68ff0-f54d-4973-bbe7-ed43ce542bc0") : failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.283624 master-0 kubenswrapper[31456]: E0312 21:09:06.283507 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d850d441-7505-4e81-b4cf-6e7a9911ae35-serving-cert podName:d850d441-7505-4e81-b4cf-6e7a9911ae35 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.783498664 +0000 UTC m=+7.858104112 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d850d441-7505-4e81-b4cf-6e7a9911ae35-serving-cert") pod "route-controller-manager-8467b998d8-l9fvg" (UID: "d850d441-7505-4e81-b4cf-6e7a9911ae35") : failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.283624 master-0 kubenswrapper[31456]: E0312 21:09:06.283525 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-secret-metrics-server-tls podName:33beea0b-f77b-4388-a9c8-5710f084f961 nodeName:}" failed. 
No retries permitted until 2026-03-12 21:09:06.783518464 +0000 UTC m=+7.858123802 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-secret-metrics-server-tls") pod "metrics-server-5bbfd655db-2tsb8" (UID: "33beea0b-f77b-4388-a9c8-5710f084f961") : failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.284556 master-0 kubenswrapper[31456]: E0312 21:09:06.284530 31456 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.284611 master-0 kubenswrapper[31456]: E0312 21:09:06.284580 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7667a111-e744-47b2-9603-3864347dc738-node-exporter-kube-rbac-proxy-config podName:7667a111-e744-47b2-9603-3864347dc738 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.78456734 +0000 UTC m=+7.859172678 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/7667a111-e744-47b2-9603-3864347dc738-node-exporter-kube-rbac-proxy-config") pod "node-exporter-lkmd7" (UID: "7667a111-e744-47b2-9603-3864347dc738") : failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.284611 master-0 kubenswrapper[31456]: E0312 21:09:06.284592 31456 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.284687 master-0 kubenswrapper[31456]: E0312 21:09:06.284674 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-kube-state-metrics-custom-resource-state-configmap podName:4ebc9ee1-3913-4112-bb3f-c79f2c08032b nodeName:}" failed. 
No retries permitted until 2026-03-12 21:09:06.784661962 +0000 UTC m=+7.859267300 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-68b88f8cb5-4tfmr" (UID: "4ebc9ee1-3913-4112-bb3f-c79f2c08032b") : failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.286846 master-0 kubenswrapper[31456]: E0312 21:09:06.286786 31456 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.286846 master-0 kubenswrapper[31456]: E0312 21:09:06.286823 31456 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.286955 master-0 kubenswrapper[31456]: E0312 21:09:06.286857 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/90f0e4da-71d4-4c4e-a2fc-9ef588daaf72-mcc-auth-proxy-config podName:90f0e4da-71d4-4c4e-a2fc-9ef588daaf72 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.786845005 +0000 UTC m=+7.861450353 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "mcc-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/90f0e4da-71d4-4c4e-a2fc-9ef588daaf72-mcc-auth-proxy-config") pod "machine-config-controller-ff46b7bdf-c7jz8" (UID: "90f0e4da-71d4-4c4e-a2fc-9ef588daaf72") : failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.286955 master-0 kubenswrapper[31456]: E0312 21:09:06.286862 31456 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.286955 master-0 kubenswrapper[31456]: E0312 21:09:06.286877 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90f0e4da-71d4-4c4e-a2fc-9ef588daaf72-proxy-tls podName:90f0e4da-71d4-4c4e-a2fc-9ef588daaf72 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.786868686 +0000 UTC m=+7.861474034 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/90f0e4da-71d4-4c4e-a2fc-9ef588daaf72-proxy-tls") pod "machine-config-controller-ff46b7bdf-c7jz8" (UID: "90f0e4da-71d4-4c4e-a2fc-9ef588daaf72") : failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.286955 master-0 kubenswrapper[31456]: E0312 21:09:06.286898 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67e68ff0-f54d-4973-bbe7-ed43ce542bc0-machine-api-operator-tls podName:67e68ff0-f54d-4973-bbe7-ed43ce542bc0 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.786889096 +0000 UTC m=+7.861494434 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/67e68ff0-f54d-4973-bbe7-ed43ce542bc0-machine-api-operator-tls") pod "machine-api-operator-84bf6db4f9-sh67s" (UID: "67e68ff0-f54d-4973-bbe7-ed43ce542bc0") : failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.287414 master-0 kubenswrapper[31456]: E0312 21:09:06.287378 31456 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.287463 master-0 kubenswrapper[31456]: E0312 21:09:06.287429 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/05fd1378-3935-4caf-96c5-17cf7e29417f-cco-trusted-ca podName:05fd1378-3935-4caf-96c5-17cf7e29417f nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.787419428 +0000 UTC m=+7.862024766 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/05fd1378-3935-4caf-96c5-17cf7e29417f-cco-trusted-ca") pod "cloud-credential-operator-55d85b7b47-j79ht" (UID: "05fd1378-3935-4caf-96c5-17cf7e29417f") : failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.287463 master-0 kubenswrapper[31456]: E0312 21:09:06.287453 31456 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.287545 master-0 kubenswrapper[31456]: E0312 21:09:06.287481 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7667a111-e744-47b2-9603-3864347dc738-node-exporter-tls podName:7667a111-e744-47b2-9603-3864347dc738 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.78747448 +0000 UTC m=+7.862079818 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/7667a111-e744-47b2-9603-3864347dc738-node-exporter-tls") pod "node-exporter-lkmd7" (UID: "7667a111-e744-47b2-9603-3864347dc738") : failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.287545 master-0 kubenswrapper[31456]: E0312 21:09:06.287498 31456 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.287545 master-0 kubenswrapper[31456]: E0312 21:09:06.287523 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed1c4da2-564b-4354-a4ec-27b801098aa5-openshift-state-metrics-tls podName:ed1c4da2-564b-4354-a4ec-27b801098aa5 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.787517041 +0000 UTC m=+7.862122379 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/ed1c4da2-564b-4354-a4ec-27b801098aa5-openshift-state-metrics-tls") pod "openshift-state-metrics-74cc79fd76-bdmlf" (UID: "ed1c4da2-564b-4354-a4ec-27b801098aa5") : failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.287545 master-0 kubenswrapper[31456]: E0312 21:09:06.287539 31456 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.287726 master-0 kubenswrapper[31456]: E0312 21:09:06.287576 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33beea0b-f77b-4388-a9c8-5710f084f961-configmap-kubelet-serving-ca-bundle podName:33beea0b-f77b-4388-a9c8-5710f084f961 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.787565762 +0000 UTC m=+7.862171110 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/33beea0b-f77b-4388-a9c8-5710f084f961-configmap-kubelet-serving-ca-bundle") pod "metrics-server-5bbfd655db-2tsb8" (UID: "33beea0b-f77b-4388-a9c8-5710f084f961") : failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.287726 master-0 kubenswrapper[31456]: E0312 21:09:06.287604 31456 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.287726 master-0 kubenswrapper[31456]: E0312 21:09:06.287630 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17d2bb40-74e2-4894-a884-7018952bdf71-cert podName:17d2bb40-74e2-4894-a884-7018952bdf71 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.787623043 +0000 UTC m=+7.862228381 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/17d2bb40-74e2-4894-a884-7018952bdf71-cert") pod "cluster-baremetal-operator-5cdb4c5598-fnxjc" (UID: "17d2bb40-74e2-4894-a884-7018952bdf71") : failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.287726 master-0 kubenswrapper[31456]: E0312 21:09:06.287647 31456 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.287726 master-0 kubenswrapper[31456]: E0312 21:09:06.287674 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9152bd6-f203-469b-97fa-db274e43b40c-proxy-tls podName:d9152bd6-f203-469b-97fa-db274e43b40c nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.787667184 +0000 UTC m=+7.862272522 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/d9152bd6-f203-469b-97fa-db274e43b40c-proxy-tls") pod "machine-config-daemon-n5wh9" (UID: "d9152bd6-f203-469b-97fa-db274e43b40c") : failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.287726 master-0 kubenswrapper[31456]: E0312 21:09:06.287687 31456 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.287726 master-0 kubenswrapper[31456]: E0312 21:09:06.287707 31456 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.287726 master-0 kubenswrapper[31456]: E0312 21:09:06.287725 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-secret-metrics-client-certs podName:33beea0b-f77b-4388-a9c8-5710f084f961 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.787711585 +0000 UTC m=+7.862317033 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-secret-metrics-client-certs") pod "metrics-server-5bbfd655db-2tsb8" (UID: "33beea0b-f77b-4388-a9c8-5710f084f961") : failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.288074 master-0 kubenswrapper[31456]: E0312 21:09:06.287734 31456 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.288074 master-0 kubenswrapper[31456]: E0312 21:09:06.287743 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc-trusted-ca-bundle podName:a5d1e064-c12b-4c1d-b499-4e301ca8a8dc nodeName:}" failed. 
No retries permitted until 2026-03-12 21:09:06.787736016 +0000 UTC m=+7.862341354 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc-trusted-ca-bundle") pod "insights-operator-8f89dfddd-lc7jk" (UID: "a5d1e064-c12b-4c1d-b499-4e301ca8a8dc") : failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.288074 master-0 kubenswrapper[31456]: E0312 21:09:06.287761 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d9152bd6-f203-469b-97fa-db274e43b40c-mcd-auth-proxy-config podName:d9152bd6-f203-469b-97fa-db274e43b40c nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.787753296 +0000 UTC m=+7.862358644 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "mcd-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/d9152bd6-f203-469b-97fa-db274e43b40c-mcd-auth-proxy-config") pod "machine-config-daemon-n5wh9" (UID: "d9152bd6-f203-469b-97fa-db274e43b40c") : failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.288074 master-0 kubenswrapper[31456]: E0312 21:09:06.287771 31456 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.288074 master-0 kubenswrapper[31456]: E0312 21:09:06.287788 31456 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.288074 master-0 kubenswrapper[31456]: E0312 21:09:06.287800 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8aa8296-ed9b-4b37-8ab4-791b1342140f-webhook-certs podName:b8aa8296-ed9b-4b37-8ab4-791b1342140f nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.787792877 +0000 UTC m=+7.862398215 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b8aa8296-ed9b-4b37-8ab4-791b1342140f-webhook-certs") pod "multus-admission-controller-7769569c45-tgbjx" (UID: "b8aa8296-ed9b-4b37-8ab4-791b1342140f") : failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.288074 master-0 kubenswrapper[31456]: E0312 21:09:06.287847 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-config podName:b50a6106-1112-4a4b-b4ae-933879e12110 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.787837008 +0000 UTC m=+7.862442456 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-config") pod "controller-manager-759579d7c9-wjl25" (UID: "b50a6106-1112-4a4b-b4ae-933879e12110") : failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.288074 master-0 kubenswrapper[31456]: E0312 21:09:06.287862 31456 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.288074 master-0 kubenswrapper[31456]: E0312 21:09:06.287883 31456 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.288074 master-0 kubenswrapper[31456]: E0312 21:09:06.287894 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17d2bb40-74e2-4894-a884-7018952bdf71-images podName:17d2bb40-74e2-4894-a884-7018952bdf71 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.787886739 +0000 UTC m=+7.862492077 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/17d2bb40-74e2-4894-a884-7018952bdf71-images") pod "cluster-baremetal-operator-5cdb4c5598-fnxjc" (UID: "17d2bb40-74e2-4894-a884-7018952bdf71") : failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.288074 master-0 kubenswrapper[31456]: E0312 21:09:06.287913 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7667a111-e744-47b2-9603-3864347dc738-metrics-client-ca podName:7667a111-e744-47b2-9603-3864347dc738 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.7879044 +0000 UTC m=+7.862509738 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/7667a111-e744-47b2-9603-3864347dc738-metrics-client-ca") pod "node-exporter-lkmd7" (UID: "7667a111-e744-47b2-9603-3864347dc738") : failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.288074 master-0 kubenswrapper[31456]: E0312 21:09:06.287923 31456 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.288074 master-0 kubenswrapper[31456]: E0312 21:09:06.287931 31456 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.288074 master-0 kubenswrapper[31456]: E0312 21:09:06.287951 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-metrics-client-ca podName:4ebc9ee1-3913-4112-bb3f-c79f2c08032b nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.787944821 +0000 UTC m=+7.862550159 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-metrics-client-ca") pod "kube-state-metrics-68b88f8cb5-4tfmr" (UID: "4ebc9ee1-3913-4112-bb3f-c79f2c08032b") : failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.288074 master-0 kubenswrapper[31456]: E0312 21:09:06.287971 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc-serving-cert podName:a5d1e064-c12b-4c1d-b499-4e301ca8a8dc nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.787961922 +0000 UTC m=+7.862567260 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc-serving-cert") pod "insights-operator-8f89dfddd-lc7jk" (UID: "a5d1e064-c12b-4c1d-b499-4e301ca8a8dc") : failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.288622 master-0 kubenswrapper[31456]: E0312 21:09:06.288127 31456 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.288622 master-0 kubenswrapper[31456]: E0312 21:09:06.288160 31456 secret.go:189] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.288622 master-0 kubenswrapper[31456]: E0312 21:09:06.288171 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33beea0b-f77b-4388-a9c8-5710f084f961-metrics-server-audit-profiles podName:33beea0b-f77b-4388-a9c8-5710f084f961 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.788160907 +0000 UTC m=+7.862766245 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/33beea0b-f77b-4388-a9c8-5710f084f961-metrics-server-audit-profiles") pod "metrics-server-5bbfd655db-2tsb8" (UID: "33beea0b-f77b-4388-a9c8-5710f084f961") : failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.288622 master-0 kubenswrapper[31456]: E0312 21:09:06.288200 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5d6705e-e564-4774-94b4-ef11956c67b2-certs podName:a5d6705e-e564-4774-94b4-ef11956c67b2 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.788190488 +0000 UTC m=+7.862795906 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/a5d6705e-e564-4774-94b4-ef11956c67b2-certs") pod "machine-config-server-mz2sr" (UID: "a5d6705e-e564-4774-94b4-ef11956c67b2") : failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.289096 master-0 kubenswrapper[31456]: I0312 21:09:06.289069 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 12 21:09:06.289544 master-0 kubenswrapper[31456]: E0312 21:09:06.289278 31456 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.289544 master-0 kubenswrapper[31456]: E0312 21:09:06.289295 31456 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.289544 master-0 kubenswrapper[31456]: E0312 21:09:06.289337 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/508cb83e-6f25-4235-8c56-b25b762ebcad-images podName:508cb83e-6f25-4235-8c56-b25b762ebcad nodeName:}" failed. 
No retries permitted until 2026-03-12 21:09:06.789320254 +0000 UTC m=+7.863925642 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/508cb83e-6f25-4235-8c56-b25b762ebcad-images") pod "machine-config-operator-fdb5c78b5-7p8w8" (UID: "508cb83e-6f25-4235-8c56-b25b762ebcad") : failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.289544 master-0 kubenswrapper[31456]: E0312 21:09:06.289360 31456 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.289544 master-0 kubenswrapper[31456]: E0312 21:09:06.289366 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99-prometheus-operator-tls podName:ea339fe1-c013-4c4b-90c9-aaaa7eb40d99 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.789353906 +0000 UTC m=+7.863959354 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99-prometheus-operator-tls") pod "prometheus-operator-5ff8674d55-8fpdl" (UID: "ea339fe1-c013-4c4b-90c9-aaaa7eb40d99") : failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:06.289544 master-0 kubenswrapper[31456]: E0312 21:09:06.289421 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f8467055-c9c9-4485-bb60-9a79e8b91268-auth-proxy-config podName:f8467055-c9c9-4485-bb60-9a79e8b91268 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:06.789406157 +0000 UTC m=+7.864011555 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/f8467055-c9c9-4485-bb60-9a79e8b91268-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" (UID: "f8467055-c9c9-4485-bb60-9a79e8b91268") : failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:06.309435 master-0 kubenswrapper[31456]: I0312 21:09:06.309391 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 12 21:09:06.330022 master-0 kubenswrapper[31456]: I0312 21:09:06.329969 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 12 21:09:06.349498 master-0 kubenswrapper[31456]: I0312 21:09:06.349450 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-n68ff" Mar 12 21:09:06.369171 master-0 kubenswrapper[31456]: I0312 21:09:06.369111 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 12 21:09:06.397298 master-0 kubenswrapper[31456]: I0312 21:09:06.390677 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 12 21:09:06.423848 master-0 kubenswrapper[31456]: I0312 21:09:06.423207 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 12 21:09:06.432437 master-0 kubenswrapper[31456]: I0312 21:09:06.432366 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 12 21:09:06.459685 master-0 kubenswrapper[31456]: I0312 21:09:06.459616 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 12 21:09:06.469530 master-0 kubenswrapper[31456]: I0312 21:09:06.469484 31456 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 12 21:09:06.482769 master-0 kubenswrapper[31456]: I0312 21:09:06.482408 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 21:09:06.488926 master-0 kubenswrapper[31456]: I0312 21:09:06.488889 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 12 21:09:06.509288 master-0 kubenswrapper[31456]: I0312 21:09:06.509247 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 12 21:09:06.529471 master-0 kubenswrapper[31456]: I0312 21:09:06.529410 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-62zgv" Mar 12 21:09:06.551620 master-0 kubenswrapper[31456]: I0312 21:09:06.551558 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 12 21:09:06.569470 master-0 kubenswrapper[31456]: I0312 21:09:06.569425 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 12 21:09:06.589643 master-0 kubenswrapper[31456]: I0312 21:09:06.589581 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-w9pdx" Mar 12 21:09:06.609117 master-0 kubenswrapper[31456]: I0312 21:09:06.609053 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 12 21:09:06.629604 master-0 kubenswrapper[31456]: I0312 21:09:06.629548 31456 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-vmm2r" Mar 12 21:09:06.650019 master-0 kubenswrapper[31456]: I0312 21:09:06.649960 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-pvnjq" Mar 12 21:09:06.669654 master-0 kubenswrapper[31456]: I0312 21:09:06.669585 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 12 21:09:06.689004 master-0 kubenswrapper[31456]: I0312 21:09:06.688878 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-bxh97" Mar 12 21:09:06.709202 master-0 kubenswrapper[31456]: I0312 21:09:06.709125 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 12 21:09:06.730106 master-0 kubenswrapper[31456]: I0312 21:09:06.730038 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-9n54f" Mar 12 21:09:06.749161 master-0 kubenswrapper[31456]: I0312 21:09:06.749101 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 12 21:09:06.769241 master-0 kubenswrapper[31456]: I0312 21:09:06.769196 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 12 21:09:06.789001 master-0 kubenswrapper[31456]: I0312 21:09:06.788967 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 12 21:09:06.809413 master-0 kubenswrapper[31456]: I0312 21:09:06.809357 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 12 21:09:06.836927 master-0 kubenswrapper[31456]: I0312 
21:09:06.836869 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 12 21:09:06.840870 master-0 kubenswrapper[31456]: I0312 21:09:06.840803 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a5d6705e-e564-4774-94b4-ef11956c67b2-node-bootstrap-token\") pod \"machine-config-server-mz2sr\" (UID: \"a5d6705e-e564-4774-94b4-ef11956c67b2\") " pod="openshift-machine-config-operator/machine-config-server-mz2sr" Mar 12 21:09:06.841155 master-0 kubenswrapper[31456]: I0312 21:09:06.841130 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f8467055-c9c9-4485-bb60-9a79e8b91268-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-btpxl\" (UID: \"f8467055-c9c9-4485-bb60-9a79e8b91268\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" Mar 12 21:09:06.841321 master-0 kubenswrapper[31456]: I0312 21:09:06.841295 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b71376ea-e248-48fc-b2c4-1de7236ddd31-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-r6rcq\" (UID: \"b71376ea-e248-48fc-b2c4-1de7236ddd31\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-r6rcq" Mar 12 21:09:06.841504 master-0 kubenswrapper[31456]: I0312 21:09:06.841479 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/cf33c432-db42-4c6d-8ee4-f089e5bf8203-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-zgjqw\" (UID: \"cf33c432-db42-4c6d-8ee4-f089e5bf8203\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 21:09:06.841682 master-0 kubenswrapper[31456]: 
I0312 21:09:06.841637 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ed1c4da2-564b-4354-a4ec-27b801098aa5-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-bdmlf\" (UID: \"ed1c4da2-564b-4354-a4ec-27b801098aa5\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-bdmlf" Mar 12 21:09:06.841966 master-0 kubenswrapper[31456]: I0312 21:09:06.841645 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b71376ea-e248-48fc-b2c4-1de7236ddd31-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-r6rcq\" (UID: \"b71376ea-e248-48fc-b2c4-1de7236ddd31\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-r6rcq" Mar 12 21:09:06.842163 master-0 kubenswrapper[31456]: I0312 21:09:06.842112 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f3afe47-c537-420c-b5be-1cad612e119d-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-ftxzs\" (UID: \"7f3afe47-c537-420c-b5be-1cad612e119d\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-ftxzs" Mar 12 21:09:06.842404 master-0 kubenswrapper[31456]: I0312 21:09:06.842363 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-4tfmr\" (UID: \"4ebc9ee1-3913-4112-bb3f-c79f2c08032b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr" Mar 12 21:09:06.842598 master-0 kubenswrapper[31456]: I0312 21:09:06.842543 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-4tfmr\" (UID: \"4ebc9ee1-3913-4112-bb3f-c79f2c08032b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr" Mar 12 21:09:06.842870 master-0 kubenswrapper[31456]: I0312 21:09:06.842756 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/32050f14-1939-41bf-a824-22016b90c189-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-wjpf9\" (UID: \"32050f14-1939-41bf-a824-22016b90c189\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wjpf9" Mar 12 21:09:06.843163 master-0 kubenswrapper[31456]: I0312 21:09:06.843121 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17d2bb40-74e2-4894-a884-7018952bdf71-config\") pod \"cluster-baremetal-operator-5cdb4c5598-fnxjc\" (UID: \"17d2bb40-74e2-4894-a884-7018952bdf71\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc" Mar 12 21:09:06.843345 master-0 kubenswrapper[31456]: I0312 21:09:06.843312 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-8fpdl\" (UID: \"ea339fe1-c013-4c4b-90c9-aaaa7eb40d99\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-8fpdl" Mar 12 21:09:06.843543 master-0 kubenswrapper[31456]: I0312 21:09:06.843514 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-client-ca-bundle\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " 
pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 21:09:06.843732 master-0 kubenswrapper[31456]: I0312 21:09:06.843216 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/cf33c432-db42-4c6d-8ee4-f089e5bf8203-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-zgjqw\" (UID: \"cf33c432-db42-4c6d-8ee4-f089e5bf8203\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw" Mar 12 21:09:06.843838 master-0 kubenswrapper[31456]: I0312 21:09:06.843702 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67e68ff0-f54d-4973-bbe7-ed43ce542bc0-config\") pod \"machine-api-operator-84bf6db4f9-sh67s\" (UID: \"67e68ff0-f54d-4973-bbe7-ed43ce542bc0\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-sh67s" Mar 12 21:09:06.843838 master-0 kubenswrapper[31456]: I0312 21:09:06.843789 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-secret-metrics-server-tls\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 21:09:06.843974 master-0 kubenswrapper[31456]: I0312 21:09:06.843850 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d850d441-7505-4e81-b4cf-6e7a9911ae35-serving-cert\") pod \"route-controller-manager-8467b998d8-l9fvg\" (UID: \"d850d441-7505-4e81-b4cf-6e7a9911ae35\") " pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg" Mar 12 21:09:06.843974 master-0 kubenswrapper[31456]: I0312 21:09:06.843872 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/67e68ff0-f54d-4973-bbe7-ed43ce542bc0-images\") pod \"machine-api-operator-84bf6db4f9-sh67s\" (UID: \"67e68ff0-f54d-4973-bbe7-ed43ce542bc0\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-sh67s" Mar 12 21:09:06.843974 master-0 kubenswrapper[31456]: I0312 21:09:06.843900 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/90f0e4da-71d4-4c4e-a2fc-9ef588daaf72-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-c7jz8\" (UID: \"90f0e4da-71d4-4c4e-a2fc-9ef588daaf72\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-c7jz8" Mar 12 21:09:06.843974 master-0 kubenswrapper[31456]: I0312 21:09:06.843928 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/508cb83e-6f25-4235-8c56-b25b762ebcad-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-7p8w8\" (UID: \"508cb83e-6f25-4235-8c56-b25b762ebcad\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-7p8w8" Mar 12 21:09:06.844205 master-0 kubenswrapper[31456]: I0312 21:09:06.844033 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/67e68ff0-f54d-4973-bbe7-ed43ce542bc0-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-sh67s\" (UID: \"67e68ff0-f54d-4973-bbe7-ed43ce542bc0\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-sh67s" Mar 12 21:09:06.844205 master-0 kubenswrapper[31456]: I0312 21:09:06.844089 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d850d441-7505-4e81-b4cf-6e7a9911ae35-client-ca\") pod \"route-controller-manager-8467b998d8-l9fvg\" (UID: \"d850d441-7505-4e81-b4cf-6e7a9911ae35\") " 
pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg" Mar 12 21:09:06.844205 master-0 kubenswrapper[31456]: I0312 21:09:06.844122 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/90f0e4da-71d4-4c4e-a2fc-9ef588daaf72-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-c7jz8\" (UID: \"90f0e4da-71d4-4c4e-a2fc-9ef588daaf72\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-c7jz8" Mar 12 21:09:06.844391 master-0 kubenswrapper[31456]: I0312 21:09:06.844353 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-8fpdl\" (UID: \"ea339fe1-c013-4c4b-90c9-aaaa7eb40d99\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-8fpdl" Mar 12 21:09:06.844555 master-0 kubenswrapper[31456]: I0312 21:09:06.844516 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/400a13b5-c489-4beb-af33-94e635b86148-config\") pod \"machine-approver-754bdc9f9d-hj9bb\" (UID: \"400a13b5-c489-4beb-af33-94e635b86148\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-hj9bb" Mar 12 21:09:06.844555 master-0 kubenswrapper[31456]: I0312 21:09:06.844379 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/67e68ff0-f54d-4973-bbe7-ed43ce542bc0-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-sh67s\" (UID: \"67e68ff0-f54d-4973-bbe7-ed43ce542bc0\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-sh67s" Mar 12 21:09:06.844713 master-0 kubenswrapper[31456]: I0312 21:09:06.844613 31456 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/400a13b5-c489-4beb-af33-94e635b86148-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-hj9bb\" (UID: \"400a13b5-c489-4beb-af33-94e635b86148\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-hj9bb"
Mar 12 21:09:06.844713 master-0 kubenswrapper[31456]: I0312 21:09:06.844677 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-proxy-ca-bundles\") pod \"controller-manager-759579d7c9-wjl25\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25"
Mar 12 21:09:06.844891 master-0 kubenswrapper[31456]: I0312 21:09:06.844718 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/400a13b5-c489-4beb-af33-94e635b86148-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-hj9bb\" (UID: \"400a13b5-c489-4beb-af33-94e635b86148\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-hj9bb"
Mar 12 21:09:06.844891 master-0 kubenswrapper[31456]: I0312 21:09:06.844707 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/508cb83e-6f25-4235-8c56-b25b762ebcad-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-7p8w8\" (UID: \"508cb83e-6f25-4235-8c56-b25b762ebcad\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-7p8w8"
Mar 12 21:09:06.850098 master-0 kubenswrapper[31456]: I0312 21:09:06.850033 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/f8467055-c9c9-4485-bb60-9a79e8b91268-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-btpxl\" (UID: \"f8467055-c9c9-4485-bb60-9a79e8b91268\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl"
Mar 12 21:09:06.850300 master-0 kubenswrapper[31456]: I0312 21:09:06.850261 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/90f0e4da-71d4-4c4e-a2fc-9ef588daaf72-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-c7jz8\" (UID: \"90f0e4da-71d4-4c4e-a2fc-9ef588daaf72\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-c7jz8"
Mar 12 21:09:06.850383 master-0 kubenswrapper[31456]: I0312 21:09:06.850328 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a539e1c7-3799-4d43-8f2f-d5e5c0ffd918-cert\") pod \"ingress-canary-67vs7\" (UID: \"a539e1c7-3799-4d43-8f2f-d5e5c0ffd918\") " pod="openshift-ingress-canary/ingress-canary-67vs7"
Mar 12 21:09:06.850453 master-0 kubenswrapper[31456]: I0312 21:09:06.850379 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7667a111-e744-47b2-9603-3864347dc738-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7"
Mar 12 21:09:06.850453 master-0 kubenswrapper[31456]: I0312 21:09:06.850438 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-4tfmr\" (UID: \"4ebc9ee1-3913-4112-bb3f-c79f2c08032b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr"
Mar 12 21:09:06.850562 master-0 kubenswrapper[31456]: I0312 21:09:06.850534 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33beea0b-f77b-4388-a9c8-5710f084f961-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8"
Mar 12 21:09:06.850621 master-0 kubenswrapper[31456]: I0312 21:09:06.850597 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d9152bd6-f203-469b-97fa-db274e43b40c-mcd-auth-proxy-config\") pod \"machine-config-daemon-n5wh9\" (UID: \"d9152bd6-f203-469b-97fa-db274e43b40c\") " pod="openshift-machine-config-operator/machine-config-daemon-n5wh9"
Mar 12 21:09:06.850685 master-0 kubenswrapper[31456]: I0312 21:09:06.850652 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b8aa8296-ed9b-4b37-8ab4-791b1342140f-webhook-certs\") pod \"multus-admission-controller-7769569c45-tgbjx\" (UID: \"b8aa8296-ed9b-4b37-8ab4-791b1342140f\") " pod="openshift-multus/multus-admission-controller-7769569c45-tgbjx"
Mar 12 21:09:06.850788 master-0 kubenswrapper[31456]: I0312 21:09:06.850755 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/7667a111-e744-47b2-9603-3864347dc738-node-exporter-tls\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7"
Mar 12 21:09:06.850892 master-0 kubenswrapper[31456]: I0312 21:09:06.850845 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05fd1378-3935-4caf-96c5-17cf7e29417f-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-j79ht\" (UID: \"05fd1378-3935-4caf-96c5-17cf7e29417f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-j79ht"
Mar 12 21:09:06.850985 master-0 kubenswrapper[31456]: I0312 21:09:06.850954 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-config\") pod \"controller-manager-759579d7c9-wjl25\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25"
Mar 12 21:09:06.851115 master-0 kubenswrapper[31456]: I0312 21:09:06.851084 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc-serving-cert\") pod \"insights-operator-8f89dfddd-lc7jk\" (UID: \"a5d1e064-c12b-4c1d-b499-4e301ca8a8dc\") " pod="openshift-insights/insights-operator-8f89dfddd-lc7jk"
Mar 12 21:09:06.851180 master-0 kubenswrapper[31456]: I0312 21:09:06.851147 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/33beea0b-f77b-4388-a9c8-5710f084f961-metrics-server-audit-profiles\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8"
Mar 12 21:09:06.851241 master-0 kubenswrapper[31456]: I0312 21:09:06.851202 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-8fpdl\" (UID: \"ea339fe1-c013-4c4b-90c9-aaaa7eb40d99\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-8fpdl"
Mar 12 21:09:06.851430 master-0 kubenswrapper[31456]: I0312 21:09:06.851399 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7667a111-e744-47b2-9603-3864347dc738-metrics-client-ca\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7"
Mar 12 21:09:06.851493 master-0 kubenswrapper[31456]: I0312 21:09:06.851467 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ed1c4da2-564b-4354-a4ec-27b801098aa5-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-bdmlf\" (UID: \"ed1c4da2-564b-4354-a4ec-27b801098aa5\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-bdmlf"
Mar 12 21:09:06.851557 master-0 kubenswrapper[31456]: I0312 21:09:06.851524 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-secret-metrics-client-certs\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8"
Mar 12 21:09:06.851616 master-0 kubenswrapper[31456]: I0312 21:09:06.851572 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/17d2bb40-74e2-4894-a884-7018952bdf71-images\") pod \"cluster-baremetal-operator-5cdb4c5598-fnxjc\" (UID: \"17d2bb40-74e2-4894-a884-7018952bdf71\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc"
Mar 12 21:09:06.851673 master-0 kubenswrapper[31456]: I0312 21:09:06.851631 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a5d6705e-e564-4774-94b4-ef11956c67b2-certs\") pod \"machine-config-server-mz2sr\" (UID: \"a5d6705e-e564-4774-94b4-ef11956c67b2\") " pod="openshift-machine-config-operator/machine-config-server-mz2sr"
Mar 12 21:09:06.851748 master-0 kubenswrapper[31456]: I0312 21:09:06.851717 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-lc7jk\" (UID: \"a5d1e064-c12b-4c1d-b499-4e301ca8a8dc\") " pod="openshift-insights/insights-operator-8f89dfddd-lc7jk"
Mar 12 21:09:06.851917 master-0 kubenswrapper[31456]: I0312 21:09:06.851885 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f8467055-c9c9-4485-bb60-9a79e8b91268-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-btpxl\" (UID: \"f8467055-c9c9-4485-bb60-9a79e8b91268\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl"
Mar 12 21:09:06.851993 master-0 kubenswrapper[31456]: I0312 21:09:06.851950 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/508cb83e-6f25-4235-8c56-b25b762ebcad-images\") pod \"machine-config-operator-fdb5c78b5-7p8w8\" (UID: \"508cb83e-6f25-4235-8c56-b25b762ebcad\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-7p8w8"
Mar 12 21:09:06.852095 master-0 kubenswrapper[31456]: I0312 21:09:06.852061 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d9152bd6-f203-469b-97fa-db274e43b40c-proxy-tls\") pod \"machine-config-daemon-n5wh9\" (UID: \"d9152bd6-f203-469b-97fa-db274e43b40c\") " pod="openshift-machine-config-operator/machine-config-daemon-n5wh9"
Mar 12 21:09:06.852163 master-0 kubenswrapper[31456]: I0312 21:09:06.852123 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/17d2bb40-74e2-4894-a884-7018952bdf71-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-fnxjc\" (UID: \"17d2bb40-74e2-4894-a884-7018952bdf71\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc"
Mar 12 21:09:06.852220 master-0 kubenswrapper[31456]: I0312 21:09:06.852184 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-4tfmr\" (UID: \"4ebc9ee1-3913-4112-bb3f-c79f2c08032b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr"
Mar 12 21:09:06.852477 master-0 kubenswrapper[31456]: I0312 21:09:06.852443 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-client-ca\") pod \"controller-manager-759579d7c9-wjl25\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25"
Mar 12 21:09:06.852599 master-0 kubenswrapper[31456]: I0312 21:09:06.852563 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/90f16d8c-25b6-4827-85d9-0995e4e1ab15-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-dfmtk\" (UID: \"90f16d8c-25b6-4827-85d9-0995e4e1ab15\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmtk"
Mar 12 21:09:06.852678 master-0 kubenswrapper[31456]: I0312 21:09:06.852622 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b50a6106-1112-4a4b-b4ae-933879e12110-serving-cert\") pod \"controller-manager-759579d7c9-wjl25\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25"
Mar 12 21:09:06.853886 master-0 kubenswrapper[31456]: I0312 21:09:06.853843 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/05fd1378-3935-4caf-96c5-17cf7e29417f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-j79ht\" (UID: \"05fd1378-3935-4caf-96c5-17cf7e29417f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-j79ht"
Mar 12 21:09:06.853984 master-0 kubenswrapper[31456]: I0312 21:09:06.853967 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/17d2bb40-74e2-4894-a884-7018952bdf71-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-fnxjc\" (UID: \"17d2bb40-74e2-4894-a884-7018952bdf71\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc"
Mar 12 21:09:06.854045 master-0 kubenswrapper[31456]: I0312 21:09:06.854024 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b71376ea-e248-48fc-b2c4-1de7236ddd31-cert\") pod \"cluster-autoscaler-operator-69576476f7-r6rcq\" (UID: \"b71376ea-e248-48fc-b2c4-1de7236ddd31\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-r6rcq"
Mar 12 21:09:06.854105 master-0 kubenswrapper[31456]: I0312 21:09:06.854075 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/508cb83e-6f25-4235-8c56-b25b762ebcad-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-7p8w8\" (UID: \"508cb83e-6f25-4235-8c56-b25b762ebcad\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-7p8w8"
Mar 12 21:09:06.854158 master-0 kubenswrapper[31456]: I0312 21:09:06.854122 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ed1c4da2-564b-4354-a4ec-27b801098aa5-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-bdmlf\" (UID: \"ed1c4da2-564b-4354-a4ec-27b801098aa5\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-bdmlf"
Mar 12 21:09:06.854217 master-0 kubenswrapper[31456]: I0312 21:09:06.854176 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d850d441-7505-4e81-b4cf-6e7a9911ae35-config\") pod \"route-controller-manager-8467b998d8-l9fvg\" (UID: \"d850d441-7505-4e81-b4cf-6e7a9911ae35\") " pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg"
Mar 12 21:09:06.859385 master-0 kubenswrapper[31456]: I0312 21:09:06.859329 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-bk87n"
Mar 12 21:09:06.861036 master-0 kubenswrapper[31456]: I0312 21:09:06.860961 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/17d2bb40-74e2-4894-a884-7018952bdf71-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-fnxjc\" (UID: \"17d2bb40-74e2-4894-a884-7018952bdf71\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc"
Mar 12 21:09:06.861149 master-0 kubenswrapper[31456]: I0312 21:09:06.860965 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/90f16d8c-25b6-4827-85d9-0995e4e1ab15-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-dfmtk\" (UID: \"90f16d8c-25b6-4827-85d9-0995e4e1ab15\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmtk"
Mar 12 21:09:06.861462 master-0 kubenswrapper[31456]: I0312 21:09:06.861430 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/17d2bb40-74e2-4894-a884-7018952bdf71-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-fnxjc\" (UID: \"17d2bb40-74e2-4894-a884-7018952bdf71\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc"
Mar 12 21:09:06.861543 master-0 kubenswrapper[31456]: I0312 21:09:06.861459 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b71376ea-e248-48fc-b2c4-1de7236ddd31-cert\") pod \"cluster-autoscaler-operator-69576476f7-r6rcq\" (UID: \"b71376ea-e248-48fc-b2c4-1de7236ddd31\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-r6rcq"
Mar 12 21:09:06.861543 master-0 kubenswrapper[31456]: I0312 21:09:06.860961 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/508cb83e-6f25-4235-8c56-b25b762ebcad-images\") pod \"machine-config-operator-fdb5c78b5-7p8w8\" (UID: \"508cb83e-6f25-4235-8c56-b25b762ebcad\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-7p8w8"
Mar 12 21:09:06.861650 master-0 kubenswrapper[31456]: I0312 21:09:06.860964 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d9152bd6-f203-469b-97fa-db274e43b40c-mcd-auth-proxy-config\") pod \"machine-config-daemon-n5wh9\" (UID: \"d9152bd6-f203-469b-97fa-db274e43b40c\") " pod="openshift-machine-config-operator/machine-config-daemon-n5wh9"
Mar 12 21:09:06.861717 master-0 kubenswrapper[31456]: I0312 21:09:06.860954 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/17d2bb40-74e2-4894-a884-7018952bdf71-images\") pod \"cluster-baremetal-operator-5cdb4c5598-fnxjc\" (UID: \"17d2bb40-74e2-4894-a884-7018952bdf71\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc"
Mar 12 21:09:06.861717 master-0 kubenswrapper[31456]: I0312 21:09:06.861669 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67e68ff0-f54d-4973-bbe7-ed43ce542bc0-config\") pod \"machine-api-operator-84bf6db4f9-sh67s\" (UID: \"67e68ff0-f54d-4973-bbe7-ed43ce542bc0\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-sh67s"
Mar 12 21:09:06.861918 master-0 kubenswrapper[31456]: I0312 21:09:06.861874 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/05fd1378-3935-4caf-96c5-17cf7e29417f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-j79ht\" (UID: \"05fd1378-3935-4caf-96c5-17cf7e29417f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-j79ht"
Mar 12 21:09:06.862440 master-0 kubenswrapper[31456]: I0312 21:09:06.862391 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/508cb83e-6f25-4235-8c56-b25b762ebcad-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-7p8w8\" (UID: \"508cb83e-6f25-4235-8c56-b25b762ebcad\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-7p8w8"
Mar 12 21:09:06.862649 master-0 kubenswrapper[31456]: I0312 21:09:06.862389 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc-serving-cert\") pod \"insights-operator-8f89dfddd-lc7jk\" (UID: \"a5d1e064-c12b-4c1d-b499-4e301ca8a8dc\") " pod="openshift-insights/insights-operator-8f89dfddd-lc7jk"
Mar 12 21:09:06.863921 master-0 kubenswrapper[31456]: I0312 21:09:06.863883 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-lc7jk\" (UID: \"a5d1e064-c12b-4c1d-b499-4e301ca8a8dc\") " pod="openshift-insights/insights-operator-8f89dfddd-lc7jk"
Mar 12 21:09:06.864013 master-0 kubenswrapper[31456]: I0312 21:09:06.863954 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05fd1378-3935-4caf-96c5-17cf7e29417f-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-j79ht\" (UID: \"05fd1378-3935-4caf-96c5-17cf7e29417f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-j79ht"
Mar 12 21:09:06.868783 master-0 kubenswrapper[31456]: I0312 21:09:06.868752 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Mar 12 21:09:06.889136 master-0 kubenswrapper[31456]: I0312 21:09:06.889073 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Mar 12 21:09:06.909052 master-0 kubenswrapper[31456]: I0312 21:09:06.908992 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 12 21:09:06.914131 master-0 kubenswrapper[31456]: I0312 21:09:06.914088 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/32050f14-1939-41bf-a824-22016b90c189-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-wjpf9\" (UID: \"32050f14-1939-41bf-a824-22016b90c189\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wjpf9"
Mar 12 21:09:06.929428 master-0 kubenswrapper[31456]: I0312 21:09:06.929400 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-7t6bk"
Mar 12 21:09:06.949319 master-0 kubenswrapper[31456]: I0312 21:09:06.949240 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 12 21:09:06.969262 master-0 kubenswrapper[31456]: I0312 21:09:06.969229 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 12 21:09:06.975245 master-0 kubenswrapper[31456]: I0312 21:09:06.975211 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/67e68ff0-f54d-4973-bbe7-ed43ce542bc0-images\") pod \"machine-api-operator-84bf6db4f9-sh67s\" (UID: \"67e68ff0-f54d-4973-bbe7-ed43ce542bc0\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-sh67s"
Mar 12 21:09:06.989361 master-0 kubenswrapper[31456]: I0312 21:09:06.989284 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Mar 12 21:09:06.993464 master-0 kubenswrapper[31456]: I0312 21:09:06.993414 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f3afe47-c537-420c-b5be-1cad612e119d-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-ftxzs\" (UID: \"7f3afe47-c537-420c-b5be-1cad612e119d\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-ftxzs"
Mar 12 21:09:07.009793 master-0 kubenswrapper[31456]: I0312 21:09:07.009716 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-xjkth"
Mar 12 21:09:07.029849 master-0 kubenswrapper[31456]: I0312 21:09:07.029779 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 12 21:09:07.035746 master-0 kubenswrapper[31456]: I0312 21:09:07.035710 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/400a13b5-c489-4beb-af33-94e635b86148-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-hj9bb\" (UID: \"400a13b5-c489-4beb-af33-94e635b86148\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-hj9bb"
Mar 12 21:09:07.049437 master-0 kubenswrapper[31456]: I0312 21:09:07.049398 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 12 21:09:07.055779 master-0 kubenswrapper[31456]: I0312 21:09:07.055739 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/400a13b5-c489-4beb-af33-94e635b86148-config\") pod \"machine-approver-754bdc9f9d-hj9bb\" (UID: \"400a13b5-c489-4beb-af33-94e635b86148\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-hj9bb"
Mar 12 21:09:07.069532 master-0 kubenswrapper[31456]: I0312 21:09:07.069476 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-5j2qf"
Mar 12 21:09:07.089511 master-0 kubenswrapper[31456]: I0312 21:09:07.089458 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 12 21:09:07.109727 master-0 kubenswrapper[31456]: I0312 21:09:07.109687 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 12 21:09:07.111080 master-0 kubenswrapper[31456]: I0312 21:09:07.111058 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/400a13b5-c489-4beb-af33-94e635b86148-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-hj9bb\" (UID: \"400a13b5-c489-4beb-af33-94e635b86148\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-hj9bb"
Mar 12 21:09:07.129419 master-0 kubenswrapper[31456]: I0312 21:09:07.129401 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 12 21:09:07.149408 master-0 kubenswrapper[31456]: I0312 21:09:07.149348 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 12 21:09:07.170026 master-0 kubenswrapper[31456]: I0312 21:09:07.169963 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Mar 12 21:09:07.179246 master-0 kubenswrapper[31456]: I0312 21:09:07.179176 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f8467055-c9c9-4485-bb60-9a79e8b91268-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-btpxl\" (UID: \"f8467055-c9c9-4485-bb60-9a79e8b91268\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl"
Mar 12 21:09:07.189106 master-0 kubenswrapper[31456]: I0312 21:09:07.189063 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Mar 12 21:09:07.191485 master-0 kubenswrapper[31456]: I0312 21:09:07.191404 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/f8467055-c9c9-4485-bb60-9a79e8b91268-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-btpxl\" (UID: \"f8467055-c9c9-4485-bb60-9a79e8b91268\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl"
Mar 12 21:09:07.208960 master-0 kubenswrapper[31456]: I0312 21:09:07.208878 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Mar 12 21:09:07.211719 master-0 kubenswrapper[31456]: I0312 21:09:07.211699 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f8467055-c9c9-4485-bb60-9a79e8b91268-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-btpxl\" (UID: \"f8467055-c9c9-4485-bb60-9a79e8b91268\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl"
Mar 12 21:09:07.227875 master-0 kubenswrapper[31456]: I0312 21:09:07.227848 31456 request.go:700] Waited for 2.010354213s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0
Mar 12 21:09:07.230039 master-0 kubenswrapper[31456]: I0312 21:09:07.230000 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Mar 12 21:09:07.249473 master-0 kubenswrapper[31456]: I0312 21:09:07.249407 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Mar 12 21:09:07.252910 master-0 kubenswrapper[31456]: I0312 21:09:07.252858 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d9152bd6-f203-469b-97fa-db274e43b40c-proxy-tls\") pod \"machine-config-daemon-n5wh9\" (UID: \"d9152bd6-f203-469b-97fa-db274e43b40c\") " pod="openshift-machine-config-operator/machine-config-daemon-n5wh9"
Mar 12 21:09:07.270898 master-0 kubenswrapper[31456]: I0312 21:09:07.270862 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-h7jv4"
Mar 12 21:09:07.288823 master-0 kubenswrapper[31456]: I0312 21:09:07.288768 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Mar 12 21:09:07.309996 master-0 kubenswrapper[31456]: I0312 21:09:07.309951 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-r4pnh"
Mar 12 21:09:07.330148 master-0 kubenswrapper[31456]: I0312 21:09:07.330116 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-lrwqt"
Mar 12 21:09:07.350169 master-0 kubenswrapper[31456]: I0312 21:09:07.350142 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Mar 12 21:09:07.355079 master-0 kubenswrapper[31456]: I0312 21:09:07.355035 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/90f0e4da-71d4-4c4e-a2fc-9ef588daaf72-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-c7jz8\" (UID: \"90f0e4da-71d4-4c4e-a2fc-9ef588daaf72\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-c7jz8"
Mar 12 21:09:07.370013 master-0 kubenswrapper[31456]: I0312 21:09:07.369952 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-ct6dn"
Mar 12 21:09:07.390644 master-0 kubenswrapper[31456]: I0312 21:09:07.390600 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Mar 12 21:09:07.392479 master-0 kubenswrapper[31456]: I0312 21:09:07.392443 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ed1c4da2-564b-4354-a4ec-27b801098aa5-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-bdmlf\" (UID: \"ed1c4da2-564b-4354-a4ec-27b801098aa5\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-bdmlf"
Mar 12 21:09:07.393907 master-0 kubenswrapper[31456]: I0312 21:09:07.393854 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-8fpdl\" (UID: \"ea339fe1-c013-4c4b-90c9-aaaa7eb40d99\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-8fpdl"
Mar 12 21:09:07.398532 master-0 kubenswrapper[31456]: I0312 21:09:07.398490 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-4tfmr\" (UID: \"4ebc9ee1-3913-4112-bb3f-c79f2c08032b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr"
Mar 12 21:09:07.398629 master-0 kubenswrapper[31456]: I0312 21:09:07.398600 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7667a111-e744-47b2-9603-3864347dc738-metrics-client-ca\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7"
Mar 12 21:09:07.409947 master-0 kubenswrapper[31456]: I0312 21:09:07.409927 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Mar 12 21:09:07.418948 master-0 kubenswrapper[31456]: I0312 21:09:07.418903 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-8fpdl\" (UID: \"ea339fe1-c013-4c4b-90c9-aaaa7eb40d99\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-8fpdl"
Mar 12 21:09:07.429920 master-0 kubenswrapper[31456]: I0312 21:09:07.429788 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Mar 12 21:09:07.435298 master-0 kubenswrapper[31456]: I0312 21:09:07.435252 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-8fpdl\" (UID: \"ea339fe1-c013-4c4b-90c9-aaaa7eb40d99\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-8fpdl"
Mar 12 21:09:07.449698 master-0 kubenswrapper[31456]: I0312 21:09:07.449670 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Mar 12 21:09:07.452175 master-0 kubenswrapper[31456]: I0312 21:09:07.451983 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a5d6705e-e564-4774-94b4-ef11956c67b2-node-bootstrap-token\") pod \"machine-config-server-mz2sr\" (UID: \"a5d6705e-e564-4774-94b4-ef11956c67b2\") " pod="openshift-machine-config-operator/machine-config-server-mz2sr"
Mar 12 21:09:07.469490 master-0 kubenswrapper[31456]: I0312 21:09:07.469409 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-rgtlp"
Mar 12 21:09:07.489212 master-0 kubenswrapper[31456]: I0312 21:09:07.489175 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Mar 12 21:09:07.499607 master-0 kubenswrapper[31456]: I0312 21:09:07.499553 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a5d6705e-e564-4774-94b4-ef11956c67b2-certs\") pod \"machine-config-server-mz2sr\" (UID: \"a5d6705e-e564-4774-94b4-ef11956c67b2\") " pod="openshift-machine-config-operator/machine-config-server-mz2sr"
Mar 12 21:09:07.508940 master-0 kubenswrapper[31456]: I0312 21:09:07.508901 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-xgssr"
Mar 12 21:09:07.530315 master-0 kubenswrapper[31456]: I0312 21:09:07.530249 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Mar 12 21:09:07.531438 master-0 kubenswrapper[31456]: I0312 21:09:07.531416 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/7667a111-e744-47b2-9603-3864347dc738-node-exporter-tls\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7"
Mar 12 21:09:07.548994 master-0 kubenswrapper[31456]: I0312 21:09:07.548943 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Mar 12 21:09:07.555287 master-0 kubenswrapper[31456]: I0312 21:09:07.555243 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7667a111-e744-47b2-9603-3864347dc738-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7"
Mar 12 21:09:07.569433 master-0 kubenswrapper[31456]: I0312 21:09:07.569399 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Mar 12 21:09:07.578076 master-0 kubenswrapper[31456]: I0312 21:09:07.578027 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-4tfmr\" (UID: \"4ebc9ee1-3913-4112-bb3f-c79f2c08032b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr"
Mar 12 21:09:07.589425 master-0 kubenswrapper[31456]: I0312 21:09:07.589383 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-vr86d"
Mar 12 21:09:07.610916 master-0 kubenswrapper[31456]: I0312 21:09:07.610863 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Mar 12 21:09:07.613072 master-0 kubenswrapper[31456]: I0312 21:09:07.613017 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-4tfmr\" (UID: \"4ebc9ee1-3913-4112-bb3f-c79f2c08032b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr"
Mar 12 21:09:07.629578 master-0 kubenswrapper[31456]: I0312 21:09:07.629526 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Mar 12 21:09:07.634559 master-0 kubenswrapper[31456]: I0312 21:09:07.634485 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume
\"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-4tfmr\" (UID: \"4ebc9ee1-3913-4112-bb3f-c79f2c08032b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr" Mar 12 21:09:07.680912 master-0 kubenswrapper[31456]: I0312 21:09:07.680860 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 12 21:09:07.689607 master-0 kubenswrapper[31456]: I0312 21:09:07.689528 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ed1c4da2-564b-4354-a4ec-27b801098aa5-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-bdmlf\" (UID: \"ed1c4da2-564b-4354-a4ec-27b801098aa5\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-bdmlf" Mar 12 21:09:07.690351 master-0 kubenswrapper[31456]: I0312 21:09:07.690296 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-mc5vw" Mar 12 21:09:07.709722 master-0 kubenswrapper[31456]: I0312 21:09:07.709658 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 12 21:09:07.719032 master-0 kubenswrapper[31456]: I0312 21:09:07.718983 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ed1c4da2-564b-4354-a4ec-27b801098aa5-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-bdmlf\" (UID: \"ed1c4da2-564b-4354-a4ec-27b801098aa5\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-bdmlf" Mar 12 21:09:07.730339 master-0 kubenswrapper[31456]: I0312 21:09:07.730211 31456 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 12 21:09:07.750296 master-0 kubenswrapper[31456]: I0312 21:09:07.750221 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 12 21:09:07.769908 master-0 kubenswrapper[31456]: I0312 21:09:07.769860 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 12 21:09:07.775848 master-0 kubenswrapper[31456]: I0312 21:09:07.775684 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a539e1c7-3799-4d43-8f2f-d5e5c0ffd918-cert\") pod \"ingress-canary-67vs7\" (UID: \"a539e1c7-3799-4d43-8f2f-d5e5c0ffd918\") " pod="openshift-ingress-canary/ingress-canary-67vs7" Mar 12 21:09:07.789528 master-0 kubenswrapper[31456]: I0312 21:09:07.789445 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-zfxcx" Mar 12 21:09:07.809887 master-0 kubenswrapper[31456]: I0312 21:09:07.809769 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-kj7kz" Mar 12 21:09:07.829742 master-0 kubenswrapper[31456]: I0312 21:09:07.829674 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 12 21:09:07.831873 master-0 kubenswrapper[31456]: I0312 21:09:07.831798 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b8aa8296-ed9b-4b37-8ab4-791b1342140f-webhook-certs\") pod \"multus-admission-controller-7769569c45-tgbjx\" (UID: \"b8aa8296-ed9b-4b37-8ab4-791b1342140f\") " pod="openshift-multus/multus-admission-controller-7769569c45-tgbjx" Mar 12 21:09:07.844175 master-0 kubenswrapper[31456]: E0312 21:09:07.844121 31456 secret.go:189] Couldn't get secret 
openshift-monitoring/metrics-server-4jamj9cd05on6: failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:07.844329 master-0 kubenswrapper[31456]: E0312 21:09:07.844205 31456 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:07.844329 master-0 kubenswrapper[31456]: E0312 21:09:07.844287 31456 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:07.844329 master-0 kubenswrapper[31456]: E0312 21:09:07.844215 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-client-ca-bundle podName:33beea0b-f77b-4388-a9c8-5710f084f961 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:08.844184252 +0000 UTC m=+9.918789620 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-client-ca-bundle") pod "metrics-server-5bbfd655db-2tsb8" (UID: "33beea0b-f77b-4388-a9c8-5710f084f961") : failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:07.844585 master-0 kubenswrapper[31456]: E0312 21:09:07.844235 31456 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:07.844585 master-0 kubenswrapper[31456]: E0312 21:09:07.844358 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-secret-metrics-server-tls podName:33beea0b-f77b-4388-a9c8-5710f084f961 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:08.844341885 +0000 UTC m=+9.918947253 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-secret-metrics-server-tls") pod "metrics-server-5bbfd655db-2tsb8" (UID: "33beea0b-f77b-4388-a9c8-5710f084f961") : failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:07.844585 master-0 kubenswrapper[31456]: E0312 21:09:07.844372 31456 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:07.844585 master-0 kubenswrapper[31456]: E0312 21:09:07.844517 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17d2bb40-74e2-4894-a884-7018952bdf71-config podName:17d2bb40-74e2-4894-a884-7018952bdf71 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:08.844479369 +0000 UTC m=+9.919084727 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/17d2bb40-74e2-4894-a884-7018952bdf71-config") pod "cluster-baremetal-operator-5cdb4c5598-fnxjc" (UID: "17d2bb40-74e2-4894-a884-7018952bdf71") : failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:07.844585 master-0 kubenswrapper[31456]: E0312 21:09:07.844553 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d850d441-7505-4e81-b4cf-6e7a9911ae35-serving-cert podName:d850d441-7505-4e81-b4cf-6e7a9911ae35 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:08.84454063 +0000 UTC m=+9.919145998 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d850d441-7505-4e81-b4cf-6e7a9911ae35-serving-cert") pod "route-controller-manager-8467b998d8-l9fvg" (UID: "d850d441-7505-4e81-b4cf-6e7a9911ae35") : failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:07.844585 master-0 kubenswrapper[31456]: E0312 21:09:07.844577 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d850d441-7505-4e81-b4cf-6e7a9911ae35-client-ca podName:d850d441-7505-4e81-b4cf-6e7a9911ae35 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:08.844566741 +0000 UTC m=+9.919172099 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d850d441-7505-4e81-b4cf-6e7a9911ae35-client-ca") pod "route-controller-manager-8467b998d8-l9fvg" (UID: "d850d441-7505-4e81-b4cf-6e7a9911ae35") : failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:07.849895 master-0 kubenswrapper[31456]: I0312 21:09:07.849844 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 12 21:09:07.850050 master-0 kubenswrapper[31456]: E0312 21:09:07.849996 31456 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:07.850130 master-0 kubenswrapper[31456]: E0312 21:09:07.850089 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-proxy-ca-bundles podName:b50a6106-1112-4a4b-b4ae-933879e12110 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:08.850062704 +0000 UTC m=+9.924668062 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-proxy-ca-bundles") pod "controller-manager-759579d7c9-wjl25" (UID: "b50a6106-1112-4a4b-b4ae-933879e12110") : failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:07.855058 master-0 kubenswrapper[31456]: E0312 21:09:07.854932 31456 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:07.855210 master-0 kubenswrapper[31456]: E0312 21:09:07.855162 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d850d441-7505-4e81-b4cf-6e7a9911ae35-config podName:d850d441-7505-4e81-b4cf-6e7a9911ae35 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:08.855138387 +0000 UTC m=+9.929743755 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d850d441-7505-4e81-b4cf-6e7a9911ae35-config") pod "route-controller-manager-8467b998d8-l9fvg" (UID: "d850d441-7505-4e81-b4cf-6e7a9911ae35") : failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:07.858099 master-0 kubenswrapper[31456]: E0312 21:09:07.858048 31456 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:07.858286 master-0 kubenswrapper[31456]: E0312 21:09:07.858099 31456 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:07.858286 master-0 kubenswrapper[31456]: E0312 21:09:07.858125 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-client-ca podName:b50a6106-1112-4a4b-b4ae-933879e12110 nodeName:}" failed. 
No retries permitted until 2026-03-12 21:09:08.858104908 +0000 UTC m=+9.932710276 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-client-ca") pod "controller-manager-759579d7c9-wjl25" (UID: "b50a6106-1112-4a4b-b4ae-933879e12110") : failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:07.858286 master-0 kubenswrapper[31456]: E0312 21:09:07.858169 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-config podName:b50a6106-1112-4a4b-b4ae-933879e12110 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:08.858151379 +0000 UTC m=+9.932756737 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-config") pod "controller-manager-759579d7c9-wjl25" (UID: "b50a6106-1112-4a4b-b4ae-933879e12110") : failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:07.858286 master-0 kubenswrapper[31456]: E0312 21:09:07.858198 31456 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:07.858682 master-0 kubenswrapper[31456]: E0312 21:09:07.858324 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33beea0b-f77b-4388-a9c8-5710f084f961-metrics-server-audit-profiles podName:33beea0b-f77b-4388-a9c8-5710f084f961 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:08.858303583 +0000 UTC m=+9.932908951 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/33beea0b-f77b-4388-a9c8-5710f084f961-metrics-server-audit-profiles") pod "metrics-server-5bbfd655db-2tsb8" (UID: "33beea0b-f77b-4388-a9c8-5710f084f961") : failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:07.859401 master-0 kubenswrapper[31456]: E0312 21:09:07.859330 31456 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:07.859488 master-0 kubenswrapper[31456]: E0312 21:09:07.859462 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b50a6106-1112-4a4b-b4ae-933879e12110-serving-cert podName:b50a6106-1112-4a4b-b4ae-933879e12110 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:08.85943014 +0000 UTC m=+9.934035498 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b50a6106-1112-4a4b-b4ae-933879e12110-serving-cert") pod "controller-manager-759579d7c9-wjl25" (UID: "b50a6106-1112-4a4b-b4ae-933879e12110") : failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:07.862211 master-0 kubenswrapper[31456]: E0312 21:09:07.862160 31456 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:07.862368 master-0 kubenswrapper[31456]: E0312 21:09:07.862252 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-secret-metrics-client-certs podName:33beea0b-f77b-4388-a9c8-5710f084f961 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:08.862232838 +0000 UTC m=+9.936838196 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-secret-metrics-client-certs") pod "metrics-server-5bbfd655db-2tsb8" (UID: "33beea0b-f77b-4388-a9c8-5710f084f961") : failed to sync secret cache: timed out waiting for the condition Mar 12 21:09:07.862720 master-0 kubenswrapper[31456]: E0312 21:09:07.862668 31456 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:07.862861 master-0 kubenswrapper[31456]: E0312 21:09:07.862771 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33beea0b-f77b-4388-a9c8-5710f084f961-configmap-kubelet-serving-ca-bundle podName:33beea0b-f77b-4388-a9c8-5710f084f961 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:08.86274515 +0000 UTC m=+9.937350518 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/33beea0b-f77b-4388-a9c8-5710f084f961-configmap-kubelet-serving-ca-bundle") pod "metrics-server-5bbfd655db-2tsb8" (UID: "33beea0b-f77b-4388-a9c8-5710f084f961") : failed to sync configmap cache: timed out waiting for the condition Mar 12 21:09:07.869739 master-0 kubenswrapper[31456]: I0312 21:09:07.869639 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 12 21:09:07.890112 master-0 kubenswrapper[31456]: I0312 21:09:07.890011 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 12 21:09:07.909258 master-0 kubenswrapper[31456]: I0312 21:09:07.909182 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-p5qt4" Mar 12 21:09:07.929763 master-0 kubenswrapper[31456]: I0312 21:09:07.929665 31456 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-4jamj9cd05on6" Mar 12 21:09:07.949837 master-0 kubenswrapper[31456]: I0312 21:09:07.949751 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 21:09:07.970181 master-0 kubenswrapper[31456]: I0312 21:09:07.970106 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-7gthf" Mar 12 21:09:07.990975 master-0 kubenswrapper[31456]: I0312 21:09:07.990845 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 21:09:08.009426 master-0 kubenswrapper[31456]: I0312 21:09:08.009331 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 21:09:08.031009 master-0 kubenswrapper[31456]: I0312 21:09:08.030905 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 21:09:08.049867 master-0 kubenswrapper[31456]: I0312 21:09:08.049770 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 21:09:08.069746 master-0 kubenswrapper[31456]: I0312 21:09:08.069660 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 12 21:09:08.090114 master-0 kubenswrapper[31456]: I0312 21:09:08.090021 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-f29rj" Mar 12 21:09:08.110137 master-0 kubenswrapper[31456]: I0312 21:09:08.110073 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 21:09:08.130129 master-0 
kubenswrapper[31456]: I0312 21:09:08.130040 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 21:09:08.150179 master-0 kubenswrapper[31456]: I0312 21:09:08.150064 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 21:09:08.179911 master-0 kubenswrapper[31456]: I0312 21:09:08.179786 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 21:09:08.190234 master-0 kubenswrapper[31456]: I0312 21:09:08.190170 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 21:09:08.209266 master-0 kubenswrapper[31456]: I0312 21:09:08.209179 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 21:09:08.228587 master-0 kubenswrapper[31456]: I0312 21:09:08.228493 31456 request.go:700] Waited for 3.005521046s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dbaremetal-kube-rbac-proxy&limit=500&resourceVersion=0 Mar 12 21:09:08.230487 master-0 kubenswrapper[31456]: I0312 21:09:08.230422 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 12 21:09:08.264048 master-0 kubenswrapper[31456]: E0312 21:09:08.262067 31456 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.038s" Mar 12 21:09:08.264048 master-0 kubenswrapper[31456]: I0312 21:09:08.262124 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 12 21:09:08.264048 master-0 kubenswrapper[31456]: I0312 21:09:08.262143 31456 
kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="33cdd0bf-9c54-42b1-a5a4-7c5725708df2" Mar 12 21:09:08.264048 master-0 kubenswrapper[31456]: I0312 21:09:08.262173 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:09:08.274470 master-0 kubenswrapper[31456]: I0312 21:09:08.274396 31456 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Mar 12 21:09:08.295981 master-0 kubenswrapper[31456]: I0312 21:09:08.295898 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/784599a3-a2ac-46ac-a4b7-9439704646cc-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-56nzk\" (UID: \"784599a3-a2ac-46ac-a4b7-9439704646cc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-56nzk" Mar 12 21:09:08.313287 master-0 kubenswrapper[31456]: I0312 21:09:08.313233 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kng9\" (UniqueName: \"kubernetes.io/projected/fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6-kube-api-access-2kng9\") pod \"network-operator-7c649bf6d4-62t2f\" (UID: \"fa5ff8e4-1c0f-4f0d-a2c4-1ad7649524c6\") " pod="openshift-network-operator/network-operator-7c649bf6d4-62t2f" Mar 12 21:09:08.335830 master-0 kubenswrapper[31456]: I0312 21:09:08.335738 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rjm8\" (UniqueName: \"kubernetes.io/projected/426efd5c-69e1-43e5-835a-6e1c4ef85720-kube-api-access-8rjm8\") pod \"network-node-identity-48hk7\" (UID: \"426efd5c-69e1-43e5-835a-6e1c4ef85720\") " pod="openshift-network-node-identity/network-node-identity-48hk7" Mar 12 21:09:08.352012 master-0 kubenswrapper[31456]: I0312 21:09:08.351964 
31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmcxd\" (UniqueName: \"kubernetes.io/projected/36bd483b-292e-4e82-99d6-daa612cd385a-kube-api-access-fmcxd\") pod \"apiserver-7946996f87-nzb7c\" (UID: \"36bd483b-292e-4e82-99d6-daa612cd385a\") " pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c" Mar 12 21:09:08.372124 master-0 kubenswrapper[31456]: I0312 21:09:08.372051 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkvxh\" (UniqueName: \"kubernetes.io/projected/a5d6705e-e564-4774-94b4-ef11956c67b2-kube-api-access-dkvxh\") pod \"machine-config-server-mz2sr\" (UID: \"a5d6705e-e564-4774-94b4-ef11956c67b2\") " pod="openshift-machine-config-operator/machine-config-server-mz2sr" Mar 12 21:09:08.394088 master-0 kubenswrapper[31456]: I0312 21:09:08.394023 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/96bd86df-2101-47f5-844b-1332261c66f1-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-f2kg4\" (UID: \"96bd86df-2101-47f5-844b-1332261c66f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-f2kg4" Mar 12 21:09:08.414850 master-0 kubenswrapper[31456]: I0312 21:09:08.414771 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2r2r\" (UniqueName: \"kubernetes.io/projected/617f0f9c-50d5-4214-b30f-5110fd4399ec-kube-api-access-f2r2r\") pod \"iptables-alerter-krpjj\" (UID: \"617f0f9c-50d5-4214-b30f-5110fd4399ec\") " pod="openshift-network-operator/iptables-alerter-krpjj" Mar 12 21:09:08.433821 master-0 kubenswrapper[31456]: I0312 21:09:08.433774 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rfn6\" (UniqueName: \"kubernetes.io/projected/90f0e4da-71d4-4c4e-a2fc-9ef588daaf72-kube-api-access-2rfn6\") pod \"machine-config-controller-ff46b7bdf-c7jz8\" (UID: 
\"90f0e4da-71d4-4c4e-a2fc-9ef588daaf72\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-c7jz8" Mar 12 21:09:08.452522 master-0 kubenswrapper[31456]: I0312 21:09:08.452457 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbbc5\" (UniqueName: \"kubernetes.io/projected/15ebfbd8-0782-431a-88a3-83af328498d2-kube-api-access-mbbc5\") pod \"openshift-apiserver-operator-799b6db4d7-jwthf\" (UID: \"15ebfbd8-0782-431a-88a3-83af328498d2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-jwthf" Mar 12 21:09:08.471851 master-0 kubenswrapper[31456]: I0312 21:09:08.471750 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-577p4\" (UniqueName: \"kubernetes.io/projected/a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d-kube-api-access-577p4\") pod \"service-ca-operator-69b6fc6b88-f62j6\" (UID: \"a1a3a3f9-8d60-4b79-9f72-b1defbf4ee4d\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-f62j6" Mar 12 21:09:08.491686 master-0 kubenswrapper[31456]: I0312 21:09:08.491603 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqhhz\" (UniqueName: \"kubernetes.io/projected/70baf3e2-83bb-4156-afb3-30ca8e3d1d9d-kube-api-access-qqhhz\") pod \"apiserver-84fb785f4-kl52q\" (UID: \"70baf3e2-83bb-4156-afb3-30ca8e3d1d9d\") " pod="openshift-apiserver/apiserver-84fb785f4-kl52q" Mar 12 21:09:08.515671 master-0 kubenswrapper[31456]: I0312 21:09:08.515542 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzn6t\" (UniqueName: \"kubernetes.io/projected/567a9a33-1a82-4c48-b541-7e0eaae11f57-kube-api-access-nzn6t\") pod \"community-operators-jblsg\" (UID: \"567a9a33-1a82-4c48-b541-7e0eaae11f57\") " pod="openshift-marketplace/community-operators-jblsg" Mar 12 21:09:08.531954 master-0 kubenswrapper[31456]: I0312 21:09:08.531897 31456 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/900228dd-2d21-4759-87da-b027b0134ad8-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5" Mar 12 21:09:08.551786 master-0 kubenswrapper[31456]: I0312 21:09:08.551634 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfsvw\" (UniqueName: \"kubernetes.io/projected/70e54b24-bf9d-42a8-b012-c7b073c6f6a6-kube-api-access-mfsvw\") pod \"multus-gnmmm\" (UID: \"70e54b24-bf9d-42a8-b012-c7b073c6f6a6\") " pod="openshift-multus/multus-gnmmm" Mar 12 21:09:08.572891 master-0 kubenswrapper[31456]: I0312 21:09:08.572784 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q78vj\" (UniqueName: \"kubernetes.io/projected/7623a5c6-47a9-4b75-bda8-c0a2d7c67272-kube-api-access-q78vj\") pod \"openshift-controller-manager-operator-8565d84698-vp2hs\" (UID: \"7623a5c6-47a9-4b75-bda8-c0a2d7c67272\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-vp2hs" Mar 12 21:09:08.595886 master-0 kubenswrapper[31456]: I0312 21:09:08.595163 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a67ecf3-823d-4948-a5cb-8bd1eb9f259c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-269gt\" (UID: \"4a67ecf3-823d-4948-a5cb-8bd1eb9f259c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-269gt" Mar 12 21:09:08.611841 master-0 kubenswrapper[31456]: I0312 21:09:08.611750 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzwrw\" (UniqueName: \"kubernetes.io/projected/54184647-6e9a-43f7-90b1-5d8815f8b1ab-kube-api-access-kzwrw\") pod \"package-server-manager-854648ff6d-cdcc8\" (UID: 
\"54184647-6e9a-43f7-90b1-5d8815f8b1ab\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8" Mar 12 21:09:08.632373 master-0 kubenswrapper[31456]: I0312 21:09:08.632316 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2bmh\" (UniqueName: \"kubernetes.io/projected/31747c5d-7e29-4a74-b8d5-3d8efa5e900b-kube-api-access-l2bmh\") pod \"dns-default-pp258\" (UID: \"31747c5d-7e29-4a74-b8d5-3d8efa5e900b\") " pod="openshift-dns/dns-default-pp258" Mar 12 21:09:08.651248 master-0 kubenswrapper[31456]: I0312 21:09:08.651132 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9txs\" (UniqueName: \"kubernetes.io/projected/d9152bd6-f203-469b-97fa-db274e43b40c-kube-api-access-q9txs\") pod \"machine-config-daemon-n5wh9\" (UID: \"d9152bd6-f203-469b-97fa-db274e43b40c\") " pod="openshift-machine-config-operator/machine-config-daemon-n5wh9" Mar 12 21:09:08.673393 master-0 kubenswrapper[31456]: I0312 21:09:08.673302 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hvwg\" (UniqueName: \"kubernetes.io/projected/ed1c4da2-564b-4354-a4ec-27b801098aa5-kube-api-access-2hvwg\") pod \"openshift-state-metrics-74cc79fd76-bdmlf\" (UID: \"ed1c4da2-564b-4354-a4ec-27b801098aa5\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-bdmlf" Mar 12 21:09:08.694995 master-0 kubenswrapper[31456]: I0312 21:09:08.694927 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7rrv\" (UniqueName: \"kubernetes.io/projected/5471994f-769e-4124-b7d0-01f5358fc18f-kube-api-access-f7rrv\") pod \"etcd-operator-5884b9cd56-xh6r9\" (UID: \"5471994f-769e-4124-b7d0-01f5358fc18f\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-xh6r9" Mar 12 21:09:08.713886 master-0 kubenswrapper[31456]: I0312 21:09:08.713697 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhhdz\" 
(UniqueName: \"kubernetes.io/projected/8b96dd10-18a0-49f8-b488-63fc2b23da39-kube-api-access-nhhdz\") pod \"operator-controller-controller-manager-6598bfb6c4-hdd4n\" (UID: \"8b96dd10-18a0-49f8-b488-63fc2b23da39\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n" Mar 12 21:09:08.734365 master-0 kubenswrapper[31456]: I0312 21:09:08.734282 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xth7s\" (UniqueName: \"kubernetes.io/projected/a539e1c7-3799-4d43-8f2f-d5e5c0ffd918-kube-api-access-xth7s\") pod \"ingress-canary-67vs7\" (UID: \"a539e1c7-3799-4d43-8f2f-d5e5c0ffd918\") " pod="openshift-ingress-canary/ingress-canary-67vs7" Mar 12 21:09:08.752389 master-0 kubenswrapper[31456]: I0312 21:09:08.752210 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlt7h\" (UniqueName: \"kubernetes.io/projected/52839a08-0871-44d3-9d22-b2f6b4383b99-kube-api-access-hlt7h\") pod \"tuned-btxk2\" (UID: \"52839a08-0871-44d3-9d22-b2f6b4383b99\") " pod="openshift-cluster-node-tuning-operator/tuned-btxk2" Mar 12 21:09:08.773407 master-0 kubenswrapper[31456]: I0312 21:09:08.773258 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9z6l\" (UniqueName: \"kubernetes.io/projected/226cb3a1-984f-4410-96e6-c007131dc074-kube-api-access-b9z6l\") pod \"cluster-olm-operator-77899cf6d-kbwlh\" (UID: \"226cb3a1-984f-4410-96e6-c007131dc074\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-kbwlh" Mar 12 21:09:08.793701 master-0 kubenswrapper[31456]: I0312 21:09:08.793529 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrm2z\" (UniqueName: \"kubernetes.io/projected/17d2bb40-74e2-4894-a884-7018952bdf71-kube-api-access-lrm2z\") pod \"cluster-baremetal-operator-5cdb4c5598-fnxjc\" (UID: \"17d2bb40-74e2-4894-a884-7018952bdf71\") " 
pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc" Mar 12 21:09:08.813236 master-0 kubenswrapper[31456]: I0312 21:09:08.812848 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbnbs\" (UniqueName: \"kubernetes.io/projected/32050f14-1939-41bf-a824-22016b90c189-kube-api-access-pbnbs\") pod \"cluster-samples-operator-664cb58b85-wjpf9\" (UID: \"32050f14-1939-41bf-a824-22016b90c189\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wjpf9" Mar 12 21:09:08.834167 master-0 kubenswrapper[31456]: I0312 21:09:08.833977 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfspc\" (UniqueName: \"kubernetes.io/projected/d4a162d4-8086-4bcf-854d-7e6cd37fd4c7-kube-api-access-mfspc\") pod \"csi-snapshot-controller-7577d6f48-8fk8w\" (UID: \"d4a162d4-8086-4bcf-854d-7e6cd37fd4c7\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-8fk8w" Mar 12 21:09:08.851243 master-0 kubenswrapper[31456]: I0312 21:09:08.851056 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp84p\" (UniqueName: \"kubernetes.io/projected/7667a111-e744-47b2-9603-3864347dc738-kube-api-access-mp84p\") pod \"node-exporter-lkmd7\" (UID: \"7667a111-e744-47b2-9603-3864347dc738\") " pod="openshift-monitoring/node-exporter-lkmd7" Mar 12 21:09:08.872510 master-0 kubenswrapper[31456]: I0312 21:09:08.872419 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg2ph\" (UniqueName: \"kubernetes.io/projected/da40e787-dd75-4f4f-b09e-a8dab590f260-kube-api-access-xg2ph\") pod \"migrator-57ccdf9b5-jd4pv\" (UID: \"da40e787-dd75-4f4f-b09e-a8dab590f260\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-jd4pv" Mar 12 21:09:08.894088 master-0 kubenswrapper[31456]: I0312 21:09:08.893976 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf28c\" 
(UniqueName: \"kubernetes.io/projected/a3828a1d-8180-4c7b-b423-4488f7fc0b76-kube-api-access-lf28c\") pod \"router-default-79f8cd6fdd-hsv57\" (UID: \"a3828a1d-8180-4c7b-b423-4488f7fc0b76\") " pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" Mar 12 21:09:08.915359 master-0 kubenswrapper[31456]: I0312 21:09:08.915249 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm7d5\" (UniqueName: \"kubernetes.io/projected/067fdca7-c61d-470c-8421-73e0b62df3e4-kube-api-access-tm7d5\") pod \"packageserver-659d778978-djtms\" (UID: \"067fdca7-c61d-470c-8421-73e0b62df3e4\") " pod="openshift-operator-lifecycle-manager/packageserver-659d778978-djtms" Mar 12 21:09:08.915502 master-0 kubenswrapper[31456]: I0312 21:09:08.915426 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17d2bb40-74e2-4894-a884-7018952bdf71-config\") pod \"cluster-baremetal-operator-5cdb4c5598-fnxjc\" (UID: \"17d2bb40-74e2-4894-a884-7018952bdf71\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc" Mar 12 21:09:08.915573 master-0 kubenswrapper[31456]: I0312 21:09:08.915497 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-client-ca-bundle\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 21:09:08.915897 master-0 kubenswrapper[31456]: I0312 21:09:08.915850 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17d2bb40-74e2-4894-a884-7018952bdf71-config\") pod \"cluster-baremetal-operator-5cdb4c5598-fnxjc\" (UID: \"17d2bb40-74e2-4894-a884-7018952bdf71\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-fnxjc" Mar 12 21:09:08.915897 master-0 
kubenswrapper[31456]: I0312 21:09:08.915864 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-secret-metrics-server-tls\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 21:09:08.916059 master-0 kubenswrapper[31456]: I0312 21:09:08.915967 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d850d441-7505-4e81-b4cf-6e7a9911ae35-serving-cert\") pod \"route-controller-manager-8467b998d8-l9fvg\" (UID: \"d850d441-7505-4e81-b4cf-6e7a9911ae35\") " pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg" Mar 12 21:09:08.916129 master-0 kubenswrapper[31456]: I0312 21:09:08.916088 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d850d441-7505-4e81-b4cf-6e7a9911ae35-client-ca\") pod \"route-controller-manager-8467b998d8-l9fvg\" (UID: \"d850d441-7505-4e81-b4cf-6e7a9911ae35\") " pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg" Mar 12 21:09:08.916198 master-0 kubenswrapper[31456]: I0312 21:09:08.916089 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-client-ca-bundle\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 21:09:08.916445 master-0 kubenswrapper[31456]: I0312 21:09:08.916235 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-secret-metrics-server-tls\") pod 
\"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 21:09:08.916445 master-0 kubenswrapper[31456]: I0312 21:09:08.916389 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-proxy-ca-bundles\") pod \"controller-manager-759579d7c9-wjl25\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" Mar 12 21:09:08.916577 master-0 kubenswrapper[31456]: I0312 21:09:08.916505 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d850d441-7505-4e81-b4cf-6e7a9911ae35-client-ca\") pod \"route-controller-manager-8467b998d8-l9fvg\" (UID: \"d850d441-7505-4e81-b4cf-6e7a9911ae35\") " pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg" Mar 12 21:09:08.916667 master-0 kubenswrapper[31456]: I0312 21:09:08.916615 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33beea0b-f77b-4388-a9c8-5710f084f961-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 21:09:08.916735 master-0 kubenswrapper[31456]: I0312 21:09:08.916662 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d850d441-7505-4e81-b4cf-6e7a9911ae35-serving-cert\") pod \"route-controller-manager-8467b998d8-l9fvg\" (UID: \"d850d441-7505-4e81-b4cf-6e7a9911ae35\") " pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg" Mar 12 21:09:08.916947 master-0 kubenswrapper[31456]: I0312 21:09:08.916889 
31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-config\") pod \"controller-manager-759579d7c9-wjl25\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" Mar 12 21:09:08.917035 master-0 kubenswrapper[31456]: I0312 21:09:08.916945 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33beea0b-f77b-4388-a9c8-5710f084f961-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 21:09:08.917035 master-0 kubenswrapper[31456]: I0312 21:09:08.916911 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-proxy-ca-bundles\") pod \"controller-manager-759579d7c9-wjl25\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" Mar 12 21:09:08.917196 master-0 kubenswrapper[31456]: I0312 21:09:08.917079 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/33beea0b-f77b-4388-a9c8-5710f084f961-metrics-server-audit-profiles\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 21:09:08.917294 master-0 kubenswrapper[31456]: I0312 21:09:08.917264 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-secret-metrics-client-certs\") pod 
\"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 21:09:08.917458 master-0 kubenswrapper[31456]: I0312 21:09:08.917410 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-config\") pod \"controller-manager-759579d7c9-wjl25\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" Mar 12 21:09:08.917587 master-0 kubenswrapper[31456]: I0312 21:09:08.917545 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-client-ca\") pod \"controller-manager-759579d7c9-wjl25\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" Mar 12 21:09:08.917660 master-0 kubenswrapper[31456]: I0312 21:09:08.917588 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b50a6106-1112-4a4b-b4ae-933879e12110-serving-cert\") pod \"controller-manager-759579d7c9-wjl25\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" Mar 12 21:09:08.917660 master-0 kubenswrapper[31456]: I0312 21:09:08.917637 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d850d441-7505-4e81-b4cf-6e7a9911ae35-config\") pod \"route-controller-manager-8467b998d8-l9fvg\" (UID: \"d850d441-7505-4e81-b4cf-6e7a9911ae35\") " pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg" Mar 12 21:09:08.917794 master-0 kubenswrapper[31456]: I0312 21:09:08.917733 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/33beea0b-f77b-4388-a9c8-5710f084f961-metrics-server-audit-profiles\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 21:09:08.919402 master-0 kubenswrapper[31456]: I0312 21:09:08.917798 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-secret-metrics-client-certs\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 21:09:08.919402 master-0 kubenswrapper[31456]: I0312 21:09:08.917925 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-client-ca\") pod \"controller-manager-759579d7c9-wjl25\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" Mar 12 21:09:08.919402 master-0 kubenswrapper[31456]: I0312 21:09:08.918066 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b50a6106-1112-4a4b-b4ae-933879e12110-serving-cert\") pod \"controller-manager-759579d7c9-wjl25\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" Mar 12 21:09:08.919402 master-0 kubenswrapper[31456]: I0312 21:09:08.918385 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d850d441-7505-4e81-b4cf-6e7a9911ae35-config\") pod \"route-controller-manager-8467b998d8-l9fvg\" (UID: \"d850d441-7505-4e81-b4cf-6e7a9911ae35\") " pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg" Mar 
12 21:09:08.934055 master-0 kubenswrapper[31456]: I0312 21:09:08.933999 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2mk7\" (UniqueName: \"kubernetes.io/projected/d850d441-7505-4e81-b4cf-6e7a9911ae35-kube-api-access-f2mk7\") pod \"route-controller-manager-8467b998d8-l9fvg\" (UID: \"d850d441-7505-4e81-b4cf-6e7a9911ae35\") " pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg" Mar 12 21:09:08.955630 master-0 kubenswrapper[31456]: I0312 21:09:08.955557 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsprq\" (UniqueName: \"kubernetes.io/projected/135ec6f3-fbc0-4840-a4b1-c1124c705161-kube-api-access-wsprq\") pod \"service-ca-84bfdbbb7f-4zjqp\" (UID: \"135ec6f3-fbc0-4840-a4b1-c1124c705161\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-4zjqp" Mar 12 21:09:08.973865 master-0 kubenswrapper[31456]: I0312 21:09:08.973776 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clmjl\" (UniqueName: \"kubernetes.io/projected/33beea0b-f77b-4388-a9c8-5710f084f961-kube-api-access-clmjl\") pod \"metrics-server-5bbfd655db-2tsb8\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 21:09:08.993666 master-0 kubenswrapper[31456]: I0312 21:09:08.993603 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwqbt\" (UniqueName: \"kubernetes.io/projected/cc7b96ab-01af-442a-8eda-fc59e665a367-kube-api-access-vwqbt\") pod \"network-check-source-7c67b67d47-bv4x6\" (UID: \"cc7b96ab-01af-442a-8eda-fc59e665a367\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-bv4x6" Mar 12 21:09:09.012081 master-0 kubenswrapper[31456]: I0312 21:09:09.012020 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wt5q\" (UniqueName: 
\"kubernetes.io/projected/980191fe-c62c-4b9e-879c-38fa8ce0a58b-kube-api-access-2wt5q\") pod \"openshift-config-operator-64488f9d78-zsd76\" (UID: \"980191fe-c62c-4b9e-879c-38fa8ce0a58b\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76" Mar 12 21:09:09.034977 master-0 kubenswrapper[31456]: I0312 21:09:09.034795 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlrzs\" (UniqueName: \"kubernetes.io/projected/b71376ea-e248-48fc-b2c4-1de7236ddd31-kube-api-access-nlrzs\") pod \"cluster-autoscaler-operator-69576476f7-r6rcq\" (UID: \"b71376ea-e248-48fc-b2c4-1de7236ddd31\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-r6rcq" Mar 12 21:09:09.054071 master-0 kubenswrapper[31456]: I0312 21:09:09.054012 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp4mt\" (UniqueName: \"kubernetes.io/projected/f8467055-c9c9-4485-bb60-9a79e8b91268-kube-api-access-gp4mt\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-btpxl\" (UID: \"f8467055-c9c9-4485-bb60-9a79e8b91268\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-btpxl" Mar 12 21:09:09.073368 master-0 kubenswrapper[31456]: I0312 21:09:09.073309 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrk7w\" (UniqueName: \"kubernetes.io/projected/c3daeefa-7842-464c-a6c9-01b44ebea477-kube-api-access-jrk7w\") pod \"ovnkube-node-nhrpd\" (UID: \"c3daeefa-7842-464c-a6c9-01b44ebea477\") " pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:09.094841 master-0 kubenswrapper[31456]: I0312 21:09:09.094737 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhcsd\" (UniqueName: \"kubernetes.io/projected/07330030-487d-4fa6-b5c3-67607355bbba-kube-api-access-bhcsd\") pod \"olm-operator-d64cfc9db-q9hnk\" (UID: \"07330030-487d-4fa6-b5c3-67607355bbba\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk" Mar 12 21:09:09.114415 master-0 kubenswrapper[31456]: I0312 21:09:09.114333 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lltk\" (UniqueName: \"kubernetes.io/projected/981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9-kube-api-access-2lltk\") pod \"cluster-node-tuning-operator-66c7586884-69rp9\" (UID: \"981da73f-fc4b-4c1d-bcc0-bf8aeebab2c9\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-69rp9" Mar 12 21:09:09.134706 master-0 kubenswrapper[31456]: I0312 21:09:09.134641 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vvf6\" (UniqueName: \"kubernetes.io/projected/2b71f537-1cc2-4645-8e50-23941635457c-kube-api-access-8vvf6\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68" Mar 12 21:09:09.153095 master-0 kubenswrapper[31456]: I0312 21:09:09.153022 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bk7q\" (UniqueName: \"kubernetes.io/projected/a2545a80-0f00-4b19-ab3b-a9aa4bff98e8-kube-api-access-7bk7q\") pod \"multus-additional-cni-plugins-trlxw\" (UID: \"a2545a80-0f00-4b19-ab3b-a9aa4bff98e8\") " pod="openshift-multus/multus-additional-cni-plugins-trlxw" Mar 12 21:09:09.172832 master-0 kubenswrapper[31456]: I0312 21:09:09.172742 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gg7v\" (UniqueName: \"kubernetes.io/projected/4ebc9ee1-3913-4112-bb3f-c79f2c08032b-kube-api-access-7gg7v\") pod \"kube-state-metrics-68b88f8cb5-4tfmr\" (UID: \"4ebc9ee1-3913-4112-bb3f-c79f2c08032b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-4tfmr" Mar 12 21:09:09.193264 master-0 kubenswrapper[31456]: I0312 21:09:09.192593 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4l2sm\" (UniqueName: \"kubernetes.io/projected/ea339fe1-c013-4c4b-90c9-aaaa7eb40d99-kube-api-access-4l2sm\") pod \"prometheus-operator-5ff8674d55-8fpdl\" (UID: \"ea339fe1-c013-4c4b-90c9-aaaa7eb40d99\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-8fpdl" Mar 12 21:09:09.210617 master-0 kubenswrapper[31456]: I0312 21:09:09.210552 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5c6t\" (UniqueName: \"kubernetes.io/projected/e624e623-6d59-444d-b548-165fa5fd2581-kube-api-access-c5c6t\") pod \"marketplace-operator-64bf9778cb-hxqgw\" (UID: \"e624e623-6d59-444d-b548-165fa5fd2581\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw" Mar 12 21:09:09.236675 master-0 kubenswrapper[31456]: I0312 21:09:09.236595 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l8qp\" (UniqueName: \"kubernetes.io/projected/d6eace9f-a52d-4570-a932-959538e1f2bc-kube-api-access-8l8qp\") pod \"redhat-marketplace-66qvj\" (UID: \"d6eace9f-a52d-4570-a932-959538e1f2bc\") " pod="openshift-marketplace/redhat-marketplace-66qvj" Mar 12 21:09:09.248314 master-0 kubenswrapper[31456]: I0312 21:09:09.248251 31456 request.go:700] Waited for 3.961014663s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/control-plane-machine-set-operator/token Mar 12 21:09:09.251652 master-0 kubenswrapper[31456]: I0312 21:09:09.251578 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/83368183-0368-44b1-9387-eed32b211988-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-g4bkd\" (UID: \"83368183-0368-44b1-9387-eed32b211988\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-g4bkd" Mar 12 21:09:09.274442 master-0 kubenswrapper[31456]: I0312 21:09:09.274374 31456 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ddw4\" (UniqueName: \"kubernetes.io/projected/e03d34d0-f7c1-4dcf-8b84-89ad647cc10f-kube-api-access-8ddw4\") pod \"control-plane-machine-set-operator-6686554ddc-xzwfp\" (UID: \"e03d34d0-f7c1-4dcf-8b84-89ad647cc10f\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-xzwfp" Mar 12 21:09:09.292092 master-0 kubenswrapper[31456]: I0312 21:09:09.291967 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w68c\" (UniqueName: \"kubernetes.io/projected/a3bebf49-1d92-4353-b84c-91ed86b7bb94-kube-api-access-2w68c\") pod \"authentication-operator-7c6989d6c4-9j7rx\" (UID: \"a3bebf49-1d92-4353-b84c-91ed86b7bb94\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-9j7rx" Mar 12 21:09:09.312053 master-0 kubenswrapper[31456]: I0312 21:09:09.311987 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8745n\" (UniqueName: \"kubernetes.io/projected/7f3afe47-c537-420c-b5be-1cad612e119d-kube-api-access-8745n\") pod \"cluster-storage-operator-6fbfc8dc8f-ftxzs\" (UID: \"7f3afe47-c537-420c-b5be-1cad612e119d\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-ftxzs" Mar 12 21:09:09.332139 master-0 kubenswrapper[31456]: I0312 21:09:09.332062 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clp9l\" (UniqueName: \"kubernetes.io/projected/2604b035-853c-42b7-a562-07d46178868a-kube-api-access-clp9l\") pod \"csi-snapshot-controller-operator-5685fbc7d-kf949\" (UID: \"2604b035-853c-42b7-a562-07d46178868a\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-kf949" Mar 12 21:09:09.352305 master-0 kubenswrapper[31456]: I0312 21:09:09.352220 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt627\" (UniqueName: 
\"kubernetes.io/projected/400a13b5-c489-4beb-af33-94e635b86148-kube-api-access-vt627\") pod \"machine-approver-754bdc9f9d-hj9bb\" (UID: \"400a13b5-c489-4beb-af33-94e635b86148\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-hj9bb" Mar 12 21:09:09.381549 master-0 kubenswrapper[31456]: I0312 21:09:09.381461 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcjsq\" (UniqueName: \"kubernetes.io/projected/b50a6106-1112-4a4b-b4ae-933879e12110-kube-api-access-bcjsq\") pod \"controller-manager-759579d7c9-wjl25\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" Mar 12 21:09:09.401648 master-0 kubenswrapper[31456]: I0312 21:09:09.401560 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpf99\" (UniqueName: \"kubernetes.io/projected/67e68ff0-f54d-4973-bbe7-ed43ce542bc0-kube-api-access-tpf99\") pod \"machine-api-operator-84bf6db4f9-sh67s\" (UID: \"67e68ff0-f54d-4973-bbe7-ed43ce542bc0\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-sh67s" Mar 12 21:09:09.415919 master-0 kubenswrapper[31456]: I0312 21:09:09.415745 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xxkr\" (UniqueName: \"kubernetes.io/projected/05fd1378-3935-4caf-96c5-17cf7e29417f-kube-api-access-8xxkr\") pod \"cloud-credential-operator-55d85b7b47-j79ht\" (UID: \"05fd1378-3935-4caf-96c5-17cf7e29417f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-j79ht" Mar 12 21:09:09.434436 master-0 kubenswrapper[31456]: I0312 21:09:09.434292 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbqfz\" (UniqueName: \"kubernetes.io/projected/4c589179-0df4-4fe8-bfdd-965c3e7652c5-kube-api-access-pbqfz\") pod \"certified-operators-94rll\" (UID: \"4c589179-0df4-4fe8-bfdd-965c3e7652c5\") " 
pod="openshift-marketplace/certified-operators-94rll"
Mar 12 21:09:09.440918 master-0 kubenswrapper[31456]: I0312 21:09:09.440838 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2b71f537-1cc2-4645-8e50-23941635457c-bound-sa-token\") pod \"ingress-operator-677db989d6-qpf68\" (UID: \"2b71f537-1cc2-4645-8e50-23941635457c\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-qpf68"
Mar 12 21:09:09.463939 master-0 kubenswrapper[31456]: I0312 21:09:09.463858 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n555w\" (UniqueName: \"kubernetes.io/projected/a5d1e064-c12b-4c1d-b499-4e301ca8a8dc-kube-api-access-n555w\") pod \"insights-operator-8f89dfddd-lc7jk\" (UID: \"a5d1e064-c12b-4c1d-b499-4e301ca8a8dc\") " pod="openshift-insights/insights-operator-8f89dfddd-lc7jk"
Mar 12 21:09:09.481385 master-0 kubenswrapper[31456]: I0312 21:09:09.481326 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4jzt\" (UniqueName: \"kubernetes.io/projected/508cb83e-6f25-4235-8c56-b25b762ebcad-kube-api-access-s4jzt\") pod \"machine-config-operator-fdb5c78b5-7p8w8\" (UID: \"508cb83e-6f25-4235-8c56-b25b762ebcad\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-7p8w8"
Mar 12 21:09:09.512843 master-0 kubenswrapper[31456]: I0312 21:09:09.512011 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvkp7\" (UniqueName: \"kubernetes.io/projected/900228dd-2d21-4759-87da-b027b0134ad8-kube-api-access-rvkp7\") pod \"cluster-image-registry-operator-86d6d77c7c-hmtz5\" (UID: \"900228dd-2d21-4759-87da-b027b0134ad8\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-hmtz5"
Mar 12 21:09:09.530869 master-0 kubenswrapper[31456]: I0312 21:09:09.530798 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8hp5\" (UniqueName: \"kubernetes.io/projected/cf33c432-db42-4c6d-8ee4-f089e5bf8203-kube-api-access-x8hp5\") pod \"catalogd-controller-manager-7f8b8b6f4c-zgjqw\" (UID: \"cf33c432-db42-4c6d-8ee4-f089e5bf8203\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw"
Mar 12 21:09:09.555798 master-0 kubenswrapper[31456]: I0312 21:09:09.555679 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-258hz\" (UniqueName: \"kubernetes.io/projected/98d99166-c42a-4169-87e8-4209570aec50-kube-api-access-258hz\") pod \"catalog-operator-7d9c49f57b-tpvl4\" (UID: \"98d99166-c42a-4169-87e8-4209570aec50\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4"
Mar 12 21:09:09.571824 master-0 kubenswrapper[31456]: I0312 21:09:09.571750 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j7lq\" (UniqueName: \"kubernetes.io/projected/855747e5-d9b4-4eef-8bc4-425d6a8e95c7-kube-api-access-6j7lq\") pod \"dns-operator-589895fbb7-tvrxp\" (UID: \"855747e5-d9b4-4eef-8bc4-425d6a8e95c7\") " pod="openshift-dns-operator/dns-operator-589895fbb7-tvrxp"
Mar 12 21:09:09.593546 master-0 kubenswrapper[31456]: I0312 21:09:09.593505 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlch7\" (UniqueName: \"kubernetes.io/projected/c8660437-633f-4132-8a61-fe998abb493e-kube-api-access-zlch7\") pod \"network-metrics-daemon-brdcd\" (UID: \"c8660437-633f-4132-8a61-fe998abb493e\") " pod="openshift-multus/network-metrics-daemon-brdcd"
Mar 12 21:09:09.604741 master-0 kubenswrapper[31456]: I0312 21:09:09.604678 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9xld\" (UniqueName: \"kubernetes.io/projected/07542516-49c8-4e20-9b97-798fbff850a5-kube-api-access-z9xld\") pod \"kube-storage-version-migrator-operator-7f65c457f5-qfbrj\" (UID: \"07542516-49c8-4e20-9b97-798fbff850a5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-qfbrj"
Mar 12 21:09:09.634155 master-0 kubenswrapper[31456]: I0312 21:09:09.634115 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx5m2\" (UniqueName: \"kubernetes.io/projected/b7229c42-b6bc-4ea9-946c-71a4117f53e9-kube-api-access-xx5m2\") pod \"redhat-operators-gxjmz\" (UID: \"b7229c42-b6bc-4ea9-946c-71a4117f53e9\") " pod="openshift-marketplace/redhat-operators-gxjmz"
Mar 12 21:09:09.651673 master-0 kubenswrapper[31456]: I0312 21:09:09.651636 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbcts\" (UniqueName: \"kubernetes.io/projected/b8aa8296-ed9b-4b37-8ab4-791b1342140f-kube-api-access-nbcts\") pod \"multus-admission-controller-7769569c45-tgbjx\" (UID: \"b8aa8296-ed9b-4b37-8ab4-791b1342140f\") " pod="openshift-multus/multus-admission-controller-7769569c45-tgbjx"
Mar 12 21:09:09.670603 master-0 kubenswrapper[31456]: I0312 21:09:09.670490 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcmzz\" (UniqueName: \"kubernetes.io/projected/25866ff2-ce38-4bc0-83d9-0d85b8c6b0ce-kube-api-access-vcmzz\") pod \"node-resolver-9t4hh\" (UID: \"25866ff2-ce38-4bc0-83d9-0d85b8c6b0ce\") " pod="openshift-dns/node-resolver-9t4hh"
Mar 12 21:09:09.687971 master-0 kubenswrapper[31456]: I0312 21:09:09.687896 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csxwl\" (UniqueName: \"kubernetes.io/projected/5ad63582-bd60-41a1-9622-ee73ccf8a5e8-kube-api-access-csxwl\") pod \"network-check-target-h26wj\" (UID: \"5ad63582-bd60-41a1-9622-ee73ccf8a5e8\") " pod="openshift-network-diagnostics/network-check-target-h26wj"
Mar 12 21:09:09.713374 master-0 kubenswrapper[31456]: I0312 21:09:09.713319 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5v9f\" (UniqueName: \"kubernetes.io/projected/02649264-040a-41a6-9a41-8bf6416c68ff-kube-api-access-k5v9f\") pod \"cluster-monitoring-operator-674cbfbd9d-j9tpt\" (UID: \"02649264-040a-41a6-9a41-8bf6416c68ff\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-j9tpt"
Mar 12 21:09:09.731616 master-0 kubenswrapper[31456]: I0312 21:09:09.731535 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx64q\" (UniqueName: \"kubernetes.io/projected/d862a346-ec4d-46f6-a3e2-ea8759ea0111-kube-api-access-jx64q\") pod \"ovnkube-control-plane-66b55d57d-vq95t\" (UID: \"d862a346-ec4d-46f6-a3e2-ea8759ea0111\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-vq95t"
Mar 12 21:09:09.752716 master-0 kubenswrapper[31456]: E0312 21:09:09.752680 31456 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 12 21:09:09.752895 master-0 kubenswrapper[31456]: E0312 21:09:09.752876 31456 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 12 21:09:09.753057 master-0 kubenswrapper[31456]: E0312 21:09:09.753040 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access podName:222b53b1-7e5c-49d5-9795-fec4d0547398 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:10.253016712 +0000 UTC m=+11.327622050 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access") pod "installer-3-master-0" (UID: "222b53b1-7e5c-49d5-9795-fec4d0547398") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 12 21:09:09.765557 master-0 kubenswrapper[31456]: E0312 21:09:09.765262 31456 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0"
Mar 12 21:09:09.805400 master-0 kubenswrapper[31456]: E0312 21:09:09.805327 31456 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 21:09:09.805757 master-0 kubenswrapper[31456]: I0312 21:09:09.805730 31456 scope.go:117] "RemoveContainer" containerID="1867cbd1eea641a204f5d8db13d19bc48d06f54cf7a7cbc0d8d91fbb925b3a69"
Mar 12 21:09:09.838823 master-0 kubenswrapper[31456]: E0312 21:09:09.837236 31456 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.575s"
Mar 12 21:09:09.843892 master-0 kubenswrapper[31456]: I0312 21:09:09.843859 31456 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID=""
Mar 12 21:09:09.856669 master-0 kubenswrapper[31456]: I0312 21:09:09.856638 31456 kubelet_node_status.go:115] "Node was previously registered" node="master-0"
Mar 12 21:09:09.856841 master-0 kubenswrapper[31456]: I0312 21:09:09.856703 31456 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Mar 12 21:09:09.889857 master-0 kubenswrapper[31456]: I0312 21:09:09.889780 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 21:09:09.890068 master-0 kubenswrapper[31456]: I0312 21:09:09.889873 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 21:09:09.890068 master-0 kubenswrapper[31456]: I0312 21:09:09.889891 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerDied","Data":"1867cbd1eea641a204f5d8db13d19bc48d06f54cf7a7cbc0d8d91fbb925b3a69"}
Mar 12 21:09:09.890068 master-0 kubenswrapper[31456]: I0312 21:09:09.890018 31456 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 21:09:09.890068 master-0 kubenswrapper[31456]: I0312 21:09:09.890037 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmtk"
Mar 12 21:09:09.890068 master-0 kubenswrapper[31456]: I0312 21:09:09.890070 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Mar 12 21:09:09.890285 master-0 kubenswrapper[31456]: I0312 21:09:09.890085 31456 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="33cdd0bf-9c54-42b1-a5a4-7c5725708df2"
Mar 12 21:09:09.890563 master-0 kubenswrapper[31456]: I0312 21:09:09.890518 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmtk"
Mar 12 21:09:09.890619 master-0 kubenswrapper[31456]: I0312 21:09:09.890607 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-84fb785f4-kl52q"
Mar 12 21:09:09.890663 master-0 kubenswrapper[31456]: I0312 21:09:09.890643 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c"
Mar 12 21:09:09.890710 master-0 kubenswrapper[31456]: I0312 21:09:09.890671 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-84fb785f4-kl52q"
Mar 12 21:09:09.890710 master-0 kubenswrapper[31456]: I0312 21:09:09.890690 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c"
Mar 12 21:09:09.891653 master-0 kubenswrapper[31456]: I0312 21:09:09.891621 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-pp258"
Mar 12 21:09:09.891803 master-0 kubenswrapper[31456]: I0312 21:09:09.891781 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-84fb785f4-kl52q"
Mar 12 21:09:09.891883 master-0 kubenswrapper[31456]: I0312 21:09:09.891857 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-pp258"
Mar 12 21:09:09.892012 master-0 kubenswrapper[31456]: I0312 21:09:09.891903 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76"
Mar 12 21:09:09.892098 master-0 kubenswrapper[31456]: I0312 21:09:09.892078 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c"
Mar 12 21:09:09.892162 master-0 kubenswrapper[31456]: I0312 21:09:09.892151 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57"
Mar 12 21:09:09.892236 master-0 kubenswrapper[31456]: I0312 21:09:09.892174 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-zsd76"
Mar 12 21:09:09.892601 master-0 kubenswrapper[31456]: I0312 21:09:09.892442 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg"
Mar 12 21:09:09.893292 master-0 kubenswrapper[31456]: I0312 21:09:09.893268 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 21:09:09.893926 master-0 kubenswrapper[31456]: I0312 21:09:09.893896 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-66qvj"
Mar 12 21:09:09.894005 master-0 kubenswrapper[31456]: I0312 21:09:09.893944 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg"
Mar 12 21:09:09.894049 master-0 kubenswrapper[31456]: I0312 21:09:09.894008 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-94rll"
Mar 12 21:09:09.894049 master-0 kubenswrapper[31456]: I0312 21:09:09.894036 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw"
Mar 12 21:09:09.894132 master-0 kubenswrapper[31456]: I0312 21:09:09.894088 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gxjmz"
Mar 12 21:09:09.894132 master-0 kubenswrapper[31456]: I0312 21:09:09.894109 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk"
Mar 12 21:09:09.894132 master-0 kubenswrapper[31456]: I0312 21:09:09.894127 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-hxqgw"
Mar 12 21:09:09.894254 master-0 kubenswrapper[31456]: I0312 21:09:09.894140 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-q9hnk"
Mar 12 21:09:09.894254 master-0 kubenswrapper[31456]: I0312 21:09:09.894164 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-66qvj"
Mar 12 21:09:09.894332 master-0 kubenswrapper[31456]: I0312 21:09:09.894271 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25"
Mar 12 21:09:09.894727 master-0 kubenswrapper[31456]: I0312 21:09:09.894703 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25"
Mar 12 21:09:09.894798 master-0 kubenswrapper[31456]: I0312 21:09:09.894743 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-66qvj"
Mar 12 21:09:09.894798 master-0 kubenswrapper[31456]: I0312 21:09:09.894769 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4"
Mar 12 21:09:09.894798 master-0 kubenswrapper[31456]: I0312 21:09:09.894796 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-h26wj"
Mar 12 21:09:09.894931 master-0 kubenswrapper[31456]: I0312 21:09:09.894835 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw"
Mar 12 21:09:09.894931 master-0 kubenswrapper[31456]: I0312 21:09:09.894857 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-94rll"
Mar 12 21:09:09.894931 master-0 kubenswrapper[31456]: I0312 21:09:09.894878 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gxjmz"
Mar 12 21:09:09.894931 master-0 kubenswrapper[31456]: I0312 21:09:09.894893 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-h26wj"
Mar 12 21:09:09.894931 master-0 kubenswrapper[31456]: I0312 21:09:09.894908 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-zgjqw"
Mar 12 21:09:09.894931 master-0 kubenswrapper[31456]: I0312 21:09:09.894923 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-tpvl4"
Mar 12 21:09:09.941852 master-0 kubenswrapper[31456]: I0312 21:09:09.941774 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-94rll"
Mar 12 21:09:09.942213 master-0 kubenswrapper[31456]: I0312 21:09:09.942175 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gxjmz"
Mar 12 21:09:10.041160 master-0 kubenswrapper[31456]: I0312 21:09:10.041129 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-659d778978-djtms"
Mar 12 21:09:10.044421 master-0 kubenswrapper[31456]: I0312 21:09:10.044360 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-659d778978-djtms"
Mar 12 21:09:10.109363 master-0 kubenswrapper[31456]: I0312 21:09:10.109275 31456 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:09:10.109363 master-0 kubenswrapper[31456]: [-]has-synced failed: reason withheld
Mar 12 21:09:10.109363 master-0 kubenswrapper[31456]: [+]process-running ok
Mar 12 21:09:10.109363 master-0 kubenswrapper[31456]: healthz check failed
Mar 12 21:09:10.109900 master-0 kubenswrapper[31456]: I0312 21:09:10.109356 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:09:10.139390 master-0 kubenswrapper[31456]: I0312 21:09:10.139334 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Mar 12 21:09:10.159101 master-0 kubenswrapper[31456]: I0312 21:09:10.159062 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Mar 12 21:09:10.349590 master-0 kubenswrapper[31456]: I0312 21:09:10.349544 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access\") pod \"installer-3-master-0\" (UID: \"222b53b1-7e5c-49d5-9795-fec4d0547398\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 12 21:09:10.350194 master-0 kubenswrapper[31456]: E0312 21:09:10.349740 31456 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 12 21:09:10.350194 master-0 kubenswrapper[31456]: E0312 21:09:10.349787 31456 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 12 21:09:10.350194 master-0 kubenswrapper[31456]: E0312 21:09:10.349892 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access podName:222b53b1-7e5c-49d5-9795-fec4d0547398 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:11.349865743 +0000 UTC m=+12.424471111 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access") pod "installer-3-master-0" (UID: "222b53b1-7e5c-49d5-9795-fec4d0547398") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 12 21:09:10.397213 master-0 kubenswrapper[31456]: I0312 21:09:10.397042 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jblsg"
Mar 12 21:09:10.456876 master-0 kubenswrapper[31456]: I0312 21:09:10.456735 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jblsg"
Mar 12 21:09:10.515789 master-0 kubenswrapper[31456]: I0312 21:09:10.515728 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_077dd10388b9e3e48a07382126e86621/kube-apiserver-check-endpoints/0.log"
Mar 12 21:09:10.519879 master-0 kubenswrapper[31456]: I0312 21:09:10.519786 31456 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 21:09:10.520690 master-0 kubenswrapper[31456]: I0312 21:09:10.520636 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"570d863fa6b395d16e1f5a331863494900f47673d925208e70bc1d1081f3b9d5"}
Mar 12 21:09:11.108516 master-0 kubenswrapper[31456]: I0312 21:09:11.108429 31456 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:09:11.108516 master-0 kubenswrapper[31456]: [-]has-synced failed: reason withheld
Mar 12 21:09:11.108516 master-0 kubenswrapper[31456]: [+]process-running ok
Mar 12 21:09:11.108516 master-0 kubenswrapper[31456]: healthz check failed
Mar 12 21:09:11.108516 master-0 kubenswrapper[31456]: I0312 21:09:11.108503 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:09:11.374604 master-0 kubenswrapper[31456]: I0312 21:09:11.374451 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access\") pod \"installer-3-master-0\" (UID: \"222b53b1-7e5c-49d5-9795-fec4d0547398\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 12 21:09:11.375420 master-0 kubenswrapper[31456]: E0312 21:09:11.374753 31456 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 12 21:09:11.375420 master-0 kubenswrapper[31456]: E0312 21:09:11.374831 31456 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 12 21:09:11.375420 master-0 kubenswrapper[31456]: E0312 21:09:11.374937 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access podName:222b53b1-7e5c-49d5-9795-fec4d0547398 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:13.374895619 +0000 UTC m=+14.449500977 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access") pod "installer-3-master-0" (UID: "222b53b1-7e5c-49d5-9795-fec4d0547398") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 12 21:09:11.524116 master-0 kubenswrapper[31456]: I0312 21:09:11.524052 31456 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 21:09:11.573131 master-0 kubenswrapper[31456]: I0312 21:09:11.573044 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 21:09:11.573510 master-0 kubenswrapper[31456]: I0312 21:09:11.573229 31456 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 21:09:11.577127 master-0 kubenswrapper[31456]: I0312 21:09:11.577083 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 21:09:11.632616 master-0 kubenswrapper[31456]: I0312 21:09:11.632443 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 21:09:11.632616 master-0 kubenswrapper[31456]: I0312 21:09:11.632617 31456 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 21:09:11.637150 master-0 kubenswrapper[31456]: I0312 21:09:11.637101 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 12 21:09:12.063677 master-0 kubenswrapper[31456]: I0312 21:09:12.063428 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=17.063407834 podStartE2EDuration="17.063407834s" podCreationTimestamp="2026-03-12 21:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:09:12.061296374 +0000 UTC m=+13.135901712" watchObservedRunningTime="2026-03-12 21:09:12.063407834 +0000 UTC m=+13.138013182"
Mar 12 21:09:12.108678 master-0 kubenswrapper[31456]: I0312 21:09:12.108586 31456 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:09:12.108678 master-0 kubenswrapper[31456]: [-]has-synced failed: reason withheld
Mar 12 21:09:12.108678 master-0 kubenswrapper[31456]: [+]process-running ok
Mar 12 21:09:12.108678 master-0 kubenswrapper[31456]: healthz check failed
Mar 12 21:09:12.108678 master-0 kubenswrapper[31456]: I0312 21:09:12.108655 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:09:12.312668 master-0 kubenswrapper[31456]: I0312 21:09:12.312517 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=8.312492822 podStartE2EDuration="8.312492822s" podCreationTimestamp="2026-03-12 21:09:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:09:12.308043645 +0000 UTC m=+13.382649003" watchObservedRunningTime="2026-03-12 21:09:12.312492822 +0000 UTC m=+13.387098190"
Mar 12 21:09:12.541477 master-0 kubenswrapper[31456]: I0312 21:09:12.541391 31456 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 21:09:12.561202 master-0 kubenswrapper[31456]: I0312 21:09:12.547005 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 21:09:12.735164 master-0 kubenswrapper[31456]: I0312 21:09:12.735031 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57"
Mar 12 21:09:13.108418 master-0 kubenswrapper[31456]: I0312 21:09:13.108361 31456 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:09:13.108418 master-0 kubenswrapper[31456]: [-]has-synced failed: reason withheld
Mar 12 21:09:13.108418 master-0 kubenswrapper[31456]: [+]process-running ok
Mar 12 21:09:13.108418 master-0 kubenswrapper[31456]: healthz check failed
Mar 12 21:09:13.108705 master-0 kubenswrapper[31456]: I0312 21:09:13.108441 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:09:13.403487 master-0 kubenswrapper[31456]: I0312 21:09:13.403350 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 21:09:13.403754 master-0 kubenswrapper[31456]: I0312 21:09:13.403517 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access\") pod \"installer-3-master-0\" (UID: \"222b53b1-7e5c-49d5-9795-fec4d0547398\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 12 21:09:13.403754 master-0 kubenswrapper[31456]: E0312 21:09:13.403729 31456 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 12 21:09:13.403754 master-0 kubenswrapper[31456]: E0312 21:09:13.403749 31456 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 12 21:09:13.403904 master-0 kubenswrapper[31456]: E0312 21:09:13.403862 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access podName:222b53b1-7e5c-49d5-9795-fec4d0547398 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:17.40378378 +0000 UTC m=+18.478389108 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access") pod "installer-3-master-0" (UID: "222b53b1-7e5c-49d5-9795-fec4d0547398") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 12 21:09:13.409151 master-0 kubenswrapper[31456]: I0312 21:09:13.409120 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 12 21:09:13.509395 master-0 kubenswrapper[31456]: I0312 21:09:13.509027 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7946996f87-nzb7c"
Mar 12 21:09:13.509611 master-0 kubenswrapper[31456]: I0312 21:09:13.509508 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-84fb785f4-kl52q"
Mar 12 21:09:13.519860 master-0 kubenswrapper[31456]: I0312 21:09:13.519777 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8"
Mar 12 21:09:13.525082 master-0 kubenswrapper[31456]: I0312 21:09:13.525036 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-cdcc8"
Mar 12 21:09:13.629264 master-0 kubenswrapper[31456]: I0312 21:09:13.629210 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 21:09:14.109347 master-0 kubenswrapper[31456]: I0312 21:09:14.109299 31456 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:09:14.109347 master-0 kubenswrapper[31456]: [-]has-synced failed: reason withheld
Mar 12 21:09:14.109347 master-0 kubenswrapper[31456]: [+]process-running ok
Mar 12 21:09:14.109347 master-0 kubenswrapper[31456]: healthz check failed
Mar 12 21:09:14.109645 master-0 kubenswrapper[31456]: I0312 21:09:14.109359 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:09:14.510934 master-0 kubenswrapper[31456]: I0312 21:09:14.510824 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8"
Mar 12 21:09:14.553196 master-0 kubenswrapper[31456]: I0312 21:09:14.553135 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jblsg"
Mar 12 21:09:14.590490 master-0 kubenswrapper[31456]: I0312 21:09:14.590448 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jblsg"
Mar 12 21:09:14.729564 master-0 kubenswrapper[31456]: I0312 21:09:14.729488 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 21:09:14.771208 master-0 kubenswrapper[31456]: I0312 21:09:14.771091 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd"
Mar 12 21:09:15.108102 master-0 kubenswrapper[31456]: I0312 21:09:15.108046 31456 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:09:15.108102 master-0 kubenswrapper[31456]: [-]has-synced failed: reason withheld
Mar 12 21:09:15.108102 master-0 kubenswrapper[31456]: [+]process-running ok
Mar 12 21:09:15.108102 master-0 kubenswrapper[31456]: healthz check failed
Mar 12 21:09:15.108424 master-0 kubenswrapper[31456]: I0312 21:09:15.108133 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:09:15.568342 master-0 kubenswrapper[31456]: I0312 21:09:15.568281 31456 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 21:09:15.568342 master-0 kubenswrapper[31456]: I0312 21:09:15.568322 31456 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 21:09:16.108699 master-0 kubenswrapper[31456]: I0312 21:09:16.108625 31456 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:09:16.108699 master-0 kubenswrapper[31456]: [-]has-synced failed: reason withheld
Mar 12 21:09:16.108699 master-0 kubenswrapper[31456]: [+]process-running ok
Mar 12 21:09:16.108699 master-0 kubenswrapper[31456]: healthz check failed
Mar 12 21:09:16.109256 master-0 kubenswrapper[31456]: I0312 21:09:16.108716 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:09:16.114927 master-0 kubenswrapper[31456]: I0312 21:09:16.114891 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8"
Mar 12 21:09:17.110409 master-0 kubenswrapper[31456]: I0312 21:09:17.110359 31456 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:09:17.110409 master-0 kubenswrapper[31456]: [-]has-synced failed: reason withheld
Mar 12 21:09:17.110409 master-0 kubenswrapper[31456]: [+]process-running ok
Mar 12 21:09:17.110409 master-0 kubenswrapper[31456]: healthz check failed
Mar 12 21:09:17.110974 master-0 kubenswrapper[31456]: I0312 21:09:17.110457 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:09:17.196510 master-0 kubenswrapper[31456]: I0312 21:09:17.196462 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n"
Mar 12 21:09:17.197450 master-0 kubenswrapper[31456]: I0312 21:09:17.197408 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-hdd4n"
Mar 12 21:09:17.465358 master-0 kubenswrapper[31456]: I0312 21:09:17.465214 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access\") pod \"installer-3-master-0\" (UID: \"222b53b1-7e5c-49d5-9795-fec4d0547398\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 12 21:09:17.465536 master-0 kubenswrapper[31456]: E0312 21:09:17.465460 31456 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 12 21:09:17.465536 master-0 kubenswrapper[31456]: E0312 21:09:17.465483 31456 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 12 21:09:17.465536 master-0 kubenswrapper[31456]: E0312 21:09:17.465534 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access podName:222b53b1-7e5c-49d5-9795-fec4d0547398 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:25.465517498 +0000 UTC m=+26.540122836 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access") pod "installer-3-master-0" (UID: "222b53b1-7e5c-49d5-9795-fec4d0547398") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 21:09:17.645204 master-0 kubenswrapper[31456]: I0312 21:09:17.645155 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:17.645394 master-0 kubenswrapper[31456]: I0312 21:09:17.645336 31456 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 21:09:17.645394 master-0 kubenswrapper[31456]: I0312 21:09:17.645365 31456 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 21:09:17.682276 master-0 kubenswrapper[31456]: I0312 21:09:17.682236 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:18.110781 master-0 kubenswrapper[31456]: I0312 21:09:18.110735 31456 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:09:18.110781 master-0 kubenswrapper[31456]: [-]has-synced failed: reason withheld Mar 12 21:09:18.110781 master-0 kubenswrapper[31456]: [+]process-running ok Mar 12 21:09:18.110781 master-0 kubenswrapper[31456]: healthz check failed Mar 12 21:09:18.111419 master-0 kubenswrapper[31456]: I0312 21:09:18.111391 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:09:18.596873 master-0 kubenswrapper[31456]: I0312 21:09:18.596833 31456 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 21:09:19.107800 master-0 kubenswrapper[31456]: I0312 21:09:19.107738 31456 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:09:19.107800 master-0 kubenswrapper[31456]: [-]has-synced failed: reason withheld Mar 12 21:09:19.107800 master-0 kubenswrapper[31456]: [+]process-running ok Mar 12 21:09:19.107800 master-0 kubenswrapper[31456]: healthz check failed Mar 12 21:09:19.108142 master-0 kubenswrapper[31456]: I0312 21:09:19.107837 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:09:19.455419 master-0 kubenswrapper[31456]: I0312 21:09:19.455323 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-66qvj" Mar 12 21:09:19.754385 master-0 kubenswrapper[31456]: I0312 21:09:19.754290 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-94rll" Mar 12 21:09:19.780491 master-0 kubenswrapper[31456]: I0312 21:09:19.780441 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gxjmz" Mar 12 21:09:19.977511 master-0 kubenswrapper[31456]: I0312 21:09:19.977444 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:19.977735 master-0 kubenswrapper[31456]: I0312 21:09:19.977614 31456 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 21:09:19.998767 master-0 kubenswrapper[31456]: I0312 
21:09:19.998728 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nhrpd" Mar 12 21:09:20.108198 master-0 kubenswrapper[31456]: I0312 21:09:20.108126 31456 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:09:20.108198 master-0 kubenswrapper[31456]: [-]has-synced failed: reason withheld Mar 12 21:09:20.108198 master-0 kubenswrapper[31456]: [+]process-running ok Mar 12 21:09:20.108198 master-0 kubenswrapper[31456]: healthz check failed Mar 12 21:09:20.108198 master-0 kubenswrapper[31456]: I0312 21:09:20.108183 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:09:21.109144 master-0 kubenswrapper[31456]: I0312 21:09:21.108996 31456 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:09:21.109144 master-0 kubenswrapper[31456]: [-]has-synced failed: reason withheld Mar 12 21:09:21.109144 master-0 kubenswrapper[31456]: [+]process-running ok Mar 12 21:09:21.109144 master-0 kubenswrapper[31456]: healthz check failed Mar 12 21:09:21.110118 master-0 kubenswrapper[31456]: I0312 21:09:21.109186 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:09:22.109446 master-0 kubenswrapper[31456]: I0312 
21:09:22.109336 31456 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:09:22.109446 master-0 kubenswrapper[31456]: [-]has-synced failed: reason withheld Mar 12 21:09:22.109446 master-0 kubenswrapper[31456]: [+]process-running ok Mar 12 21:09:22.109446 master-0 kubenswrapper[31456]: healthz check failed Mar 12 21:09:22.110852 master-0 kubenswrapper[31456]: I0312 21:09:22.109450 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:09:23.109232 master-0 kubenswrapper[31456]: I0312 21:09:23.109137 31456 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:09:23.109232 master-0 kubenswrapper[31456]: [-]has-synced failed: reason withheld Mar 12 21:09:23.109232 master-0 kubenswrapper[31456]: [+]process-running ok Mar 12 21:09:23.109232 master-0 kubenswrapper[31456]: healthz check failed Mar 12 21:09:23.110212 master-0 kubenswrapper[31456]: I0312 21:09:23.109270 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:09:23.638955 master-0 kubenswrapper[31456]: I0312 21:09:23.638891 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:09:24.108484 master-0 kubenswrapper[31456]: I0312 
21:09:24.108439 31456 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:09:24.108484 master-0 kubenswrapper[31456]: [-]has-synced failed: reason withheld Mar 12 21:09:24.108484 master-0 kubenswrapper[31456]: [+]process-running ok Mar 12 21:09:24.108484 master-0 kubenswrapper[31456]: healthz check failed Mar 12 21:09:24.109053 master-0 kubenswrapper[31456]: I0312 21:09:24.109015 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:09:25.108666 master-0 kubenswrapper[31456]: I0312 21:09:25.108595 31456 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:09:25.108666 master-0 kubenswrapper[31456]: [-]has-synced failed: reason withheld Mar 12 21:09:25.108666 master-0 kubenswrapper[31456]: [+]process-running ok Mar 12 21:09:25.108666 master-0 kubenswrapper[31456]: healthz check failed Mar 12 21:09:25.109362 master-0 kubenswrapper[31456]: I0312 21:09:25.108719 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:09:25.483943 master-0 kubenswrapper[31456]: I0312 21:09:25.483752 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access\") pod \"installer-3-master-0\" (UID: \"222b53b1-7e5c-49d5-9795-fec4d0547398\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 21:09:25.484348 master-0 kubenswrapper[31456]: E0312 21:09:25.484135 31456 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 21:09:25.484348 master-0 kubenswrapper[31456]: E0312 21:09:25.484167 31456 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 21:09:25.484348 master-0 kubenswrapper[31456]: E0312 21:09:25.484234 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access podName:222b53b1-7e5c-49d5-9795-fec4d0547398 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:41.484211595 +0000 UTC m=+42.558816953 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access") pod "installer-3-master-0" (UID: "222b53b1-7e5c-49d5-9795-fec4d0547398") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 21:09:26.109361 master-0 kubenswrapper[31456]: I0312 21:09:26.109282 31456 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:09:26.109361 master-0 kubenswrapper[31456]: [-]has-synced failed: reason withheld Mar 12 21:09:26.109361 master-0 kubenswrapper[31456]: [+]process-running ok Mar 12 21:09:26.109361 master-0 kubenswrapper[31456]: healthz check failed Mar 12 21:09:26.110435 master-0 kubenswrapper[31456]: I0312 21:09:26.109367 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:09:26.667059 master-0 kubenswrapper[31456]: I0312 21:09:26.666935 31456 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 12 21:09:26.667420 master-0 kubenswrapper[31456]: I0312 21:09:26.667303 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="899242a15b2bdf3b4a04fb323647ca94" containerName="startup-monitor" containerID="cri-o://2856d5840548c1bc6c65248c16a64600f315dc0e994bef020e791573a50dc5ec" gracePeriod=5 Mar 12 21:09:27.108731 master-0 kubenswrapper[31456]: I0312 21:09:27.108642 31456 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: 
Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:09:27.108731 master-0 kubenswrapper[31456]: [-]has-synced failed: reason withheld Mar 12 21:09:27.108731 master-0 kubenswrapper[31456]: [+]process-running ok Mar 12 21:09:27.108731 master-0 kubenswrapper[31456]: healthz check failed Mar 12 21:09:27.109237 master-0 kubenswrapper[31456]: I0312 21:09:27.108732 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:09:28.108929 master-0 kubenswrapper[31456]: I0312 21:09:28.108846 31456 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:09:28.108929 master-0 kubenswrapper[31456]: [-]has-synced failed: reason withheld Mar 12 21:09:28.108929 master-0 kubenswrapper[31456]: [+]process-running ok Mar 12 21:09:28.108929 master-0 kubenswrapper[31456]: healthz check failed Mar 12 21:09:28.109594 master-0 kubenswrapper[31456]: I0312 21:09:28.108942 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:09:29.109652 master-0 kubenswrapper[31456]: I0312 21:09:29.109592 31456 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:09:29.109652 master-0 kubenswrapper[31456]: [-]has-synced 
failed: reason withheld Mar 12 21:09:29.109652 master-0 kubenswrapper[31456]: [+]process-running ok Mar 12 21:09:29.109652 master-0 kubenswrapper[31456]: healthz check failed Mar 12 21:09:29.110405 master-0 kubenswrapper[31456]: I0312 21:09:29.109663 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:09:30.107888 master-0 kubenswrapper[31456]: I0312 21:09:30.107792 31456 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:09:30.107888 master-0 kubenswrapper[31456]: [-]has-synced failed: reason withheld Mar 12 21:09:30.107888 master-0 kubenswrapper[31456]: [+]process-running ok Mar 12 21:09:30.107888 master-0 kubenswrapper[31456]: healthz check failed Mar 12 21:09:30.107888 master-0 kubenswrapper[31456]: I0312 21:09:30.107875 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:09:31.108429 master-0 kubenswrapper[31456]: I0312 21:09:31.108368 31456 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:09:31.108429 master-0 kubenswrapper[31456]: [-]has-synced failed: reason withheld Mar 12 21:09:31.108429 master-0 kubenswrapper[31456]: [+]process-running ok Mar 12 21:09:31.108429 master-0 kubenswrapper[31456]: healthz check failed Mar 12 21:09:31.109522 
master-0 kubenswrapper[31456]: I0312 21:09:31.108437 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:09:32.109135 master-0 kubenswrapper[31456]: I0312 21:09:32.109074 31456 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:09:32.109135 master-0 kubenswrapper[31456]: [-]has-synced failed: reason withheld Mar 12 21:09:32.109135 master-0 kubenswrapper[31456]: [+]process-running ok Mar 12 21:09:32.109135 master-0 kubenswrapper[31456]: healthz check failed Mar 12 21:09:32.110284 master-0 kubenswrapper[31456]: I0312 21:09:32.110235 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:09:32.245845 master-0 kubenswrapper[31456]: I0312 21:09:32.245792 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_899242a15b2bdf3b4a04fb323647ca94/startup-monitor/0.log" Mar 12 21:09:32.246162 master-0 kubenswrapper[31456]: I0312 21:09:32.246146 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:09:32.393750 master-0 kubenswrapper[31456]: I0312 21:09:32.393625 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"899242a15b2bdf3b4a04fb323647ca94\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " Mar 12 21:09:32.393750 master-0 kubenswrapper[31456]: I0312 21:09:32.393695 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"899242a15b2bdf3b4a04fb323647ca94\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " Mar 12 21:09:32.394010 master-0 kubenswrapper[31456]: I0312 21:09:32.393765 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"899242a15b2bdf3b4a04fb323647ca94\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " Mar 12 21:09:32.394010 master-0 kubenswrapper[31456]: I0312 21:09:32.393769 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests" (OuterVolumeSpecName: "manifests") pod "899242a15b2bdf3b4a04fb323647ca94" (UID: "899242a15b2bdf3b4a04fb323647ca94"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:09:32.394010 master-0 kubenswrapper[31456]: I0312 21:09:32.393834 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock" (OuterVolumeSpecName: "var-lock") pod "899242a15b2bdf3b4a04fb323647ca94" (UID: "899242a15b2bdf3b4a04fb323647ca94"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:09:32.394010 master-0 kubenswrapper[31456]: I0312 21:09:32.393908 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"899242a15b2bdf3b4a04fb323647ca94\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " Mar 12 21:09:32.394010 master-0 kubenswrapper[31456]: I0312 21:09:32.393930 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"899242a15b2bdf3b4a04fb323647ca94\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " Mar 12 21:09:32.394010 master-0 kubenswrapper[31456]: I0312 21:09:32.393937 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log" (OuterVolumeSpecName: "var-log") pod "899242a15b2bdf3b4a04fb323647ca94" (UID: "899242a15b2bdf3b4a04fb323647ca94"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:09:32.394205 master-0 kubenswrapper[31456]: I0312 21:09:32.394059 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "899242a15b2bdf3b4a04fb323647ca94" (UID: "899242a15b2bdf3b4a04fb323647ca94"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:09:32.394385 master-0 kubenswrapper[31456]: I0312 21:09:32.394325 31456 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 21:09:32.394385 master-0 kubenswrapper[31456]: I0312 21:09:32.394349 31456 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") on node \"master-0\" DevicePath \"\"" Mar 12 21:09:32.394385 master-0 kubenswrapper[31456]: I0312 21:09:32.394358 31456 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 21:09:32.394385 master-0 kubenswrapper[31456]: I0312 21:09:32.394367 31456 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") on node \"master-0\" DevicePath \"\"" Mar 12 21:09:32.398480 master-0 kubenswrapper[31456]: I0312 21:09:32.398443 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "899242a15b2bdf3b4a04fb323647ca94" (UID: "899242a15b2bdf3b4a04fb323647ca94"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:09:32.495924 master-0 kubenswrapper[31456]: I0312 21:09:32.495859 31456 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 21:09:32.707515 master-0 kubenswrapper[31456]: I0312 21:09:32.707381 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_899242a15b2bdf3b4a04fb323647ca94/startup-monitor/0.log" Mar 12 21:09:32.707515 master-0 kubenswrapper[31456]: I0312 21:09:32.707477 31456 generic.go:334] "Generic (PLEG): container finished" podID="899242a15b2bdf3b4a04fb323647ca94" containerID="2856d5840548c1bc6c65248c16a64600f315dc0e994bef020e791573a50dc5ec" exitCode=137 Mar 12 21:09:32.707797 master-0 kubenswrapper[31456]: I0312 21:09:32.707561 31456 scope.go:117] "RemoveContainer" containerID="2856d5840548c1bc6c65248c16a64600f315dc0e994bef020e791573a50dc5ec" Mar 12 21:09:32.707797 master-0 kubenswrapper[31456]: I0312 21:09:32.707644 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:09:32.771200 master-0 kubenswrapper[31456]: I0312 21:09:32.738063 31456 scope.go:117] "RemoveContainer" containerID="2856d5840548c1bc6c65248c16a64600f315dc0e994bef020e791573a50dc5ec" Mar 12 21:09:32.772913 master-0 kubenswrapper[31456]: E0312 21:09:32.772836 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2856d5840548c1bc6c65248c16a64600f315dc0e994bef020e791573a50dc5ec\": container with ID starting with 2856d5840548c1bc6c65248c16a64600f315dc0e994bef020e791573a50dc5ec not found: ID does not exist" containerID="2856d5840548c1bc6c65248c16a64600f315dc0e994bef020e791573a50dc5ec" Mar 12 21:09:32.773131 master-0 kubenswrapper[31456]: I0312 21:09:32.772922 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2856d5840548c1bc6c65248c16a64600f315dc0e994bef020e791573a50dc5ec"} err="failed to get container status \"2856d5840548c1bc6c65248c16a64600f315dc0e994bef020e791573a50dc5ec\": rpc error: code = NotFound desc = could not find container \"2856d5840548c1bc6c65248c16a64600f315dc0e994bef020e791573a50dc5ec\": container with ID starting with 2856d5840548c1bc6c65248c16a64600f315dc0e994bef020e791573a50dc5ec not found: ID does not exist" Mar 12 21:09:32.847843 master-0 kubenswrapper[31456]: I0312 21:09:32.847236 31456 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="c5cd1163-c6f3-429f-928d-63c66680eaf4" Mar 12 21:09:32.850200 master-0 kubenswrapper[31456]: I0312 21:09:32.849599 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-6c7fb6b958-2lj8z"] Mar 12 21:09:32.850200 master-0 kubenswrapper[31456]: E0312 21:09:32.849864 31456 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="367123ca-5a21-415c-8ac2-6d875696536b" containerName="installer" Mar 12 21:09:32.850200 master-0 kubenswrapper[31456]: I0312 21:09:32.849880 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="367123ca-5a21-415c-8ac2-6d875696536b" containerName="installer" Mar 12 21:09:32.850200 master-0 kubenswrapper[31456]: E0312 21:09:32.849905 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d919d0a-f152-43da-aec3-080812c0d2d6" containerName="installer" Mar 12 21:09:32.850200 master-0 kubenswrapper[31456]: I0312 21:09:32.849915 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d919d0a-f152-43da-aec3-080812c0d2d6" containerName="installer" Mar 12 21:09:32.850200 master-0 kubenswrapper[31456]: E0312 21:09:32.849926 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899242a15b2bdf3b4a04fb323647ca94" containerName="startup-monitor" Mar 12 21:09:32.850200 master-0 kubenswrapper[31456]: I0312 21:09:32.849934 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="899242a15b2bdf3b4a04fb323647ca94" containerName="startup-monitor" Mar 12 21:09:32.850200 master-0 kubenswrapper[31456]: E0312 21:09:32.849944 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d69687f-b8a5-4643-8268-ce30df5db3bc" containerName="installer" Mar 12 21:09:32.850200 master-0 kubenswrapper[31456]: I0312 21:09:32.849953 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d69687f-b8a5-4643-8268-ce30df5db3bc" containerName="installer" Mar 12 21:09:32.850200 master-0 kubenswrapper[31456]: E0312 21:09:32.849968 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c6afe7e-de9d-41d3-8e34-9523a46da697" containerName="installer" Mar 12 21:09:32.850200 master-0 kubenswrapper[31456]: I0312 21:09:32.849974 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c6afe7e-de9d-41d3-8e34-9523a46da697" containerName="installer" Mar 12 21:09:32.850200 master-0 kubenswrapper[31456]: E0312 
21:09:32.849988 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="869e3d2a-1b5c-426f-945a-ddd44a9a5033" containerName="installer" Mar 12 21:09:32.850200 master-0 kubenswrapper[31456]: I0312 21:09:32.849995 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="869e3d2a-1b5c-426f-945a-ddd44a9a5033" containerName="installer" Mar 12 21:09:32.851523 master-0 kubenswrapper[31456]: E0312 21:09:32.851484 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237e5a97-fb81-4609-8538-c55a8e2db411" containerName="installer" Mar 12 21:09:32.851523 master-0 kubenswrapper[31456]: I0312 21:09:32.851499 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="237e5a97-fb81-4609-8538-c55a8e2db411" containerName="installer" Mar 12 21:09:32.851523 master-0 kubenswrapper[31456]: E0312 21:09:32.851511 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d87b7a20-047e-4521-996c-9b11d81e9bd0" containerName="assisted-installer-controller" Mar 12 21:09:32.851523 master-0 kubenswrapper[31456]: I0312 21:09:32.851517 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="d87b7a20-047e-4521-996c-9b11d81e9bd0" containerName="assisted-installer-controller" Mar 12 21:09:32.851523 master-0 kubenswrapper[31456]: E0312 21:09:32.851526 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="222b53b1-7e5c-49d5-9795-fec4d0547398" containerName="installer" Mar 12 21:09:32.851523 master-0 kubenswrapper[31456]: I0312 21:09:32.851532 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="222b53b1-7e5c-49d5-9795-fec4d0547398" containerName="installer" Mar 12 21:09:32.852055 master-0 kubenswrapper[31456]: E0312 21:09:32.851540 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="954fe7f9-e138-49ab-ab8e-504b75914100" containerName="installer" Mar 12 21:09:32.852055 master-0 kubenswrapper[31456]: I0312 21:09:32.851547 31456 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="954fe7f9-e138-49ab-ab8e-504b75914100" containerName="installer" Mar 12 21:09:32.852055 master-0 kubenswrapper[31456]: I0312 21:09:32.851662 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="367123ca-5a21-415c-8ac2-6d875696536b" containerName="installer" Mar 12 21:09:32.852055 master-0 kubenswrapper[31456]: I0312 21:09:32.851683 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="869e3d2a-1b5c-426f-945a-ddd44a9a5033" containerName="installer" Mar 12 21:09:32.852055 master-0 kubenswrapper[31456]: I0312 21:09:32.851693 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d69687f-b8a5-4643-8268-ce30df5db3bc" containerName="installer" Mar 12 21:09:32.852055 master-0 kubenswrapper[31456]: I0312 21:09:32.851704 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c6afe7e-de9d-41d3-8e34-9523a46da697" containerName="installer" Mar 12 21:09:32.852055 master-0 kubenswrapper[31456]: I0312 21:09:32.851713 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="222b53b1-7e5c-49d5-9795-fec4d0547398" containerName="installer" Mar 12 21:09:32.852055 master-0 kubenswrapper[31456]: I0312 21:09:32.851722 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="899242a15b2bdf3b4a04fb323647ca94" containerName="startup-monitor" Mar 12 21:09:32.852055 master-0 kubenswrapper[31456]: I0312 21:09:32.851730 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d919d0a-f152-43da-aec3-080812c0d2d6" containerName="installer" Mar 12 21:09:32.852055 master-0 kubenswrapper[31456]: I0312 21:09:32.851745 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="954fe7f9-e138-49ab-ab8e-504b75914100" containerName="installer" Mar 12 21:09:32.852055 master-0 kubenswrapper[31456]: I0312 21:09:32.851754 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="237e5a97-fb81-4609-8538-c55a8e2db411" containerName="installer" Mar 12 21:09:32.852055 
master-0 kubenswrapper[31456]: I0312 21:09:32.851770 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="d87b7a20-047e-4521-996c-9b11d81e9bd0" containerName="assisted-installer-controller" Mar 12 21:09:32.852618 master-0 kubenswrapper[31456]: I0312 21:09:32.852187 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-6c7fb6b958-2lj8z" Mar 12 21:09:32.854067 master-0 kubenswrapper[31456]: I0312 21:09:32.854037 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 12 21:09:32.855113 master-0 kubenswrapper[31456]: I0312 21:09:32.855082 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 12 21:09:32.855380 master-0 kubenswrapper[31456]: I0312 21:09:32.855306 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 12 21:09:32.855380 master-0 kubenswrapper[31456]: I0312 21:09:32.855347 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 12 21:09:32.855739 master-0 kubenswrapper[31456]: I0312 21:09:32.855587 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-6gf9b" Mar 12 21:09:32.855739 master-0 kubenswrapper[31456]: I0312 21:09:32.855616 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 12 21:09:32.863136 master-0 kubenswrapper[31456]: I0312 21:09:32.863064 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-6c7fb6b958-2lj8z"] Mar 12 21:09:32.901939 master-0 kubenswrapper[31456]: I0312 21:09:32.901873 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41520992-0499-4a93-bd1c-7814ffb84164-trusted-ca\") pod \"console-operator-6c7fb6b958-2lj8z\" (UID: \"41520992-0499-4a93-bd1c-7814ffb84164\") " pod="openshift-console-operator/console-operator-6c7fb6b958-2lj8z" Mar 12 21:09:32.902149 master-0 kubenswrapper[31456]: I0312 21:09:32.902066 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41520992-0499-4a93-bd1c-7814ffb84164-serving-cert\") pod \"console-operator-6c7fb6b958-2lj8z\" (UID: \"41520992-0499-4a93-bd1c-7814ffb84164\") " pod="openshift-console-operator/console-operator-6c7fb6b958-2lj8z" Mar 12 21:09:32.902149 master-0 kubenswrapper[31456]: I0312 21:09:32.902129 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41520992-0499-4a93-bd1c-7814ffb84164-config\") pod \"console-operator-6c7fb6b958-2lj8z\" (UID: \"41520992-0499-4a93-bd1c-7814ffb84164\") " pod="openshift-console-operator/console-operator-6c7fb6b958-2lj8z" Mar 12 21:09:32.902262 master-0 kubenswrapper[31456]: I0312 21:09:32.902226 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssfdn\" (UniqueName: \"kubernetes.io/projected/41520992-0499-4a93-bd1c-7814ffb84164-kube-api-access-ssfdn\") pod \"console-operator-6c7fb6b958-2lj8z\" (UID: \"41520992-0499-4a93-bd1c-7814ffb84164\") " pod="openshift-console-operator/console-operator-6c7fb6b958-2lj8z" Mar 12 21:09:33.003957 master-0 kubenswrapper[31456]: I0312 21:09:33.003835 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41520992-0499-4a93-bd1c-7814ffb84164-trusted-ca\") pod \"console-operator-6c7fb6b958-2lj8z\" (UID: \"41520992-0499-4a93-bd1c-7814ffb84164\") " 
pod="openshift-console-operator/console-operator-6c7fb6b958-2lj8z" Mar 12 21:09:33.003957 master-0 kubenswrapper[31456]: I0312 21:09:33.003888 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41520992-0499-4a93-bd1c-7814ffb84164-serving-cert\") pod \"console-operator-6c7fb6b958-2lj8z\" (UID: \"41520992-0499-4a93-bd1c-7814ffb84164\") " pod="openshift-console-operator/console-operator-6c7fb6b958-2lj8z" Mar 12 21:09:33.003957 master-0 kubenswrapper[31456]: I0312 21:09:33.003910 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41520992-0499-4a93-bd1c-7814ffb84164-config\") pod \"console-operator-6c7fb6b958-2lj8z\" (UID: \"41520992-0499-4a93-bd1c-7814ffb84164\") " pod="openshift-console-operator/console-operator-6c7fb6b958-2lj8z" Mar 12 21:09:33.003957 master-0 kubenswrapper[31456]: I0312 21:09:33.003940 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssfdn\" (UniqueName: \"kubernetes.io/projected/41520992-0499-4a93-bd1c-7814ffb84164-kube-api-access-ssfdn\") pod \"console-operator-6c7fb6b958-2lj8z\" (UID: \"41520992-0499-4a93-bd1c-7814ffb84164\") " pod="openshift-console-operator/console-operator-6c7fb6b958-2lj8z" Mar 12 21:09:33.004306 master-0 kubenswrapper[31456]: E0312 21:09:33.004281 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41520992-0499-4a93-bd1c-7814ffb84164-trusted-ca podName:41520992-0499-4a93-bd1c-7814ffb84164 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:33.5042655 +0000 UTC m=+34.578870828 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/41520992-0499-4a93-bd1c-7814ffb84164-trusted-ca") pod "console-operator-6c7fb6b958-2lj8z" (UID: "41520992-0499-4a93-bd1c-7814ffb84164") : configmap references non-existent config key: ca-bundle.crt Mar 12 21:09:33.004818 master-0 kubenswrapper[31456]: I0312 21:09:33.004773 31456 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 12 21:09:33.005396 master-0 kubenswrapper[31456]: I0312 21:09:33.005347 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41520992-0499-4a93-bd1c-7814ffb84164-config\") pod \"console-operator-6c7fb6b958-2lj8z\" (UID: \"41520992-0499-4a93-bd1c-7814ffb84164\") " pod="openshift-console-operator/console-operator-6c7fb6b958-2lj8z" Mar 12 21:09:33.008287 master-0 kubenswrapper[31456]: I0312 21:09:33.008235 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41520992-0499-4a93-bd1c-7814ffb84164-serving-cert\") pod \"console-operator-6c7fb6b958-2lj8z\" (UID: \"41520992-0499-4a93-bd1c-7814ffb84164\") " pod="openshift-console-operator/console-operator-6c7fb6b958-2lj8z" Mar 12 21:09:33.019908 master-0 kubenswrapper[31456]: I0312 21:09:33.019877 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssfdn\" (UniqueName: \"kubernetes.io/projected/41520992-0499-4a93-bd1c-7814ffb84164-kube-api-access-ssfdn\") pod \"console-operator-6c7fb6b958-2lj8z\" (UID: \"41520992-0499-4a93-bd1c-7814ffb84164\") " pod="openshift-console-operator/console-operator-6c7fb6b958-2lj8z" Mar 12 21:09:33.107908 master-0 kubenswrapper[31456]: I0312 21:09:33.107861 31456 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:09:33.107908 master-0 kubenswrapper[31456]: [-]has-synced failed: reason withheld Mar 12 21:09:33.107908 master-0 kubenswrapper[31456]: [+]process-running ok Mar 12 21:09:33.107908 master-0 kubenswrapper[31456]: healthz check failed Mar 12 21:09:33.108297 master-0 kubenswrapper[31456]: I0312 21:09:33.107913 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:09:33.176702 master-0 kubenswrapper[31456]: I0312 21:09:33.176636 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="899242a15b2bdf3b4a04fb323647ca94" path="/var/lib/kubelet/pods/899242a15b2bdf3b4a04fb323647ca94/volumes" Mar 12 21:09:33.177184 master-0 kubenswrapper[31456]: I0312 21:09:33.176901 31456 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="" Mar 12 21:09:33.191556 master-0 kubenswrapper[31456]: I0312 21:09:33.191480 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 12 21:09:33.191556 master-0 kubenswrapper[31456]: I0312 21:09:33.191548 31456 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="c5cd1163-c6f3-429f-928d-63c66680eaf4" Mar 12 21:09:33.194015 master-0 kubenswrapper[31456]: I0312 21:09:33.193963 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 12 21:09:33.194089 master-0 kubenswrapper[31456]: I0312 21:09:33.194014 31456 kubelet.go:2673] "Unable to find pod for 
mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="c5cd1163-c6f3-429f-928d-63c66680eaf4" Mar 12 21:09:33.510215 master-0 kubenswrapper[31456]: I0312 21:09:33.510134 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41520992-0499-4a93-bd1c-7814ffb84164-trusted-ca\") pod \"console-operator-6c7fb6b958-2lj8z\" (UID: \"41520992-0499-4a93-bd1c-7814ffb84164\") " pod="openshift-console-operator/console-operator-6c7fb6b958-2lj8z" Mar 12 21:09:33.510498 master-0 kubenswrapper[31456]: E0312 21:09:33.510305 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41520992-0499-4a93-bd1c-7814ffb84164-trusted-ca podName:41520992-0499-4a93-bd1c-7814ffb84164 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:34.510283458 +0000 UTC m=+35.584888786 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/41520992-0499-4a93-bd1c-7814ffb84164-trusted-ca") pod "console-operator-6c7fb6b958-2lj8z" (UID: "41520992-0499-4a93-bd1c-7814ffb84164") : configmap references non-existent config key: ca-bundle.crt Mar 12 21:09:34.108087 master-0 kubenswrapper[31456]: I0312 21:09:34.108015 31456 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:09:34.108087 master-0 kubenswrapper[31456]: [-]has-synced failed: reason withheld Mar 12 21:09:34.108087 master-0 kubenswrapper[31456]: [+]process-running ok Mar 12 21:09:34.108087 master-0 kubenswrapper[31456]: healthz check failed Mar 12 21:09:34.108427 master-0 kubenswrapper[31456]: I0312 21:09:34.108119 31456 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:09:34.519250 master-0 kubenswrapper[31456]: I0312 21:09:34.519057 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 21:09:34.526829 master-0 kubenswrapper[31456]: I0312 21:09:34.524707 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 21:09:34.534496 master-0 kubenswrapper[31456]: I0312 21:09:34.532056 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41520992-0499-4a93-bd1c-7814ffb84164-trusted-ca\") pod \"console-operator-6c7fb6b958-2lj8z\" (UID: \"41520992-0499-4a93-bd1c-7814ffb84164\") " pod="openshift-console-operator/console-operator-6c7fb6b958-2lj8z" Mar 12 21:09:34.534496 master-0 kubenswrapper[31456]: E0312 21:09:34.532327 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41520992-0499-4a93-bd1c-7814ffb84164-trusted-ca podName:41520992-0499-4a93-bd1c-7814ffb84164 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:36.532298758 +0000 UTC m=+37.606904126 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/41520992-0499-4a93-bd1c-7814ffb84164-trusted-ca") pod "console-operator-6c7fb6b958-2lj8z" (UID: "41520992-0499-4a93-bd1c-7814ffb84164") : configmap references non-existent config key: ca-bundle.crt Mar 12 21:09:35.108061 master-0 kubenswrapper[31456]: I0312 21:09:35.107954 31456 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:09:35.108061 master-0 kubenswrapper[31456]: [-]has-synced failed: reason withheld Mar 12 21:09:35.108061 master-0 kubenswrapper[31456]: [+]process-running ok Mar 12 21:09:35.108061 master-0 kubenswrapper[31456]: healthz check failed Mar 12 21:09:35.108061 master-0 kubenswrapper[31456]: I0312 21:09:35.108048 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:09:35.494056 master-0 kubenswrapper[31456]: I0312 21:09:35.493931 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-788db95db5-ddgsw"] Mar 12 21:09:35.495226 master-0 kubenswrapper[31456]: I0312 21:09:35.495192 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-788db95db5-ddgsw" Mar 12 21:09:35.497409 master-0 kubenswrapper[31456]: I0312 21:09:35.497378 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-f2k7z" Mar 12 21:09:35.497972 master-0 kubenswrapper[31456]: I0312 21:09:35.497935 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Mar 12 21:09:35.507652 master-0 kubenswrapper[31456]: I0312 21:09:35.507603 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-788db95db5-ddgsw"] Mar 12 21:09:35.549649 master-0 kubenswrapper[31456]: I0312 21:09:35.549569 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/993d5533-deab-487a-b877-c1f82ac5e0d6-monitoring-plugin-cert\") pod \"monitoring-plugin-788db95db5-ddgsw\" (UID: \"993d5533-deab-487a-b877-c1f82ac5e0d6\") " pod="openshift-monitoring/monitoring-plugin-788db95db5-ddgsw" Mar 12 21:09:35.651231 master-0 kubenswrapper[31456]: I0312 21:09:35.651150 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/993d5533-deab-487a-b877-c1f82ac5e0d6-monitoring-plugin-cert\") pod \"monitoring-plugin-788db95db5-ddgsw\" (UID: \"993d5533-deab-487a-b877-c1f82ac5e0d6\") " pod="openshift-monitoring/monitoring-plugin-788db95db5-ddgsw" Mar 12 21:09:35.655220 master-0 kubenswrapper[31456]: I0312 21:09:35.655154 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/993d5533-deab-487a-b877-c1f82ac5e0d6-monitoring-plugin-cert\") pod \"monitoring-plugin-788db95db5-ddgsw\" (UID: \"993d5533-deab-487a-b877-c1f82ac5e0d6\") " pod="openshift-monitoring/monitoring-plugin-788db95db5-ddgsw" Mar 
12 21:09:35.828024 master-0 kubenswrapper[31456]: I0312 21:09:35.827947 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-788db95db5-ddgsw" Mar 12 21:09:36.108474 master-0 kubenswrapper[31456]: I0312 21:09:36.108315 31456 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:09:36.108474 master-0 kubenswrapper[31456]: [-]has-synced failed: reason withheld Mar 12 21:09:36.108474 master-0 kubenswrapper[31456]: [+]process-running ok Mar 12 21:09:36.108474 master-0 kubenswrapper[31456]: healthz check failed Mar 12 21:09:36.108474 master-0 kubenswrapper[31456]: I0312 21:09:36.108392 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:09:36.366232 master-0 kubenswrapper[31456]: I0312 21:09:36.366117 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-788db95db5-ddgsw"] Mar 12 21:09:36.376966 master-0 kubenswrapper[31456]: W0312 21:09:36.376662 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod993d5533_deab_487a_b877_c1f82ac5e0d6.slice/crio-02ae0ac5af8ca6d01f9641b17dbcb90680daf7e901de72d92896bd95d25165e7 WatchSource:0}: Error finding container 02ae0ac5af8ca6d01f9641b17dbcb90680daf7e901de72d92896bd95d25165e7: Status 404 returned error can't find the container with id 02ae0ac5af8ca6d01f9641b17dbcb90680daf7e901de72d92896bd95d25165e7 Mar 12 21:09:36.379311 master-0 kubenswrapper[31456]: I0312 21:09:36.379260 31456 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Mar 12 21:09:36.565694 master-0 kubenswrapper[31456]: I0312 21:09:36.565602 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41520992-0499-4a93-bd1c-7814ffb84164-trusted-ca\") pod \"console-operator-6c7fb6b958-2lj8z\" (UID: \"41520992-0499-4a93-bd1c-7814ffb84164\") " pod="openshift-console-operator/console-operator-6c7fb6b958-2lj8z" Mar 12 21:09:36.566417 master-0 kubenswrapper[31456]: E0312 21:09:36.565928 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41520992-0499-4a93-bd1c-7814ffb84164-trusted-ca podName:41520992-0499-4a93-bd1c-7814ffb84164 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:40.565897695 +0000 UTC m=+41.640503043 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/41520992-0499-4a93-bd1c-7814ffb84164-trusted-ca") pod "console-operator-6c7fb6b958-2lj8z" (UID: "41520992-0499-4a93-bd1c-7814ffb84164") : configmap references non-existent config key: ca-bundle.crt Mar 12 21:09:36.736889 master-0 kubenswrapper[31456]: I0312 21:09:36.736738 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-788db95db5-ddgsw" event={"ID":"993d5533-deab-487a-b877-c1f82ac5e0d6","Type":"ContainerStarted","Data":"02ae0ac5af8ca6d01f9641b17dbcb90680daf7e901de72d92896bd95d25165e7"} Mar 12 21:09:37.111389 master-0 kubenswrapper[31456]: I0312 21:09:37.111020 31456 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:09:37.111389 master-0 kubenswrapper[31456]: [-]has-synced failed: reason withheld Mar 12 21:09:37.111389 master-0 kubenswrapper[31456]: [+]process-running ok Mar 12 
21:09:37.111389 master-0 kubenswrapper[31456]: healthz check failed Mar 12 21:09:37.111389 master-0 kubenswrapper[31456]: I0312 21:09:37.111096 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:09:38.109606 master-0 kubenswrapper[31456]: I0312 21:09:38.109446 31456 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:09:38.109606 master-0 kubenswrapper[31456]: [-]has-synced failed: reason withheld Mar 12 21:09:38.109606 master-0 kubenswrapper[31456]: [+]process-running ok Mar 12 21:09:38.109606 master-0 kubenswrapper[31456]: healthz check failed Mar 12 21:09:38.109606 master-0 kubenswrapper[31456]: I0312 21:09:38.109542 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:09:38.754484 master-0 kubenswrapper[31456]: I0312 21:09:38.754410 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-788db95db5-ddgsw" event={"ID":"993d5533-deab-487a-b877-c1f82ac5e0d6","Type":"ContainerStarted","Data":"90103e4873f75360b5dab75819dd45ad1bd2ec3fd1ceab0cb2aa24513cfa1432"} Mar 12 21:09:38.755388 master-0 kubenswrapper[31456]: I0312 21:09:38.755315 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-788db95db5-ddgsw" Mar 12 21:09:38.764446 master-0 kubenswrapper[31456]: I0312 21:09:38.764349 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-monitoring/monitoring-plugin-788db95db5-ddgsw" Mar 12 21:09:38.783454 master-0 kubenswrapper[31456]: I0312 21:09:38.782913 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-788db95db5-ddgsw" podStartSLOduration=2.315719655 podStartE2EDuration="3.782887794s" podCreationTimestamp="2026-03-12 21:09:35 +0000 UTC" firstStartedPulling="2026-03-12 21:09:36.379142703 +0000 UTC m=+37.453748071" lastFinishedPulling="2026-03-12 21:09:37.846310882 +0000 UTC m=+38.920916210" observedRunningTime="2026-03-12 21:09:38.778303863 +0000 UTC m=+39.852909251" watchObservedRunningTime="2026-03-12 21:09:38.782887794 +0000 UTC m=+39.857493162" Mar 12 21:09:39.108725 master-0 kubenswrapper[31456]: I0312 21:09:39.108665 31456 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:09:39.108725 master-0 kubenswrapper[31456]: [-]has-synced failed: reason withheld Mar 12 21:09:39.108725 master-0 kubenswrapper[31456]: [+]process-running ok Mar 12 21:09:39.108725 master-0 kubenswrapper[31456]: healthz check failed Mar 12 21:09:39.109053 master-0 kubenswrapper[31456]: I0312 21:09:39.108755 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:09:40.107618 master-0 kubenswrapper[31456]: I0312 21:09:40.107568 31456 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 21:09:40.107618 master-0 kubenswrapper[31456]: 
[-]has-synced failed: reason withheld Mar 12 21:09:40.107618 master-0 kubenswrapper[31456]: [+]process-running ok Mar 12 21:09:40.107618 master-0 kubenswrapper[31456]: healthz check failed Mar 12 21:09:40.108183 master-0 kubenswrapper[31456]: I0312 21:09:40.107646 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 21:09:40.862985 master-0 kubenswrapper[31456]: I0312 21:09:40.628101 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41520992-0499-4a93-bd1c-7814ffb84164-trusted-ca\") pod \"console-operator-6c7fb6b958-2lj8z\" (UID: \"41520992-0499-4a93-bd1c-7814ffb84164\") " pod="openshift-console-operator/console-operator-6c7fb6b958-2lj8z" Mar 12 21:09:40.862985 master-0 kubenswrapper[31456]: E0312 21:09:40.628297 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41520992-0499-4a93-bd1c-7814ffb84164-trusted-ca podName:41520992-0499-4a93-bd1c-7814ffb84164 nodeName:}" failed. No retries permitted until 2026-03-12 21:09:48.628268103 +0000 UTC m=+49.702873441 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/41520992-0499-4a93-bd1c-7814ffb84164-trusted-ca") pod "console-operator-6c7fb6b958-2lj8z" (UID: "41520992-0499-4a93-bd1c-7814ffb84164") : configmap references non-existent config key: ca-bundle.crt
Mar 12 21:09:41.107686 master-0 kubenswrapper[31456]: I0312 21:09:41.107629 31456 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-hsv57 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 12 21:09:41.107686 master-0 kubenswrapper[31456]: [-]has-synced failed: reason withheld
Mar 12 21:09:41.107686 master-0 kubenswrapper[31456]: [+]process-running ok
Mar 12 21:09:41.107686 master-0 kubenswrapper[31456]: healthz check failed
Mar 12 21:09:41.108240 master-0 kubenswrapper[31456]: I0312 21:09:41.107722 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57" podUID="a3828a1d-8180-4c7b-b423-4488f7fc0b76" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 12 21:09:41.540527 master-0 kubenswrapper[31456]: I0312 21:09:41.540466 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access\") pod \"installer-3-master-0\" (UID: \"222b53b1-7e5c-49d5-9795-fec4d0547398\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 12 21:09:41.540750 master-0 kubenswrapper[31456]: E0312 21:09:41.540635 31456 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 12 21:09:41.540750 master-0 kubenswrapper[31456]: E0312 21:09:41.540658 31456 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 12 21:09:41.540750 master-0 kubenswrapper[31456]: E0312 21:09:41.540709 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access podName:222b53b1-7e5c-49d5-9795-fec4d0547398 nodeName:}" failed. No retries permitted until 2026-03-12 21:10:13.540695818 +0000 UTC m=+74.615301146 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access") pod "installer-3-master-0" (UID: "222b53b1-7e5c-49d5-9795-fec4d0547398") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 12 21:09:42.108565 master-0 kubenswrapper[31456]: I0312 21:09:42.108499 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57"
Mar 12 21:09:42.110582 master-0 kubenswrapper[31456]: I0312 21:09:42.110548 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-79f8cd6fdd-hsv57"
Mar 12 21:09:46.197500 master-0 kubenswrapper[31456]: I0312 21:09:46.197434 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"]
Mar 12 21:09:46.198649 master-0 kubenswrapper[31456]: I0312 21:09:46.198611 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.201159 master-0 kubenswrapper[31456]: I0312 21:09:46.200954 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Mar 12 21:09:46.202137 master-0 kubenswrapper[31456]: I0312 21:09:46.201888 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Mar 12 21:09:46.202137 master-0 kubenswrapper[31456]: I0312 21:09:46.201946 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Mar 12 21:09:46.202137 master-0 kubenswrapper[31456]: I0312 21:09:46.202016 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Mar 12 21:09:46.202137 master-0 kubenswrapper[31456]: I0312 21:09:46.202111 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Mar 12 21:09:46.202369 master-0 kubenswrapper[31456]: I0312 21:09:46.202291 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Mar 12 21:09:46.203481 master-0 kubenswrapper[31456]: I0312 21:09:46.203095 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-qthpm"
Mar 12 21:09:46.203481 master-0 kubenswrapper[31456]: I0312 21:09:46.203304 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Mar 12 21:09:46.206934 master-0 kubenswrapper[31456]: I0312 21:09:46.204202 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Mar 12 21:09:46.206934 master-0 kubenswrapper[31456]: I0312 21:09:46.204597 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Mar 12 21:09:46.206934 master-0 kubenswrapper[31456]: I0312 21:09:46.205402 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Mar 12 21:09:46.211197 master-0 kubenswrapper[31456]: I0312 21:09:46.210193 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Mar 12 21:09:46.222582 master-0 kubenswrapper[31456]: I0312 21:09:46.222406 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Mar 12 21:09:46.224260 master-0 kubenswrapper[31456]: I0312 21:09:46.224243 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Mar 12 21:09:46.266747 master-0 kubenswrapper[31456]: I0312 21:09:46.266694 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"]
Mar 12 21:09:46.310657 master-0 kubenswrapper[31456]: I0312 21:09:46.310583 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/31d37449-37cc-4fa5-9d69-1c695cd8296f-audit-dir\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.310657 master-0 kubenswrapper[31456]: I0312 21:09:46.310651 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.310912 master-0 kubenswrapper[31456]: I0312 21:09:46.310693 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.310912 master-0 kubenswrapper[31456]: I0312 21:09:46.310738 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-router-certs\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.310912 master-0 kubenswrapper[31456]: I0312 21:09:46.310774 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.310912 master-0 kubenswrapper[31456]: I0312 21:09:46.310795 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.310912 master-0 kubenswrapper[31456]: I0312 21:09:46.310865 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4jnb\" (UniqueName: \"kubernetes.io/projected/31d37449-37cc-4fa5-9d69-1c695cd8296f-kube-api-access-m4jnb\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.310912 master-0 kubenswrapper[31456]: I0312 21:09:46.310906 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-user-template-login\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.311091 master-0 kubenswrapper[31456]: I0312 21:09:46.310944 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.311091 master-0 kubenswrapper[31456]: I0312 21:09:46.310987 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/31d37449-37cc-4fa5-9d69-1c695cd8296f-audit-policies\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.311091 master-0 kubenswrapper[31456]: I0312 21:09:46.311011 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-service-ca\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.311091 master-0 kubenswrapper[31456]: I0312 21:09:46.311036 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-session\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.311091 master-0 kubenswrapper[31456]: I0312 21:09:46.311061 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-user-template-error\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.412382 master-0 kubenswrapper[31456]: I0312 21:09:46.412338 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/31d37449-37cc-4fa5-9d69-1c695cd8296f-audit-policies\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.412643 master-0 kubenswrapper[31456]: I0312 21:09:46.412624 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-service-ca\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.412832 master-0 kubenswrapper[31456]: I0312 21:09:46.412795 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-session\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.412949 master-0 kubenswrapper[31456]: I0312 21:09:46.412930 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-user-template-error\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.413081 master-0 kubenswrapper[31456]: I0312 21:09:46.413063 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/31d37449-37cc-4fa5-9d69-1c695cd8296f-audit-dir\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.413212 master-0 kubenswrapper[31456]: I0312 21:09:46.413187 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/31d37449-37cc-4fa5-9d69-1c695cd8296f-audit-dir\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.413286 master-0 kubenswrapper[31456]: I0312 21:09:46.413238 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.413286 master-0 kubenswrapper[31456]: I0312 21:09:46.413275 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.413367 master-0 kubenswrapper[31456]: I0312 21:09:46.413311 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-router-certs\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.413367 master-0 kubenswrapper[31456]: I0312 21:09:46.413328 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/31d37449-37cc-4fa5-9d69-1c695cd8296f-audit-policies\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.413451 master-0 kubenswrapper[31456]: I0312 21:09:46.413340 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.413451 master-0 kubenswrapper[31456]: I0312 21:09:46.413422 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.413451 master-0 kubenswrapper[31456]: I0312 21:09:46.413447 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4jnb\" (UniqueName: \"kubernetes.io/projected/31d37449-37cc-4fa5-9d69-1c695cd8296f-kube-api-access-m4jnb\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.413573 master-0 kubenswrapper[31456]: I0312 21:09:46.413459 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-service-ca\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.413784 master-0 kubenswrapper[31456]: I0312 21:09:46.413716 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-user-template-login\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.413873 master-0 kubenswrapper[31456]: I0312 21:09:46.413822 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.421327 master-0 kubenswrapper[31456]: I0312 21:09:46.415833 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.421327 master-0 kubenswrapper[31456]: I0312 21:09:46.416245 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-user-template-error\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.421327 master-0 kubenswrapper[31456]: I0312 21:09:46.416537 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.421327 master-0 kubenswrapper[31456]: I0312 21:09:46.416608 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.421327 master-0 kubenswrapper[31456]: I0312 21:09:46.416830 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-session\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.421327 master-0 kubenswrapper[31456]: I0312 21:09:46.417147 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.421327 master-0 kubenswrapper[31456]: I0312 21:09:46.418031 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-user-template-login\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.421327 master-0 kubenswrapper[31456]: I0312 21:09:46.420151 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.428215 master-0 kubenswrapper[31456]: I0312 21:09:46.428162 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-router-certs\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.438231 master-0 kubenswrapper[31456]: I0312 21:09:46.438081 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4jnb\" (UniqueName: \"kubernetes.io/projected/31d37449-37cc-4fa5-9d69-1c695cd8296f-kube-api-access-m4jnb\") pod \"oauth-openshift-6d4996c5bb-r7khh\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:46.529786 master-0 kubenswrapper[31456]: I0312 21:09:46.529630 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:47.048258 master-0 kubenswrapper[31456]: I0312 21:09:47.047970 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"]
Mar 12 21:09:47.815481 master-0 kubenswrapper[31456]: I0312 21:09:47.815418 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh" event={"ID":"31d37449-37cc-4fa5-9d69-1c695cd8296f","Type":"ContainerStarted","Data":"3e45da1d029115a8c7f50b5442b4d5055fbba31f992cde7edad89461ab729749"}
Mar 12 21:09:48.654836 master-0 kubenswrapper[31456]: I0312 21:09:48.654667 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41520992-0499-4a93-bd1c-7814ffb84164-trusted-ca\") pod \"console-operator-6c7fb6b958-2lj8z\" (UID: \"41520992-0499-4a93-bd1c-7814ffb84164\") " pod="openshift-console-operator/console-operator-6c7fb6b958-2lj8z"
Mar 12 21:09:48.655086 master-0 kubenswrapper[31456]: E0312 21:09:48.654923 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41520992-0499-4a93-bd1c-7814ffb84164-trusted-ca podName:41520992-0499-4a93-bd1c-7814ffb84164 nodeName:}" failed. No retries permitted until 2026-03-12 21:10:04.65490111 +0000 UTC m=+65.729506438 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/41520992-0499-4a93-bd1c-7814ffb84164-trusted-ca") pod "console-operator-6c7fb6b958-2lj8z" (UID: "41520992-0499-4a93-bd1c-7814ffb84164") : configmap references non-existent config key: ca-bundle.crt
Mar 12 21:09:49.829013 master-0 kubenswrapper[31456]: I0312 21:09:49.828688 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh" event={"ID":"31d37449-37cc-4fa5-9d69-1c695cd8296f","Type":"ContainerStarted","Data":"9f197affb0ca11b08404a236de4785192662391d22e536ba4bf397ced57de539"}
Mar 12 21:09:49.830029 master-0 kubenswrapper[31456]: I0312 21:09:49.830003 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:49.857016 master-0 kubenswrapper[31456]: I0312 21:09:49.856957 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:09:49.861405 master-0 kubenswrapper[31456]: I0312 21:09:49.861321 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh" podStartSLOduration=1.8100545590000001 podStartE2EDuration="3.861307146s" podCreationTimestamp="2026-03-12 21:09:46 +0000 UTC" firstStartedPulling="2026-03-12 21:09:47.060361703 +0000 UTC m=+48.134967061" lastFinishedPulling="2026-03-12 21:09:49.11161432 +0000 UTC m=+50.186219648" observedRunningTime="2026-03-12 21:09:49.858703612 +0000 UTC m=+50.933308980" watchObservedRunningTime="2026-03-12 21:09:49.861307146 +0000 UTC m=+50.935912474"
Mar 12 21:09:51.808547 master-0 kubenswrapper[31456]: I0312 21:09:51.808505 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"]
Mar 12 21:10:04.717578 master-0 kubenswrapper[31456]: I0312 21:10:04.717509 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41520992-0499-4a93-bd1c-7814ffb84164-trusted-ca\") pod \"console-operator-6c7fb6b958-2lj8z\" (UID: \"41520992-0499-4a93-bd1c-7814ffb84164\") " pod="openshift-console-operator/console-operator-6c7fb6b958-2lj8z"
Mar 12 21:10:04.718252 master-0 kubenswrapper[31456]: E0312 21:10:04.717730 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41520992-0499-4a93-bd1c-7814ffb84164-trusted-ca podName:41520992-0499-4a93-bd1c-7814ffb84164 nodeName:}" failed. No retries permitted until 2026-03-12 21:10:36.717700797 +0000 UTC m=+97.792306155 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/41520992-0499-4a93-bd1c-7814ffb84164-trusted-ca") pod "console-operator-6c7fb6b958-2lj8z" (UID: "41520992-0499-4a93-bd1c-7814ffb84164") : configmap references non-existent config key: ca-bundle.crt
Mar 12 21:10:13.584999 master-0 kubenswrapper[31456]: I0312 21:10:13.584870 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access\") pod \"installer-3-master-0\" (UID: \"222b53b1-7e5c-49d5-9795-fec4d0547398\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 12 21:10:13.586081 master-0 kubenswrapper[31456]: E0312 21:10:13.585170 31456 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 12 21:10:13.586081 master-0 kubenswrapper[31456]: E0312 21:10:13.585225 31456 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 12 21:10:13.586081 master-0 kubenswrapper[31456]: E0312 21:10:13.585319 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access podName:222b53b1-7e5c-49d5-9795-fec4d0547398 nodeName:}" failed. No retries permitted until 2026-03-12 21:11:17.58529238 +0000 UTC m=+138.659897738 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access") pod "installer-3-master-0" (UID: "222b53b1-7e5c-49d5-9795-fec4d0547398") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 12 21:10:16.330578 master-0 kubenswrapper[31456]: I0312 21:10:16.330487 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Mar 12 21:10:16.332770 master-0 kubenswrapper[31456]: I0312 21:10:16.332737 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Mar 12 21:10:16.339838 master-0 kubenswrapper[31456]: I0312 21:10:16.336973 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-v74cb"
Mar 12 21:10:16.339838 master-0 kubenswrapper[31456]: I0312 21:10:16.339523 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 12 21:10:16.350886 master-0 kubenswrapper[31456]: I0312 21:10:16.350785 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Mar 12 21:10:16.437934 master-0 kubenswrapper[31456]: I0312 21:10:16.437841 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f2acf6cf-3f66-48a3-b424-0ecdcfc21146-var-lock\") pod \"installer-4-master-0\" (UID: \"f2acf6cf-3f66-48a3-b424-0ecdcfc21146\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 12 21:10:16.438229 master-0 kubenswrapper[31456]: I0312 21:10:16.437948 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2acf6cf-3f66-48a3-b424-0ecdcfc21146-kube-api-access\") pod \"installer-4-master-0\" (UID: \"f2acf6cf-3f66-48a3-b424-0ecdcfc21146\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 12 21:10:16.438229 master-0 kubenswrapper[31456]: I0312 21:10:16.438028 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2acf6cf-3f66-48a3-b424-0ecdcfc21146-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"f2acf6cf-3f66-48a3-b424-0ecdcfc21146\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 12 21:10:16.540129 master-0 kubenswrapper[31456]: I0312 21:10:16.540061 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f2acf6cf-3f66-48a3-b424-0ecdcfc21146-var-lock\") pod \"installer-4-master-0\" (UID: \"f2acf6cf-3f66-48a3-b424-0ecdcfc21146\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 12 21:10:16.540427 master-0 kubenswrapper[31456]: I0312 21:10:16.540151 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2acf6cf-3f66-48a3-b424-0ecdcfc21146-kube-api-access\") pod \"installer-4-master-0\" (UID: \"f2acf6cf-3f66-48a3-b424-0ecdcfc21146\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 12 21:10:16.540427 master-0 kubenswrapper[31456]: I0312 21:10:16.540223 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2acf6cf-3f66-48a3-b424-0ecdcfc21146-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"f2acf6cf-3f66-48a3-b424-0ecdcfc21146\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 12 21:10:16.540427 master-0 kubenswrapper[31456]: I0312 21:10:16.540263 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f2acf6cf-3f66-48a3-b424-0ecdcfc21146-var-lock\") pod \"installer-4-master-0\" (UID: \"f2acf6cf-3f66-48a3-b424-0ecdcfc21146\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 12 21:10:16.540427 master-0 kubenswrapper[31456]: I0312 21:10:16.540342 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2acf6cf-3f66-48a3-b424-0ecdcfc21146-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"f2acf6cf-3f66-48a3-b424-0ecdcfc21146\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 12 21:10:16.570148 master-0 kubenswrapper[31456]: I0312 21:10:16.570088 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2acf6cf-3f66-48a3-b424-0ecdcfc21146-kube-api-access\") pod \"installer-4-master-0\" (UID: \"f2acf6cf-3f66-48a3-b424-0ecdcfc21146\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 12 21:10:16.675914 master-0 kubenswrapper[31456]: I0312 21:10:16.675694 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Mar 12 21:10:17.198083 master-0 kubenswrapper[31456]: I0312 21:10:17.197878 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Mar 12 21:10:17.886872 master-0 kubenswrapper[31456]: I0312 21:10:17.886562 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh" podUID="31d37449-37cc-4fa5-9d69-1c695cd8296f" containerName="oauth-openshift" containerID="cri-o://9f197affb0ca11b08404a236de4785192662391d22e536ba4bf397ced57de539" gracePeriod=15
Mar 12 21:10:18.062427 master-0 kubenswrapper[31456]: I0312 21:10:18.062338 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"f2acf6cf-3f66-48a3-b424-0ecdcfc21146","Type":"ContainerStarted","Data":"9e1af043aa12da3cbcaf60b93ff0933d2f01ed7323a32f1d50d891b766078ce1"}
Mar 12 21:10:18.062427 master-0 kubenswrapper[31456]: I0312 21:10:18.062428 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"f2acf6cf-3f66-48a3-b424-0ecdcfc21146","Type":"ContainerStarted","Data":"a2cc745482d73b22f7fdc95f60a16c9ce4612d3863485f6ea45b13b9fb9c3930"}
Mar 12 21:10:18.064631 master-0 kubenswrapper[31456]: I0312 21:10:18.064391 31456 generic.go:334] "Generic (PLEG): container finished" podID="31d37449-37cc-4fa5-9d69-1c695cd8296f" containerID="9f197affb0ca11b08404a236de4785192662391d22e536ba4bf397ced57de539" exitCode=0
Mar 12 21:10:18.064631 master-0 kubenswrapper[31456]: I0312 21:10:18.064425 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh" event={"ID":"31d37449-37cc-4fa5-9d69-1c695cd8296f","Type":"ContainerDied","Data":"9f197affb0ca11b08404a236de4785192662391d22e536ba4bf397ced57de539"}
Mar 12 21:10:18.096877 master-0 kubenswrapper[31456]: I0312 21:10:18.094504 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-4-master-0" podStartSLOduration=2.094476106 podStartE2EDuration="2.094476106s" podCreationTimestamp="2026-03-12 21:10:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:10:18.085879617 +0000 UTC m=+79.160484995" watchObservedRunningTime="2026-03-12 21:10:18.094476106 +0000 UTC m=+79.169081474"
Mar 12 21:10:18.495364 master-0 kubenswrapper[31456]: I0312 21:10:18.495290 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"
Mar 12 21:10:18.572257 master-0 kubenswrapper[31456]: I0312 21:10:18.572191 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c"]
Mar 12 21:10:18.572697 master-0 kubenswrapper[31456]: E0312 21:10:18.572670 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31d37449-37cc-4fa5-9d69-1c695cd8296f" containerName="oauth-openshift"
Mar 12 21:10:18.572697 master-0 kubenswrapper[31456]: I0312 21:10:18.572695 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="31d37449-37cc-4fa5-9d69-1c695cd8296f" containerName="oauth-openshift"
Mar 12 21:10:18.572969 master-0 kubenswrapper[31456]: I0312 21:10:18.572948 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="31d37449-37cc-4fa5-9d69-1c695cd8296f" containerName="oauth-openshift"
Mar 12 21:10:18.573453 master-0 kubenswrapper[31456]: I0312 21:10:18.573431 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c"
Mar 12 21:10:18.578359 master-0 kubenswrapper[31456]: I0312 21:10:18.578314 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/31d37449-37cc-4fa5-9d69-1c695cd8296f-audit-policies\") pod \"31d37449-37cc-4fa5-9d69-1c695cd8296f\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") "
Mar 12 21:10:18.578475 master-0 kubenswrapper[31456]: I0312 21:10:18.578390 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-serving-cert\") pod \"31d37449-37cc-4fa5-9d69-1c695cd8296f\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") "
Mar 12 21:10:18.578475 master-0 kubenswrapper[31456]: I0312 21:10:18.578416 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-user-template-error\") pod \"31d37449-37cc-4fa5-9d69-1c695cd8296f\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") "
Mar 12 21:10:18.578475 master-0 kubenswrapper[31456]: I0312 21:10:18.578434 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4jnb\" (UniqueName: \"kubernetes.io/projected/31d37449-37cc-4fa5-9d69-1c695cd8296f-kube-api-access-m4jnb\") pod \"31d37449-37cc-4fa5-9d69-1c695cd8296f\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") "
Mar 12 21:10:18.578475 master-0 kubenswrapper[31456]: I0312 21:10:18.578462 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-router-certs\") pod \"31d37449-37cc-4fa5-9d69-1c695cd8296f\" (UID:
\"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " Mar 12 21:10:18.578696 master-0 kubenswrapper[31456]: I0312 21:10:18.578499 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/31d37449-37cc-4fa5-9d69-1c695cd8296f-audit-dir\") pod \"31d37449-37cc-4fa5-9d69-1c695cd8296f\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " Mar 12 21:10:18.578696 master-0 kubenswrapper[31456]: I0312 21:10:18.578521 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-user-template-provider-selection\") pod \"31d37449-37cc-4fa5-9d69-1c695cd8296f\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " Mar 12 21:10:18.578696 master-0 kubenswrapper[31456]: I0312 21:10:18.578537 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-trusted-ca-bundle\") pod \"31d37449-37cc-4fa5-9d69-1c695cd8296f\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " Mar 12 21:10:18.578696 master-0 kubenswrapper[31456]: I0312 21:10:18.578568 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-session\") pod \"31d37449-37cc-4fa5-9d69-1c695cd8296f\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " Mar 12 21:10:18.578696 master-0 kubenswrapper[31456]: I0312 21:10:18.578591 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-user-template-login\") pod \"31d37449-37cc-4fa5-9d69-1c695cd8296f\" (UID: 
\"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " Mar 12 21:10:18.578696 master-0 kubenswrapper[31456]: I0312 21:10:18.578608 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-service-ca\") pod \"31d37449-37cc-4fa5-9d69-1c695cd8296f\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " Mar 12 21:10:18.578696 master-0 kubenswrapper[31456]: I0312 21:10:18.578633 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-cliconfig\") pod \"31d37449-37cc-4fa5-9d69-1c695cd8296f\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " Mar 12 21:10:18.578696 master-0 kubenswrapper[31456]: I0312 21:10:18.578675 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-ocp-branding-template\") pod \"31d37449-37cc-4fa5-9d69-1c695cd8296f\" (UID: \"31d37449-37cc-4fa5-9d69-1c695cd8296f\") " Mar 12 21:10:18.579128 master-0 kubenswrapper[31456]: I0312 21:10:18.578864 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d37449-37cc-4fa5-9d69-1c695cd8296f-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "31d37449-37cc-4fa5-9d69-1c695cd8296f" (UID: "31d37449-37cc-4fa5-9d69-1c695cd8296f"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:10:18.579128 master-0 kubenswrapper[31456]: I0312 21:10:18.578987 31456 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/31d37449-37cc-4fa5-9d69-1c695cd8296f-audit-policies\") on node \"master-0\" DevicePath \"\"" Mar 12 21:10:18.579507 master-0 kubenswrapper[31456]: I0312 21:10:18.579457 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "31d37449-37cc-4fa5-9d69-1c695cd8296f" (UID: "31d37449-37cc-4fa5-9d69-1c695cd8296f"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:10:18.580190 master-0 kubenswrapper[31456]: I0312 21:10:18.580136 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "31d37449-37cc-4fa5-9d69-1c695cd8296f" (UID: "31d37449-37cc-4fa5-9d69-1c695cd8296f"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:10:18.581348 master-0 kubenswrapper[31456]: I0312 21:10:18.580651 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31d37449-37cc-4fa5-9d69-1c695cd8296f-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "31d37449-37cc-4fa5-9d69-1c695cd8296f" (UID: "31d37449-37cc-4fa5-9d69-1c695cd8296f"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:10:18.581348 master-0 kubenswrapper[31456]: I0312 21:10:18.580698 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "31d37449-37cc-4fa5-9d69-1c695cd8296f" (UID: "31d37449-37cc-4fa5-9d69-1c695cd8296f"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:10:18.581544 master-0 kubenswrapper[31456]: I0312 21:10:18.581496 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "31d37449-37cc-4fa5-9d69-1c695cd8296f" (UID: "31d37449-37cc-4fa5-9d69-1c695cd8296f"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:10:18.582413 master-0 kubenswrapper[31456]: I0312 21:10:18.582379 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "31d37449-37cc-4fa5-9d69-1c695cd8296f" (UID: "31d37449-37cc-4fa5-9d69-1c695cd8296f"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:10:18.583108 master-0 kubenswrapper[31456]: I0312 21:10:18.583052 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d37449-37cc-4fa5-9d69-1c695cd8296f-kube-api-access-m4jnb" (OuterVolumeSpecName: "kube-api-access-m4jnb") pod "31d37449-37cc-4fa5-9d69-1c695cd8296f" (UID: "31d37449-37cc-4fa5-9d69-1c695cd8296f"). 
InnerVolumeSpecName "kube-api-access-m4jnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:10:18.583612 master-0 kubenswrapper[31456]: I0312 21:10:18.583504 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "31d37449-37cc-4fa5-9d69-1c695cd8296f" (UID: "31d37449-37cc-4fa5-9d69-1c695cd8296f"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:10:18.583612 master-0 kubenswrapper[31456]: I0312 21:10:18.583553 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "31d37449-37cc-4fa5-9d69-1c695cd8296f" (UID: "31d37449-37cc-4fa5-9d69-1c695cd8296f"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:10:18.583852 master-0 kubenswrapper[31456]: I0312 21:10:18.583790 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "31d37449-37cc-4fa5-9d69-1c695cd8296f" (UID: "31d37449-37cc-4fa5-9d69-1c695cd8296f"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:10:18.584126 master-0 kubenswrapper[31456]: I0312 21:10:18.584090 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "31d37449-37cc-4fa5-9d69-1c695cd8296f" (UID: "31d37449-37cc-4fa5-9d69-1c695cd8296f"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:10:18.585098 master-0 kubenswrapper[31456]: I0312 21:10:18.585044 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "31d37449-37cc-4fa5-9d69-1c695cd8296f" (UID: "31d37449-37cc-4fa5-9d69-1c695cd8296f"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:10:18.624401 master-0 kubenswrapper[31456]: I0312 21:10:18.624350 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c"] Mar 12 21:10:18.680201 master-0 kubenswrapper[31456]: I0312 21:10:18.680134 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.680201 master-0 kubenswrapper[31456]: I0312 21:10:18.680179 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-user-template-error\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.680455 master-0 kubenswrapper[31456]: I0312 21:10:18.680238 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/739ac366-cbaa-4b39-a525-66c54c3802f0-audit-policies\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.680455 master-0 kubenswrapper[31456]: I0312 21:10:18.680269 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.680455 master-0 kubenswrapper[31456]: I0312 21:10:18.680291 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-router-certs\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.680455 master-0 kubenswrapper[31456]: I0312 21:10:18.680312 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-user-template-login\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.680455 master-0 kubenswrapper[31456]: I0312 21:10:18.680337 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnr4t\" (UniqueName: \"kubernetes.io/projected/739ac366-cbaa-4b39-a525-66c54c3802f0-kube-api-access-rnr4t\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.680455 master-0 kubenswrapper[31456]: I0312 21:10:18.680365 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " 
pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.680455 master-0 kubenswrapper[31456]: I0312 21:10:18.680386 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-session\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.680455 master-0 kubenswrapper[31456]: I0312 21:10:18.680404 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.680455 master-0 kubenswrapper[31456]: I0312 21:10:18.680421 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-service-ca\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.680455 master-0 kubenswrapper[31456]: I0312 21:10:18.680442 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/739ac366-cbaa-4b39-a525-66c54c3802f0-audit-dir\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.680455 master-0 kubenswrapper[31456]: I0312 21:10:18.680458 31456 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.681081 master-0 kubenswrapper[31456]: I0312 21:10:18.680503 31456 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Mar 12 21:10:18.681081 master-0 kubenswrapper[31456]: I0312 21:10:18.680518 31456 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 21:10:18.681081 master-0 kubenswrapper[31456]: I0312 21:10:18.680621 31456 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\"" Mar 12 21:10:18.681081 master-0 kubenswrapper[31456]: I0312 21:10:18.680633 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4jnb\" (UniqueName: \"kubernetes.io/projected/31d37449-37cc-4fa5-9d69-1c695cd8296f-kube-api-access-m4jnb\") on node \"master-0\" DevicePath \"\"" Mar 12 21:10:18.681081 master-0 kubenswrapper[31456]: I0312 21:10:18.680642 31456 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-router-certs\") on node \"master-0\" 
DevicePath \"\"" Mar 12 21:10:18.681081 master-0 kubenswrapper[31456]: I0312 21:10:18.680654 31456 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/31d37449-37cc-4fa5-9d69-1c695cd8296f-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 21:10:18.681081 master-0 kubenswrapper[31456]: I0312 21:10:18.680664 31456 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\"" Mar 12 21:10:18.681081 master-0 kubenswrapper[31456]: I0312 21:10:18.680675 31456 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:10:18.681081 master-0 kubenswrapper[31456]: I0312 21:10:18.680684 31456 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\"" Mar 12 21:10:18.681081 master-0 kubenswrapper[31456]: I0312 21:10:18.680695 31456 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\"" Mar 12 21:10:18.681081 master-0 kubenswrapper[31456]: I0312 21:10:18.680705 31456 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 21:10:18.681081 master-0 kubenswrapper[31456]: I0312 21:10:18.680714 31456 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/31d37449-37cc-4fa5-9d69-1c695cd8296f-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\"" Mar 12 21:10:18.781621 master-0 kubenswrapper[31456]: I0312 21:10:18.781470 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.781621 master-0 kubenswrapper[31456]: I0312 21:10:18.781541 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-user-template-error\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.781885 master-0 kubenswrapper[31456]: I0312 21:10:18.781623 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/739ac366-cbaa-4b39-a525-66c54c3802f0-audit-policies\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.781885 master-0 kubenswrapper[31456]: I0312 21:10:18.781667 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " 
pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.781885 master-0 kubenswrapper[31456]: I0312 21:10:18.781695 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-router-certs\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.781885 master-0 kubenswrapper[31456]: I0312 21:10:18.781723 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-user-template-login\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.781885 master-0 kubenswrapper[31456]: I0312 21:10:18.781749 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnr4t\" (UniqueName: \"kubernetes.io/projected/739ac366-cbaa-4b39-a525-66c54c3802f0-kube-api-access-rnr4t\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.781885 master-0 kubenswrapper[31456]: I0312 21:10:18.781780 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.781885 master-0 kubenswrapper[31456]: I0312 21:10:18.781825 31456 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-session\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.781885 master-0 kubenswrapper[31456]: I0312 21:10:18.781857 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.781885 master-0 kubenswrapper[31456]: I0312 21:10:18.781882 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-service-ca\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.782267 master-0 kubenswrapper[31456]: I0312 21:10:18.781913 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/739ac366-cbaa-4b39-a525-66c54c3802f0-audit-dir\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.782267 master-0 kubenswrapper[31456]: I0312 21:10:18.781937 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.783739 master-0 kubenswrapper[31456]: I0312 21:10:18.783685 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-service-ca\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.784646 master-0 kubenswrapper[31456]: I0312 21:10:18.784601 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.784706 master-0 kubenswrapper[31456]: I0312 21:10:18.784664 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.784839 master-0 kubenswrapper[31456]: I0312 21:10:18.784800 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/739ac366-cbaa-4b39-a525-66c54c3802f0-audit-policies\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " 
pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.785062 master-0 kubenswrapper[31456]: I0312 21:10:18.785004 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/739ac366-cbaa-4b39-a525-66c54c3802f0-audit-dir\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.788097 master-0 kubenswrapper[31456]: I0312 21:10:18.788026 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.788569 master-0 kubenswrapper[31456]: I0312 21:10:18.788518 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-user-template-login\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.788743 master-0 kubenswrapper[31456]: I0312 21:10:18.788717 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.791976 master-0 kubenswrapper[31456]: I0312 21:10:18.789622 31456 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-session\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.791976 master-0 kubenswrapper[31456]: I0312 21:10:18.790112 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.791976 master-0 kubenswrapper[31456]: I0312 21:10:18.790612 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-router-certs\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.792411 master-0 kubenswrapper[31456]: I0312 21:10:18.792257 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-user-template-error\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.811353 master-0 kubenswrapper[31456]: I0312 21:10:18.811272 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnr4t\" (UniqueName: \"kubernetes.io/projected/739ac366-cbaa-4b39-a525-66c54c3802f0-kube-api-access-rnr4t\") pod \"oauth-openshift-6ff7cb97b6-qjc7c\" (UID: 
\"739ac366-cbaa-4b39-a525-66c54c3802f0\") " pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:18.928584 master-0 kubenswrapper[31456]: I0312 21:10:18.928508 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:19.075973 master-0 kubenswrapper[31456]: I0312 21:10:19.075075 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh" Mar 12 21:10:19.075973 master-0 kubenswrapper[31456]: I0312 21:10:19.075914 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6d4996c5bb-r7khh" event={"ID":"31d37449-37cc-4fa5-9d69-1c695cd8296f","Type":"ContainerDied","Data":"3e45da1d029115a8c7f50b5442b4d5055fbba31f992cde7edad89461ab729749"} Mar 12 21:10:19.075973 master-0 kubenswrapper[31456]: I0312 21:10:19.075989 31456 scope.go:117] "RemoveContainer" containerID="9f197affb0ca11b08404a236de4785192662391d22e536ba4bf397ced57de539" Mar 12 21:10:19.109977 master-0 kubenswrapper[31456]: I0312 21:10:19.109938 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"] Mar 12 21:10:19.130313 master-0 kubenswrapper[31456]: I0312 21:10:19.130254 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-6d4996c5bb-r7khh"] Mar 12 21:10:19.179346 master-0 kubenswrapper[31456]: I0312 21:10:19.179285 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d37449-37cc-4fa5-9d69-1c695cd8296f" path="/var/lib/kubelet/pods/31d37449-37cc-4fa5-9d69-1c695cd8296f/volumes" Mar 12 21:10:19.420936 master-0 kubenswrapper[31456]: I0312 21:10:19.419439 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c"] Mar 12 21:10:19.429625 master-0 kubenswrapper[31456]: 
W0312 21:10:19.429565 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod739ac366_cbaa_4b39_a525_66c54c3802f0.slice/crio-e87ef76b4e75a491ec9197f16aee1cbb14aca6be6347f9170f4efa30a562b5cb WatchSource:0}: Error finding container e87ef76b4e75a491ec9197f16aee1cbb14aca6be6347f9170f4efa30a562b5cb: Status 404 returned error can't find the container with id e87ef76b4e75a491ec9197f16aee1cbb14aca6be6347f9170f4efa30a562b5cb Mar 12 21:10:20.086554 master-0 kubenswrapper[31456]: I0312 21:10:20.086447 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" event={"ID":"739ac366-cbaa-4b39-a525-66c54c3802f0","Type":"ContainerStarted","Data":"a7dbff18322dcdecfea58aaa7e321fa66b989f291e83524de7729657bb7e5cfa"} Mar 12 21:10:20.086554 master-0 kubenswrapper[31456]: I0312 21:10:20.086542 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" event={"ID":"739ac366-cbaa-4b39-a525-66c54c3802f0","Type":"ContainerStarted","Data":"e87ef76b4e75a491ec9197f16aee1cbb14aca6be6347f9170f4efa30a562b5cb"} Mar 12 21:10:20.087754 master-0 kubenswrapper[31456]: I0312 21:10:20.087671 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:20.125308 master-0 kubenswrapper[31456]: I0312 21:10:20.125174 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" podStartSLOduration=29.125138362 podStartE2EDuration="29.125138362s" podCreationTimestamp="2026-03-12 21:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:10:20.12465537 +0000 UTC m=+81.199260708" watchObservedRunningTime="2026-03-12 21:10:20.125138362 +0000 UTC 
m=+81.199743730" Mar 12 21:10:20.429121 master-0 kubenswrapper[31456]: I0312 21:10:20.429005 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:10:32.027493 master-0 kubenswrapper[31456]: I0312 21:10:32.027417 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-759579d7c9-wjl25"] Mar 12 21:10:32.028297 master-0 kubenswrapper[31456]: I0312 21:10:32.027727 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" podUID="b50a6106-1112-4a4b-b4ae-933879e12110" containerName="controller-manager" containerID="cri-o://03d26921cb309140d5aa931f200e060cdbfc92a85420edf8e1d33e12c678c87b" gracePeriod=30 Mar 12 21:10:32.056642 master-0 kubenswrapper[31456]: I0312 21:10:32.056580 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg"] Mar 12 21:10:32.056892 master-0 kubenswrapper[31456]: I0312 21:10:32.056832 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg" podUID="d850d441-7505-4e81-b4cf-6e7a9911ae35" containerName="route-controller-manager" containerID="cri-o://2c63b31786f77f93d95548b76a3537893d50bf158aa9c3612aab7c5b5e4a29b8" gracePeriod=30 Mar 12 21:10:32.210602 master-0 kubenswrapper[31456]: I0312 21:10:32.210541 31456 generic.go:334] "Generic (PLEG): container finished" podID="b50a6106-1112-4a4b-b4ae-933879e12110" containerID="03d26921cb309140d5aa931f200e060cdbfc92a85420edf8e1d33e12c678c87b" exitCode=0 Mar 12 21:10:32.210801 master-0 kubenswrapper[31456]: I0312 21:10:32.210633 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" 
event={"ID":"b50a6106-1112-4a4b-b4ae-933879e12110","Type":"ContainerDied","Data":"03d26921cb309140d5aa931f200e060cdbfc92a85420edf8e1d33e12c678c87b"} Mar 12 21:10:32.210801 master-0 kubenswrapper[31456]: I0312 21:10:32.210675 31456 scope.go:117] "RemoveContainer" containerID="8dc00850a2298439a85382d76a3ffd123f490ec7c79324ad9a9c72fd9448c30b" Mar 12 21:10:32.212830 master-0 kubenswrapper[31456]: I0312 21:10:32.212787 31456 generic.go:334] "Generic (PLEG): container finished" podID="d850d441-7505-4e81-b4cf-6e7a9911ae35" containerID="2c63b31786f77f93d95548b76a3537893d50bf158aa9c3612aab7c5b5e4a29b8" exitCode=0 Mar 12 21:10:32.212871 master-0 kubenswrapper[31456]: I0312 21:10:32.212846 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg" event={"ID":"d850d441-7505-4e81-b4cf-6e7a9911ae35","Type":"ContainerDied","Data":"2c63b31786f77f93d95548b76a3537893d50bf158aa9c3612aab7c5b5e4a29b8"} Mar 12 21:10:32.724689 master-0 kubenswrapper[31456]: I0312 21:10:32.724641 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg" Mar 12 21:10:32.855232 master-0 kubenswrapper[31456]: I0312 21:10:32.855167 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d850d441-7505-4e81-b4cf-6e7a9911ae35-config\") pod \"d850d441-7505-4e81-b4cf-6e7a9911ae35\" (UID: \"d850d441-7505-4e81-b4cf-6e7a9911ae35\") " Mar 12 21:10:32.855455 master-0 kubenswrapper[31456]: I0312 21:10:32.855274 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d850d441-7505-4e81-b4cf-6e7a9911ae35-client-ca\") pod \"d850d441-7505-4e81-b4cf-6e7a9911ae35\" (UID: \"d850d441-7505-4e81-b4cf-6e7a9911ae35\") " Mar 12 21:10:32.855455 master-0 kubenswrapper[31456]: I0312 21:10:32.855302 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2mk7\" (UniqueName: \"kubernetes.io/projected/d850d441-7505-4e81-b4cf-6e7a9911ae35-kube-api-access-f2mk7\") pod \"d850d441-7505-4e81-b4cf-6e7a9911ae35\" (UID: \"d850d441-7505-4e81-b4cf-6e7a9911ae35\") " Mar 12 21:10:32.855455 master-0 kubenswrapper[31456]: I0312 21:10:32.855342 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d850d441-7505-4e81-b4cf-6e7a9911ae35-serving-cert\") pod \"d850d441-7505-4e81-b4cf-6e7a9911ae35\" (UID: \"d850d441-7505-4e81-b4cf-6e7a9911ae35\") " Mar 12 21:10:32.856322 master-0 kubenswrapper[31456]: I0312 21:10:32.856274 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d850d441-7505-4e81-b4cf-6e7a9911ae35-client-ca" (OuterVolumeSpecName: "client-ca") pod "d850d441-7505-4e81-b4cf-6e7a9911ae35" (UID: "d850d441-7505-4e81-b4cf-6e7a9911ae35"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:10:32.856633 master-0 kubenswrapper[31456]: I0312 21:10:32.856607 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d850d441-7505-4e81-b4cf-6e7a9911ae35-config" (OuterVolumeSpecName: "config") pod "d850d441-7505-4e81-b4cf-6e7a9911ae35" (UID: "d850d441-7505-4e81-b4cf-6e7a9911ae35"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:10:32.859529 master-0 kubenswrapper[31456]: I0312 21:10:32.859463 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d850d441-7505-4e81-b4cf-6e7a9911ae35-kube-api-access-f2mk7" (OuterVolumeSpecName: "kube-api-access-f2mk7") pod "d850d441-7505-4e81-b4cf-6e7a9911ae35" (UID: "d850d441-7505-4e81-b4cf-6e7a9911ae35"). InnerVolumeSpecName "kube-api-access-f2mk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:10:32.859992 master-0 kubenswrapper[31456]: I0312 21:10:32.859955 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d850d441-7505-4e81-b4cf-6e7a9911ae35-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d850d441-7505-4e81-b4cf-6e7a9911ae35" (UID: "d850d441-7505-4e81-b4cf-6e7a9911ae35"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:10:32.906603 master-0 kubenswrapper[31456]: I0312 21:10:32.906558 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" Mar 12 21:10:32.957727 master-0 kubenswrapper[31456]: I0312 21:10:32.957596 31456 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d850d441-7505-4e81-b4cf-6e7a9911ae35-config\") on node \"master-0\" DevicePath \"\"" Mar 12 21:10:32.957727 master-0 kubenswrapper[31456]: I0312 21:10:32.957653 31456 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d850d441-7505-4e81-b4cf-6e7a9911ae35-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 21:10:32.957727 master-0 kubenswrapper[31456]: I0312 21:10:32.957665 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2mk7\" (UniqueName: \"kubernetes.io/projected/d850d441-7505-4e81-b4cf-6e7a9911ae35-kube-api-access-f2mk7\") on node \"master-0\" DevicePath \"\"" Mar 12 21:10:32.957727 master-0 kubenswrapper[31456]: I0312 21:10:32.957681 31456 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d850d441-7505-4e81-b4cf-6e7a9911ae35-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 21:10:33.058427 master-0 kubenswrapper[31456]: I0312 21:10:33.058333 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-config\") pod \"b50a6106-1112-4a4b-b4ae-933879e12110\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " Mar 12 21:10:33.059205 master-0 kubenswrapper[31456]: I0312 21:10:33.058441 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcjsq\" (UniqueName: \"kubernetes.io/projected/b50a6106-1112-4a4b-b4ae-933879e12110-kube-api-access-bcjsq\") pod \"b50a6106-1112-4a4b-b4ae-933879e12110\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " Mar 12 21:10:33.059205 master-0 
kubenswrapper[31456]: I0312 21:10:33.058587 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b50a6106-1112-4a4b-b4ae-933879e12110-serving-cert\") pod \"b50a6106-1112-4a4b-b4ae-933879e12110\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " Mar 12 21:10:33.059205 master-0 kubenswrapper[31456]: I0312 21:10:33.058639 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-client-ca\") pod \"b50a6106-1112-4a4b-b4ae-933879e12110\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " Mar 12 21:10:33.059205 master-0 kubenswrapper[31456]: I0312 21:10:33.058680 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-proxy-ca-bundles\") pod \"b50a6106-1112-4a4b-b4ae-933879e12110\" (UID: \"b50a6106-1112-4a4b-b4ae-933879e12110\") " Mar 12 21:10:33.059536 master-0 kubenswrapper[31456]: I0312 21:10:33.059184 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-config" (OuterVolumeSpecName: "config") pod "b50a6106-1112-4a4b-b4ae-933879e12110" (UID: "b50a6106-1112-4a4b-b4ae-933879e12110"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:10:33.059536 master-0 kubenswrapper[31456]: I0312 21:10:33.059397 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b50a6106-1112-4a4b-b4ae-933879e12110" (UID: "b50a6106-1112-4a4b-b4ae-933879e12110"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:10:33.059675 master-0 kubenswrapper[31456]: I0312 21:10:33.059551 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-client-ca" (OuterVolumeSpecName: "client-ca") pod "b50a6106-1112-4a4b-b4ae-933879e12110" (UID: "b50a6106-1112-4a4b-b4ae-933879e12110"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:10:33.061095 master-0 kubenswrapper[31456]: I0312 21:10:33.061045 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b50a6106-1112-4a4b-b4ae-933879e12110-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b50a6106-1112-4a4b-b4ae-933879e12110" (UID: "b50a6106-1112-4a4b-b4ae-933879e12110"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:10:33.063260 master-0 kubenswrapper[31456]: I0312 21:10:33.063189 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b50a6106-1112-4a4b-b4ae-933879e12110-kube-api-access-bcjsq" (OuterVolumeSpecName: "kube-api-access-bcjsq") pod "b50a6106-1112-4a4b-b4ae-933879e12110" (UID: "b50a6106-1112-4a4b-b4ae-933879e12110"). InnerVolumeSpecName "kube-api-access-bcjsq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:10:33.160694 master-0 kubenswrapper[31456]: I0312 21:10:33.160662 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcjsq\" (UniqueName: \"kubernetes.io/projected/b50a6106-1112-4a4b-b4ae-933879e12110-kube-api-access-bcjsq\") on node \"master-0\" DevicePath \"\"" Mar 12 21:10:33.160942 master-0 kubenswrapper[31456]: I0312 21:10:33.160931 31456 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b50a6106-1112-4a4b-b4ae-933879e12110-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 21:10:33.161021 master-0 kubenswrapper[31456]: I0312 21:10:33.161007 31456 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 21:10:33.161091 master-0 kubenswrapper[31456]: I0312 21:10:33.161082 31456 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 12 21:10:33.161154 master-0 kubenswrapper[31456]: I0312 21:10:33.161145 31456 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b50a6106-1112-4a4b-b4ae-933879e12110-config\") on node \"master-0\" DevicePath \"\"" Mar 12 21:10:33.223421 master-0 kubenswrapper[31456]: I0312 21:10:33.223328 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg" event={"ID":"d850d441-7505-4e81-b4cf-6e7a9911ae35","Type":"ContainerDied","Data":"b9e3c21b0a8fb441272236b28d851d401b15830eadb4fa9c4634ebc7e46a4354"} Mar 12 21:10:33.223541 master-0 kubenswrapper[31456]: I0312 21:10:33.223430 31456 scope.go:117] "RemoveContainer" 
containerID="2c63b31786f77f93d95548b76a3537893d50bf158aa9c3612aab7c5b5e4a29b8" Mar 12 21:10:33.223581 master-0 kubenswrapper[31456]: I0312 21:10:33.223426 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg" Mar 12 21:10:33.226393 master-0 kubenswrapper[31456]: I0312 21:10:33.226352 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" event={"ID":"b50a6106-1112-4a4b-b4ae-933879e12110","Type":"ContainerDied","Data":"41cf73b537e290a684ef705b807efabb2227fb4edc604539b559ade7d235fcf5"} Mar 12 21:10:33.226551 master-0 kubenswrapper[31456]: I0312 21:10:33.226425 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-759579d7c9-wjl25" Mar 12 21:10:33.245994 master-0 kubenswrapper[31456]: I0312 21:10:33.245961 31456 scope.go:117] "RemoveContainer" containerID="03d26921cb309140d5aa931f200e060cdbfc92a85420edf8e1d33e12c678c87b" Mar 12 21:10:33.251774 master-0 kubenswrapper[31456]: I0312 21:10:33.251722 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg"] Mar 12 21:10:33.263290 master-0 kubenswrapper[31456]: I0312 21:10:33.263233 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8467b998d8-l9fvg"] Mar 12 21:10:33.278922 master-0 kubenswrapper[31456]: I0312 21:10:33.278887 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-759579d7c9-wjl25"] Mar 12 21:10:33.282347 master-0 kubenswrapper[31456]: I0312 21:10:33.282292 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-759579d7c9-wjl25"] Mar 12 21:10:33.863653 master-0 kubenswrapper[31456]: I0312 
21:10:33.863567 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5ffd54cbbd-gzgqg"] Mar 12 21:10:33.864080 master-0 kubenswrapper[31456]: E0312 21:10:33.864045 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d850d441-7505-4e81-b4cf-6e7a9911ae35" containerName="route-controller-manager" Mar 12 21:10:33.864080 master-0 kubenswrapper[31456]: I0312 21:10:33.864074 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="d850d441-7505-4e81-b4cf-6e7a9911ae35" containerName="route-controller-manager" Mar 12 21:10:33.864249 master-0 kubenswrapper[31456]: E0312 21:10:33.864101 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b50a6106-1112-4a4b-b4ae-933879e12110" containerName="controller-manager" Mar 12 21:10:33.864249 master-0 kubenswrapper[31456]: I0312 21:10:33.864115 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="b50a6106-1112-4a4b-b4ae-933879e12110" containerName="controller-manager" Mar 12 21:10:33.864249 master-0 kubenswrapper[31456]: E0312 21:10:33.864150 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b50a6106-1112-4a4b-b4ae-933879e12110" containerName="controller-manager" Mar 12 21:10:33.864249 master-0 kubenswrapper[31456]: I0312 21:10:33.864164 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="b50a6106-1112-4a4b-b4ae-933879e12110" containerName="controller-manager" Mar 12 21:10:33.864479 master-0 kubenswrapper[31456]: I0312 21:10:33.864371 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="b50a6106-1112-4a4b-b4ae-933879e12110" containerName="controller-manager" Mar 12 21:10:33.864479 master-0 kubenswrapper[31456]: I0312 21:10:33.864414 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="b50a6106-1112-4a4b-b4ae-933879e12110" containerName="controller-manager" Mar 12 21:10:33.864479 master-0 kubenswrapper[31456]: I0312 21:10:33.864453 31456 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="d850d441-7505-4e81-b4cf-6e7a9911ae35" containerName="route-controller-manager" Mar 12 21:10:33.865204 master-0 kubenswrapper[31456]: I0312 21:10:33.865162 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5ffd54cbbd-gzgqg" Mar 12 21:10:33.866976 master-0 kubenswrapper[31456]: I0312 21:10:33.866908 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-787fd5dbb4-tnk75"] Mar 12 21:10:33.868501 master-0 kubenswrapper[31456]: I0312 21:10:33.868431 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-787fd5dbb4-tnk75" Mar 12 21:10:33.874361 master-0 kubenswrapper[31456]: I0312 21:10:33.874282 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 21:10:33.874637 master-0 kubenswrapper[31456]: I0312 21:10:33.874568 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 21:10:33.875141 master-0 kubenswrapper[31456]: I0312 21:10:33.875073 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 21:10:33.875476 master-0 kubenswrapper[31456]: I0312 21:10:33.875433 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 21:10:33.875687 master-0 kubenswrapper[31456]: I0312 21:10:33.875612 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-f29rj" Mar 12 21:10:33.875769 master-0 kubenswrapper[31456]: I0312 21:10:33.875639 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 
12 21:10:33.875769 master-0 kubenswrapper[31456]: I0312 21:10:33.875750 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 21:10:33.876178 master-0 kubenswrapper[31456]: I0312 21:10:33.876117 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 21:10:33.876617 master-0 kubenswrapper[31456]: I0312 21:10:33.876578 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 21:10:33.876891 master-0 kubenswrapper[31456]: I0312 21:10:33.876855 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 21:10:33.877087 master-0 kubenswrapper[31456]: I0312 21:10:33.877046 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-7gthf" Mar 12 21:10:33.877451 master-0 kubenswrapper[31456]: I0312 21:10:33.877409 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 21:10:33.896651 master-0 kubenswrapper[31456]: I0312 21:10:33.896595 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 21:10:33.932275 master-0 kubenswrapper[31456]: I0312 21:10:33.932226 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-787fd5dbb4-tnk75"] Mar 12 21:10:33.935414 master-0 kubenswrapper[31456]: I0312 21:10:33.935353 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5ffd54cbbd-gzgqg"] Mar 12 21:10:33.976711 master-0 kubenswrapper[31456]: I0312 21:10:33.976660 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-rqdx2\" (UniqueName: \"kubernetes.io/projected/ae51077e-9011-4dcf-8a0b-059a27ef8c2f-kube-api-access-rqdx2\") pod \"route-controller-manager-787fd5dbb4-tnk75\" (UID: \"ae51077e-9011-4dcf-8a0b-059a27ef8c2f\") " pod="openshift-route-controller-manager/route-controller-manager-787fd5dbb4-tnk75" Mar 12 21:10:33.976951 master-0 kubenswrapper[31456]: I0312 21:10:33.976728 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ea65fd7d-d9be-4b1e-b127-bef18553e713-proxy-ca-bundles\") pod \"controller-manager-5ffd54cbbd-gzgqg\" (UID: \"ea65fd7d-d9be-4b1e-b127-bef18553e713\") " pod="openshift-controller-manager/controller-manager-5ffd54cbbd-gzgqg" Mar 12 21:10:33.976951 master-0 kubenswrapper[31456]: I0312 21:10:33.976766 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae51077e-9011-4dcf-8a0b-059a27ef8c2f-serving-cert\") pod \"route-controller-manager-787fd5dbb4-tnk75\" (UID: \"ae51077e-9011-4dcf-8a0b-059a27ef8c2f\") " pod="openshift-route-controller-manager/route-controller-manager-787fd5dbb4-tnk75" Mar 12 21:10:33.976951 master-0 kubenswrapper[31456]: I0312 21:10:33.976792 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae51077e-9011-4dcf-8a0b-059a27ef8c2f-config\") pod \"route-controller-manager-787fd5dbb4-tnk75\" (UID: \"ae51077e-9011-4dcf-8a0b-059a27ef8c2f\") " pod="openshift-route-controller-manager/route-controller-manager-787fd5dbb4-tnk75" Mar 12 21:10:33.976951 master-0 kubenswrapper[31456]: I0312 21:10:33.976846 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea65fd7d-d9be-4b1e-b127-bef18553e713-config\") pod 
\"controller-manager-5ffd54cbbd-gzgqg\" (UID: \"ea65fd7d-d9be-4b1e-b127-bef18553e713\") " pod="openshift-controller-manager/controller-manager-5ffd54cbbd-gzgqg" Mar 12 21:10:33.976951 master-0 kubenswrapper[31456]: I0312 21:10:33.976888 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zv8c\" (UniqueName: \"kubernetes.io/projected/ea65fd7d-d9be-4b1e-b127-bef18553e713-kube-api-access-8zv8c\") pod \"controller-manager-5ffd54cbbd-gzgqg\" (UID: \"ea65fd7d-d9be-4b1e-b127-bef18553e713\") " pod="openshift-controller-manager/controller-manager-5ffd54cbbd-gzgqg" Mar 12 21:10:33.977215 master-0 kubenswrapper[31456]: I0312 21:10:33.977021 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea65fd7d-d9be-4b1e-b127-bef18553e713-client-ca\") pod \"controller-manager-5ffd54cbbd-gzgqg\" (UID: \"ea65fd7d-d9be-4b1e-b127-bef18553e713\") " pod="openshift-controller-manager/controller-manager-5ffd54cbbd-gzgqg" Mar 12 21:10:33.977215 master-0 kubenswrapper[31456]: I0312 21:10:33.977062 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea65fd7d-d9be-4b1e-b127-bef18553e713-serving-cert\") pod \"controller-manager-5ffd54cbbd-gzgqg\" (UID: \"ea65fd7d-d9be-4b1e-b127-bef18553e713\") " pod="openshift-controller-manager/controller-manager-5ffd54cbbd-gzgqg" Mar 12 21:10:33.977215 master-0 kubenswrapper[31456]: I0312 21:10:33.977197 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ae51077e-9011-4dcf-8a0b-059a27ef8c2f-client-ca\") pod \"route-controller-manager-787fd5dbb4-tnk75\" (UID: \"ae51077e-9011-4dcf-8a0b-059a27ef8c2f\") " pod="openshift-route-controller-manager/route-controller-manager-787fd5dbb4-tnk75" Mar 12 
21:10:34.078472 master-0 kubenswrapper[31456]: I0312 21:10:34.078370 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea65fd7d-d9be-4b1e-b127-bef18553e713-serving-cert\") pod \"controller-manager-5ffd54cbbd-gzgqg\" (UID: \"ea65fd7d-d9be-4b1e-b127-bef18553e713\") " pod="openshift-controller-manager/controller-manager-5ffd54cbbd-gzgqg" Mar 12 21:10:34.079472 master-0 kubenswrapper[31456]: I0312 21:10:34.078499 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ae51077e-9011-4dcf-8a0b-059a27ef8c2f-client-ca\") pod \"route-controller-manager-787fd5dbb4-tnk75\" (UID: \"ae51077e-9011-4dcf-8a0b-059a27ef8c2f\") " pod="openshift-route-controller-manager/route-controller-manager-787fd5dbb4-tnk75" Mar 12 21:10:34.079472 master-0 kubenswrapper[31456]: I0312 21:10:34.078610 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqdx2\" (UniqueName: \"kubernetes.io/projected/ae51077e-9011-4dcf-8a0b-059a27ef8c2f-kube-api-access-rqdx2\") pod \"route-controller-manager-787fd5dbb4-tnk75\" (UID: \"ae51077e-9011-4dcf-8a0b-059a27ef8c2f\") " pod="openshift-route-controller-manager/route-controller-manager-787fd5dbb4-tnk75" Mar 12 21:10:34.079472 master-0 kubenswrapper[31456]: I0312 21:10:34.078659 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ea65fd7d-d9be-4b1e-b127-bef18553e713-proxy-ca-bundles\") pod \"controller-manager-5ffd54cbbd-gzgqg\" (UID: \"ea65fd7d-d9be-4b1e-b127-bef18553e713\") " pod="openshift-controller-manager/controller-manager-5ffd54cbbd-gzgqg" Mar 12 21:10:34.079472 master-0 kubenswrapper[31456]: I0312 21:10:34.078699 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ae51077e-9011-4dcf-8a0b-059a27ef8c2f-serving-cert\") pod \"route-controller-manager-787fd5dbb4-tnk75\" (UID: \"ae51077e-9011-4dcf-8a0b-059a27ef8c2f\") " pod="openshift-route-controller-manager/route-controller-manager-787fd5dbb4-tnk75" Mar 12 21:10:34.079472 master-0 kubenswrapper[31456]: I0312 21:10:34.078735 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea65fd7d-d9be-4b1e-b127-bef18553e713-config\") pod \"controller-manager-5ffd54cbbd-gzgqg\" (UID: \"ea65fd7d-d9be-4b1e-b127-bef18553e713\") " pod="openshift-controller-manager/controller-manager-5ffd54cbbd-gzgqg" Mar 12 21:10:34.079472 master-0 kubenswrapper[31456]: I0312 21:10:34.078764 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae51077e-9011-4dcf-8a0b-059a27ef8c2f-config\") pod \"route-controller-manager-787fd5dbb4-tnk75\" (UID: \"ae51077e-9011-4dcf-8a0b-059a27ef8c2f\") " pod="openshift-route-controller-manager/route-controller-manager-787fd5dbb4-tnk75" Mar 12 21:10:34.079472 master-0 kubenswrapper[31456]: I0312 21:10:34.078900 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zv8c\" (UniqueName: \"kubernetes.io/projected/ea65fd7d-d9be-4b1e-b127-bef18553e713-kube-api-access-8zv8c\") pod \"controller-manager-5ffd54cbbd-gzgqg\" (UID: \"ea65fd7d-d9be-4b1e-b127-bef18553e713\") " pod="openshift-controller-manager/controller-manager-5ffd54cbbd-gzgqg" Mar 12 21:10:34.079472 master-0 kubenswrapper[31456]: I0312 21:10:34.078991 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea65fd7d-d9be-4b1e-b127-bef18553e713-client-ca\") pod \"controller-manager-5ffd54cbbd-gzgqg\" (UID: \"ea65fd7d-d9be-4b1e-b127-bef18553e713\") " pod="openshift-controller-manager/controller-manager-5ffd54cbbd-gzgqg" Mar 12 
21:10:34.080906 master-0 kubenswrapper[31456]: I0312 21:10:34.080794 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea65fd7d-d9be-4b1e-b127-bef18553e713-client-ca\") pod \"controller-manager-5ffd54cbbd-gzgqg\" (UID: \"ea65fd7d-d9be-4b1e-b127-bef18553e713\") " pod="openshift-controller-manager/controller-manager-5ffd54cbbd-gzgqg" Mar 12 21:10:34.082320 master-0 kubenswrapper[31456]: I0312 21:10:34.082248 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ae51077e-9011-4dcf-8a0b-059a27ef8c2f-client-ca\") pod \"route-controller-manager-787fd5dbb4-tnk75\" (UID: \"ae51077e-9011-4dcf-8a0b-059a27ef8c2f\") " pod="openshift-route-controller-manager/route-controller-manager-787fd5dbb4-tnk75" Mar 12 21:10:34.083734 master-0 kubenswrapper[31456]: I0312 21:10:34.083060 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea65fd7d-d9be-4b1e-b127-bef18553e713-config\") pod \"controller-manager-5ffd54cbbd-gzgqg\" (UID: \"ea65fd7d-d9be-4b1e-b127-bef18553e713\") " pod="openshift-controller-manager/controller-manager-5ffd54cbbd-gzgqg" Mar 12 21:10:34.084332 master-0 kubenswrapper[31456]: I0312 21:10:34.084266 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ea65fd7d-d9be-4b1e-b127-bef18553e713-proxy-ca-bundles\") pod \"controller-manager-5ffd54cbbd-gzgqg\" (UID: \"ea65fd7d-d9be-4b1e-b127-bef18553e713\") " pod="openshift-controller-manager/controller-manager-5ffd54cbbd-gzgqg" Mar 12 21:10:34.084440 master-0 kubenswrapper[31456]: I0312 21:10:34.084407 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae51077e-9011-4dcf-8a0b-059a27ef8c2f-config\") pod \"route-controller-manager-787fd5dbb4-tnk75\" (UID: 
\"ae51077e-9011-4dcf-8a0b-059a27ef8c2f\") " pod="openshift-route-controller-manager/route-controller-manager-787fd5dbb4-tnk75" Mar 12 21:10:34.088001 master-0 kubenswrapper[31456]: I0312 21:10:34.087069 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea65fd7d-d9be-4b1e-b127-bef18553e713-serving-cert\") pod \"controller-manager-5ffd54cbbd-gzgqg\" (UID: \"ea65fd7d-d9be-4b1e-b127-bef18553e713\") " pod="openshift-controller-manager/controller-manager-5ffd54cbbd-gzgqg" Mar 12 21:10:34.088986 master-0 kubenswrapper[31456]: I0312 21:10:34.088888 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae51077e-9011-4dcf-8a0b-059a27ef8c2f-serving-cert\") pod \"route-controller-manager-787fd5dbb4-tnk75\" (UID: \"ae51077e-9011-4dcf-8a0b-059a27ef8c2f\") " pod="openshift-route-controller-manager/route-controller-manager-787fd5dbb4-tnk75" Mar 12 21:10:34.110195 master-0 kubenswrapper[31456]: I0312 21:10:34.110117 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqdx2\" (UniqueName: \"kubernetes.io/projected/ae51077e-9011-4dcf-8a0b-059a27ef8c2f-kube-api-access-rqdx2\") pod \"route-controller-manager-787fd5dbb4-tnk75\" (UID: \"ae51077e-9011-4dcf-8a0b-059a27ef8c2f\") " pod="openshift-route-controller-manager/route-controller-manager-787fd5dbb4-tnk75" Mar 12 21:10:34.111660 master-0 kubenswrapper[31456]: I0312 21:10:34.111604 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zv8c\" (UniqueName: \"kubernetes.io/projected/ea65fd7d-d9be-4b1e-b127-bef18553e713-kube-api-access-8zv8c\") pod \"controller-manager-5ffd54cbbd-gzgqg\" (UID: \"ea65fd7d-d9be-4b1e-b127-bef18553e713\") " pod="openshift-controller-manager/controller-manager-5ffd54cbbd-gzgqg" Mar 12 21:10:34.253010 master-0 kubenswrapper[31456]: I0312 21:10:34.252795 31456 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5ffd54cbbd-gzgqg" Mar 12 21:10:34.268414 master-0 kubenswrapper[31456]: I0312 21:10:34.268340 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-787fd5dbb4-tnk75" Mar 12 21:10:34.801177 master-0 kubenswrapper[31456]: I0312 21:10:34.801121 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5ffd54cbbd-gzgqg"] Mar 12 21:10:34.801997 master-0 kubenswrapper[31456]: W0312 21:10:34.801948 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea65fd7d_d9be_4b1e_b127_bef18553e713.slice/crio-a20dfce66d47a20d35a1835c0de8cdb72009eaf45d3359a3e2f3763037769e9b WatchSource:0}: Error finding container a20dfce66d47a20d35a1835c0de8cdb72009eaf45d3359a3e2f3763037769e9b: Status 404 returned error can't find the container with id a20dfce66d47a20d35a1835c0de8cdb72009eaf45d3359a3e2f3763037769e9b Mar 12 21:10:34.883770 master-0 kubenswrapper[31456]: I0312 21:10:34.883700 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-787fd5dbb4-tnk75"] Mar 12 21:10:34.889800 master-0 kubenswrapper[31456]: W0312 21:10:34.889731 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae51077e_9011_4dcf_8a0b_059a27ef8c2f.slice/crio-40001f09f2400f7c74d57cfcd8373585f062c07bcfa54e7839329d37c1fb13de WatchSource:0}: Error finding container 40001f09f2400f7c74d57cfcd8373585f062c07bcfa54e7839329d37c1fb13de: Status 404 returned error can't find the container with id 40001f09f2400f7c74d57cfcd8373585f062c07bcfa54e7839329d37c1fb13de Mar 12 21:10:35.182755 master-0 kubenswrapper[31456]: I0312 21:10:35.182630 31456 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="b50a6106-1112-4a4b-b4ae-933879e12110" path="/var/lib/kubelet/pods/b50a6106-1112-4a4b-b4ae-933879e12110/volumes" Mar 12 21:10:35.183362 master-0 kubenswrapper[31456]: I0312 21:10:35.183239 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d850d441-7505-4e81-b4cf-6e7a9911ae35" path="/var/lib/kubelet/pods/d850d441-7505-4e81-b4cf-6e7a9911ae35/volumes" Mar 12 21:10:35.251457 master-0 kubenswrapper[31456]: I0312 21:10:35.251388 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5ffd54cbbd-gzgqg" event={"ID":"ea65fd7d-d9be-4b1e-b127-bef18553e713","Type":"ContainerStarted","Data":"4f46c0fb1c077fe952263be96ceeccbb09be419dc41fd06776fec3d7f60f32cd"} Mar 12 21:10:35.251457 master-0 kubenswrapper[31456]: I0312 21:10:35.251457 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5ffd54cbbd-gzgqg" event={"ID":"ea65fd7d-d9be-4b1e-b127-bef18553e713","Type":"ContainerStarted","Data":"a20dfce66d47a20d35a1835c0de8cdb72009eaf45d3359a3e2f3763037769e9b"} Mar 12 21:10:35.251926 master-0 kubenswrapper[31456]: I0312 21:10:35.251888 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5ffd54cbbd-gzgqg" Mar 12 21:10:35.253720 master-0 kubenswrapper[31456]: I0312 21:10:35.253685 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-787fd5dbb4-tnk75" event={"ID":"ae51077e-9011-4dcf-8a0b-059a27ef8c2f","Type":"ContainerStarted","Data":"28d90a00fadf3f658ad028561d29b38bd495b9795ef7fb762f34dc9778b7c291"} Mar 12 21:10:35.253820 master-0 kubenswrapper[31456]: I0312 21:10:35.253721 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-787fd5dbb4-tnk75" 
event={"ID":"ae51077e-9011-4dcf-8a0b-059a27ef8c2f","Type":"ContainerStarted","Data":"40001f09f2400f7c74d57cfcd8373585f062c07bcfa54e7839329d37c1fb13de"} Mar 12 21:10:35.254008 master-0 kubenswrapper[31456]: I0312 21:10:35.253965 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-787fd5dbb4-tnk75" Mar 12 21:10:35.267245 master-0 kubenswrapper[31456]: I0312 21:10:35.267184 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5ffd54cbbd-gzgqg" Mar 12 21:10:35.303896 master-0 kubenswrapper[31456]: I0312 21:10:35.292967 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5ffd54cbbd-gzgqg" podStartSLOduration=3.292942278 podStartE2EDuration="3.292942278s" podCreationTimestamp="2026-03-12 21:10:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:10:35.272097021 +0000 UTC m=+96.346702379" watchObservedRunningTime="2026-03-12 21:10:35.292942278 +0000 UTC m=+96.367547616" Mar 12 21:10:35.336437 master-0 kubenswrapper[31456]: I0312 21:10:35.336352 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-787fd5dbb4-tnk75" podStartSLOduration=3.336333064 podStartE2EDuration="3.336333064s" podCreationTimestamp="2026-03-12 21:10:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:10:35.31400961 +0000 UTC m=+96.388614938" watchObservedRunningTime="2026-03-12 21:10:35.336333064 +0000 UTC m=+96.410938392" Mar 12 21:10:35.667368 master-0 kubenswrapper[31456]: I0312 21:10:35.667277 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-787fd5dbb4-tnk75" Mar 12 21:10:36.742352 master-0 kubenswrapper[31456]: I0312 21:10:36.742214 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41520992-0499-4a93-bd1c-7814ffb84164-trusted-ca\") pod \"console-operator-6c7fb6b958-2lj8z\" (UID: \"41520992-0499-4a93-bd1c-7814ffb84164\") " pod="openshift-console-operator/console-operator-6c7fb6b958-2lj8z" Mar 12 21:10:36.743316 master-0 kubenswrapper[31456]: E0312 21:10:36.742545 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41520992-0499-4a93-bd1c-7814ffb84164-trusted-ca podName:41520992-0499-4a93-bd1c-7814ffb84164 nodeName:}" failed. No retries permitted until 2026-03-12 21:11:40.742503448 +0000 UTC m=+161.817108806 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/41520992-0499-4a93-bd1c-7814ffb84164-trusted-ca") pod "console-operator-6c7fb6b958-2lj8z" (UID: "41520992-0499-4a93-bd1c-7814ffb84164") : configmap references non-existent config key: ca-bundle.crt Mar 12 21:10:55.740377 master-0 kubenswrapper[31456]: I0312 21:10:55.740218 31456 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 12 21:10:55.742052 master-0 kubenswrapper[31456]: I0312 21:10:55.741914 31456 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 12 21:10:55.742244 master-0 kubenswrapper[31456]: I0312 21:10:55.742064 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:10:55.744706 master-0 kubenswrapper[31456]: I0312 21:10:55.744245 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver" containerID="cri-o://78d6b166dcab5df7019e2a3ab78a2ffecd20c5ee5d9fbeedec93a5d8114e7e50" gracePeriod=15 Mar 12 21:10:55.744706 master-0 kubenswrapper[31456]: I0312 21:10:55.744316 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-check-endpoints" containerID="cri-o://570d863fa6b395d16e1f5a331863494900f47673d925208e70bc1d1081f3b9d5" gracePeriod=15 Mar 12 21:10:55.744706 master-0 kubenswrapper[31456]: I0312 21:10:55.744415 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://ddc570d95acec84b08471105156342249118106b435695f1badc9f7a2232d339" gracePeriod=15 Mar 12 21:10:55.744706 master-0 kubenswrapper[31456]: I0312 21:10:55.744475 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-syncer" containerID="cri-o://04597b2715ae95f58af55df14000ea14c61393b1e3b42149a8be2f89e6b9f26e" gracePeriod=15 Mar 12 21:10:55.744706 master-0 kubenswrapper[31456]: I0312 21:10:55.744400 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-regeneration-controller" 
containerID="cri-o://0845e7aef44f13460897c051d69b9fc344426906701d1496cc6673dd26243447" gracePeriod=15 Mar 12 21:10:55.745607 master-0 kubenswrapper[31456]: I0312 21:10:55.744987 31456 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 12 21:10:55.748058 master-0 kubenswrapper[31456]: E0312 21:10:55.745720 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-check-endpoints" Mar 12 21:10:55.748058 master-0 kubenswrapper[31456]: I0312 21:10:55.745759 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-check-endpoints" Mar 12 21:10:55.748058 master-0 kubenswrapper[31456]: E0312 21:10:55.745844 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-syncer" Mar 12 21:10:55.748058 master-0 kubenswrapper[31456]: I0312 21:10:55.745867 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-syncer" Mar 12 21:10:55.748058 master-0 kubenswrapper[31456]: E0312 21:10:55.745902 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-insecure-readyz" Mar 12 21:10:55.748058 master-0 kubenswrapper[31456]: I0312 21:10:55.745919 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-insecure-readyz" Mar 12 21:10:55.748058 master-0 kubenswrapper[31456]: E0312 21:10:55.745940 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-regeneration-controller" Mar 12 21:10:55.748058 master-0 kubenswrapper[31456]: I0312 21:10:55.745957 31456 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-regeneration-controller" Mar 12 21:10:55.748058 master-0 kubenswrapper[31456]: E0312 21:10:55.745982 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver" Mar 12 21:10:55.748058 master-0 kubenswrapper[31456]: I0312 21:10:55.745998 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver" Mar 12 21:10:55.748058 master-0 kubenswrapper[31456]: E0312 21:10:55.746026 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077dd10388b9e3e48a07382126e86621" containerName="setup" Mar 12 21:10:55.748058 master-0 kubenswrapper[31456]: I0312 21:10:55.746046 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="077dd10388b9e3e48a07382126e86621" containerName="setup" Mar 12 21:10:55.748058 master-0 kubenswrapper[31456]: I0312 21:10:55.746426 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver" Mar 12 21:10:55.748058 master-0 kubenswrapper[31456]: I0312 21:10:55.746700 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-syncer" Mar 12 21:10:55.748058 master-0 kubenswrapper[31456]: I0312 21:10:55.746731 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-check-endpoints" Mar 12 21:10:55.748058 master-0 kubenswrapper[31456]: I0312 21:10:55.746763 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-regeneration-controller" Mar 12 21:10:55.748058 master-0 kubenswrapper[31456]: I0312 21:10:55.746795 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="077dd10388b9e3e48a07382126e86621" 
containerName="kube-apiserver-check-endpoints" Mar 12 21:10:55.748058 master-0 kubenswrapper[31456]: I0312 21:10:55.746864 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-insecure-readyz" Mar 12 21:10:55.748058 master-0 kubenswrapper[31456]: E0312 21:10:55.747162 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-check-endpoints" Mar 12 21:10:55.748058 master-0 kubenswrapper[31456]: I0312 21:10:55.747187 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-check-endpoints" Mar 12 21:10:55.863907 master-0 kubenswrapper[31456]: E0312 21:10:55.861752 31456 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:10:55.916444 master-0 kubenswrapper[31456]: I0312 21:10:55.916366 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:10:55.916444 master-0 kubenswrapper[31456]: I0312 21:10:55.916438 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:10:55.916706 master-0 kubenswrapper[31456]: I0312 
21:10:55.916478 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:10:55.916706 master-0 kubenswrapper[31456]: I0312 21:10:55.916569 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:10:55.916706 master-0 kubenswrapper[31456]: I0312 21:10:55.916695 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:10:55.916855 master-0 kubenswrapper[31456]: I0312 21:10:55.916734 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:10:55.916855 master-0 kubenswrapper[31456]: I0312 21:10:55.916762 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir\") pod \"kube-apiserver-master-0\" 
(UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:10:55.917006 master-0 kubenswrapper[31456]: I0312 21:10:55.916957 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:10:56.018943 master-0 kubenswrapper[31456]: I0312 21:10:56.018724 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:10:56.018943 master-0 kubenswrapper[31456]: I0312 21:10:56.018779 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:10:56.018943 master-0 kubenswrapper[31456]: I0312 21:10:56.018824 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:10:56.018943 master-0 kubenswrapper[31456]: I0312 21:10:56.018852 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:10:56.019412 master-0 kubenswrapper[31456]: I0312 21:10:56.018979 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:10:56.019412 master-0 kubenswrapper[31456]: I0312 21:10:56.019120 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:10:56.019412 master-0 kubenswrapper[31456]: I0312 21:10:56.019186 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:10:56.019412 master-0 kubenswrapper[31456]: I0312 21:10:56.019240 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:10:56.019412 master-0 kubenswrapper[31456]: I0312 21:10:56.019271 31456 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:10:56.019412 master-0 kubenswrapper[31456]: I0312 21:10:56.019270 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:10:56.019412 master-0 kubenswrapper[31456]: I0312 21:10:56.019325 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:10:56.019412 master-0 kubenswrapper[31456]: I0312 21:10:56.019360 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:10:56.019412 master-0 kubenswrapper[31456]: I0312 21:10:56.019372 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:10:56.019412 master-0 kubenswrapper[31456]: I0312 21:10:56.019403 
31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:10:56.020134 master-0 kubenswrapper[31456]: I0312 21:10:56.019443 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:10:56.020134 master-0 kubenswrapper[31456]: I0312 21:10:56.019505 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:10:56.163370 master-0 kubenswrapper[31456]: I0312 21:10:56.163304 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:10:56.205461 master-0 kubenswrapper[31456]: W0312 21:10:56.205359 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a18cac8a90d6913a6a0391d805cddc9.slice/crio-ee71f13c5123094634412d21d0c8a8173c27a712a45ba2551ec0bd791c7d40f4 WatchSource:0}: Error finding container ee71f13c5123094634412d21d0c8a8173c27a712a45ba2551ec0bd791c7d40f4: Status 404 returned error can't find the container with id ee71f13c5123094634412d21d0c8a8173c27a712a45ba2551ec0bd791c7d40f4 Mar 12 21:10:56.210790 master-0 kubenswrapper[31456]: E0312 21:10:56.210570 31456 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189c344c71d64815 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:3a18cac8a90d6913a6a0391d805cddc9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 21:10:56.209160213 +0000 UTC m=+117.283765551,LastTimestamp:2026-03-12 21:10:56.209160213 +0000 UTC m=+117.283765551,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 21:10:56.468137 master-0 kubenswrapper[31456]: I0312 21:10:56.468072 31456 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_077dd10388b9e3e48a07382126e86621/kube-apiserver-check-endpoints/0.log" Mar 12 21:10:56.470449 master-0 kubenswrapper[31456]: I0312 21:10:56.470394 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_077dd10388b9e3e48a07382126e86621/kube-apiserver-cert-syncer/0.log" Mar 12 21:10:56.471562 master-0 kubenswrapper[31456]: I0312 21:10:56.471491 31456 generic.go:334] "Generic (PLEG): container finished" podID="077dd10388b9e3e48a07382126e86621" containerID="570d863fa6b395d16e1f5a331863494900f47673d925208e70bc1d1081f3b9d5" exitCode=0 Mar 12 21:10:56.471562 master-0 kubenswrapper[31456]: I0312 21:10:56.471552 31456 generic.go:334] "Generic (PLEG): container finished" podID="077dd10388b9e3e48a07382126e86621" containerID="ddc570d95acec84b08471105156342249118106b435695f1badc9f7a2232d339" exitCode=0 Mar 12 21:10:56.471745 master-0 kubenswrapper[31456]: I0312 21:10:56.471574 31456 generic.go:334] "Generic (PLEG): container finished" podID="077dd10388b9e3e48a07382126e86621" containerID="0845e7aef44f13460897c051d69b9fc344426906701d1496cc6673dd26243447" exitCode=0 Mar 12 21:10:56.471745 master-0 kubenswrapper[31456]: I0312 21:10:56.471599 31456 generic.go:334] "Generic (PLEG): container finished" podID="077dd10388b9e3e48a07382126e86621" containerID="04597b2715ae95f58af55df14000ea14c61393b1e3b42149a8be2f89e6b9f26e" exitCode=2 Mar 12 21:10:56.471745 master-0 kubenswrapper[31456]: I0312 21:10:56.471680 31456 scope.go:117] "RemoveContainer" containerID="1867cbd1eea641a204f5d8db13d19bc48d06f54cf7a7cbc0d8d91fbb925b3a69" Mar 12 21:10:56.473456 master-0 kubenswrapper[31456]: I0312 21:10:56.473396 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" 
event={"ID":"3a18cac8a90d6913a6a0391d805cddc9","Type":"ContainerStarted","Data":"ee71f13c5123094634412d21d0c8a8173c27a712a45ba2551ec0bd791c7d40f4"} Mar 12 21:10:57.239985 master-0 kubenswrapper[31456]: E0312 21:10:57.239877 31456 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:10:57.241130 master-0 kubenswrapper[31456]: E0312 21:10:57.241057 31456 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:10:57.242036 master-0 kubenswrapper[31456]: E0312 21:10:57.241959 31456 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:10:57.242941 master-0 kubenswrapper[31456]: E0312 21:10:57.242876 31456 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:10:57.243757 master-0 kubenswrapper[31456]: E0312 21:10:57.243683 31456 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:10:57.243757 master-0 kubenswrapper[31456]: I0312 21:10:57.243736 31456 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 12 21:10:57.244588 master-0 
kubenswrapper[31456]: E0312 21:10:57.244523 31456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Mar 12 21:10:57.446163 master-0 kubenswrapper[31456]: E0312 21:10:57.446037 31456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Mar 12 21:10:57.491577 master-0 kubenswrapper[31456]: I0312 21:10:57.491320 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_077dd10388b9e3e48a07382126e86621/kube-apiserver-cert-syncer/0.log" Mar 12 21:10:57.495113 master-0 kubenswrapper[31456]: I0312 21:10:57.495044 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"3a18cac8a90d6913a6a0391d805cddc9","Type":"ContainerStarted","Data":"5602108289e0d6fb3ee47faac9e66b04faa7f735fa58b959b3373d903df0c765"} Mar 12 21:10:57.496938 master-0 kubenswrapper[31456]: E0312 21:10:57.496868 31456 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:10:57.847845 master-0 kubenswrapper[31456]: E0312 21:10:57.847720 31456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" 
interval="800ms" Mar 12 21:10:58.246949 master-0 kubenswrapper[31456]: I0312 21:10:58.246892 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_077dd10388b9e3e48a07382126e86621/kube-apiserver-cert-syncer/0.log" Mar 12 21:10:58.248084 master-0 kubenswrapper[31456]: I0312 21:10:58.248057 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:10:58.249475 master-0 kubenswrapper[31456]: I0312 21:10:58.249403 31456 status_manager.go:851] "Failed to get status for pod" podUID="077dd10388b9e3e48a07382126e86621" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:10:58.359395 master-0 kubenswrapper[31456]: I0312 21:10:58.359233 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod \"077dd10388b9e3e48a07382126e86621\" (UID: \"077dd10388b9e3e48a07382126e86621\") " Mar 12 21:10:58.359395 master-0 kubenswrapper[31456]: I0312 21:10:58.359307 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"077dd10388b9e3e48a07382126e86621\" (UID: \"077dd10388b9e3e48a07382126e86621\") " Mar 12 21:10:58.359395 master-0 kubenswrapper[31456]: I0312 21:10:58.359343 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"077dd10388b9e3e48a07382126e86621\" (UID: \"077dd10388b9e3e48a07382126e86621\") " Mar 12 21:10:58.359786 master-0 kubenswrapper[31456]: 
I0312 21:10:58.359415 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "077dd10388b9e3e48a07382126e86621" (UID: "077dd10388b9e3e48a07382126e86621"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:10:58.359786 master-0 kubenswrapper[31456]: I0312 21:10:58.359498 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "077dd10388b9e3e48a07382126e86621" (UID: "077dd10388b9e3e48a07382126e86621"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:10:58.359786 master-0 kubenswrapper[31456]: I0312 21:10:58.359573 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "077dd10388b9e3e48a07382126e86621" (UID: "077dd10388b9e3e48a07382126e86621"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:10:58.359786 master-0 kubenswrapper[31456]: I0312 21:10:58.359762 31456 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 21:10:58.359786 master-0 kubenswrapper[31456]: I0312 21:10:58.359786 31456 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 21:10:58.359786 master-0 kubenswrapper[31456]: I0312 21:10:58.359831 31456 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 21:10:58.513872 master-0 kubenswrapper[31456]: I0312 21:10:58.513792 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_077dd10388b9e3e48a07382126e86621/kube-apiserver-cert-syncer/0.log" Mar 12 21:10:58.514696 master-0 kubenswrapper[31456]: I0312 21:10:58.514627 31456 generic.go:334] "Generic (PLEG): container finished" podID="077dd10388b9e3e48a07382126e86621" containerID="78d6b166dcab5df7019e2a3ab78a2ffecd20c5ee5d9fbeedec93a5d8114e7e50" exitCode=0 Mar 12 21:10:58.514957 master-0 kubenswrapper[31456]: I0312 21:10:58.514722 31456 scope.go:117] "RemoveContainer" containerID="570d863fa6b395d16e1f5a331863494900f47673d925208e70bc1d1081f3b9d5" Mar 12 21:10:58.514957 master-0 kubenswrapper[31456]: I0312 21:10:58.514740 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:10:58.515690 master-0 kubenswrapper[31456]: E0312 21:10:58.515623 31456 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:10:58.534985 master-0 kubenswrapper[31456]: I0312 21:10:58.534909 31456 scope.go:117] "RemoveContainer" containerID="ddc570d95acec84b08471105156342249118106b435695f1badc9f7a2232d339" Mar 12 21:10:58.547525 master-0 kubenswrapper[31456]: I0312 21:10:58.547434 31456 status_manager.go:851] "Failed to get status for pod" podUID="077dd10388b9e3e48a07382126e86621" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:10:58.558221 master-0 kubenswrapper[31456]: I0312 21:10:58.558167 31456 scope.go:117] "RemoveContainer" containerID="0845e7aef44f13460897c051d69b9fc344426906701d1496cc6673dd26243447" Mar 12 21:10:58.577121 master-0 kubenswrapper[31456]: I0312 21:10:58.577066 31456 scope.go:117] "RemoveContainer" containerID="04597b2715ae95f58af55df14000ea14c61393b1e3b42149a8be2f89e6b9f26e" Mar 12 21:10:58.595524 master-0 kubenswrapper[31456]: I0312 21:10:58.595487 31456 scope.go:117] "RemoveContainer" containerID="78d6b166dcab5df7019e2a3ab78a2ffecd20c5ee5d9fbeedec93a5d8114e7e50" Mar 12 21:10:58.620163 master-0 kubenswrapper[31456]: I0312 21:10:58.620122 31456 scope.go:117] "RemoveContainer" containerID="52f8cc40b0daf7f102ea6364b20a287ac9f811651bcaf6ef7554a793bf5238c2" Mar 12 21:10:58.640246 master-0 kubenswrapper[31456]: I0312 21:10:58.640213 31456 scope.go:117] "RemoveContainer" 
containerID="570d863fa6b395d16e1f5a331863494900f47673d925208e70bc1d1081f3b9d5" Mar 12 21:10:58.640776 master-0 kubenswrapper[31456]: E0312 21:10:58.640711 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"570d863fa6b395d16e1f5a331863494900f47673d925208e70bc1d1081f3b9d5\": container with ID starting with 570d863fa6b395d16e1f5a331863494900f47673d925208e70bc1d1081f3b9d5 not found: ID does not exist" containerID="570d863fa6b395d16e1f5a331863494900f47673d925208e70bc1d1081f3b9d5" Mar 12 21:10:58.640776 master-0 kubenswrapper[31456]: I0312 21:10:58.640762 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"570d863fa6b395d16e1f5a331863494900f47673d925208e70bc1d1081f3b9d5"} err="failed to get container status \"570d863fa6b395d16e1f5a331863494900f47673d925208e70bc1d1081f3b9d5\": rpc error: code = NotFound desc = could not find container \"570d863fa6b395d16e1f5a331863494900f47673d925208e70bc1d1081f3b9d5\": container with ID starting with 570d863fa6b395d16e1f5a331863494900f47673d925208e70bc1d1081f3b9d5 not found: ID does not exist" Mar 12 21:10:58.641061 master-0 kubenswrapper[31456]: I0312 21:10:58.640791 31456 scope.go:117] "RemoveContainer" containerID="ddc570d95acec84b08471105156342249118106b435695f1badc9f7a2232d339" Mar 12 21:10:58.641297 master-0 kubenswrapper[31456]: E0312 21:10:58.641247 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddc570d95acec84b08471105156342249118106b435695f1badc9f7a2232d339\": container with ID starting with ddc570d95acec84b08471105156342249118106b435695f1badc9f7a2232d339 not found: ID does not exist" containerID="ddc570d95acec84b08471105156342249118106b435695f1badc9f7a2232d339" Mar 12 21:10:58.641297 master-0 kubenswrapper[31456]: I0312 21:10:58.641284 31456 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ddc570d95acec84b08471105156342249118106b435695f1badc9f7a2232d339"} err="failed to get container status \"ddc570d95acec84b08471105156342249118106b435695f1badc9f7a2232d339\": rpc error: code = NotFound desc = could not find container \"ddc570d95acec84b08471105156342249118106b435695f1badc9f7a2232d339\": container with ID starting with ddc570d95acec84b08471105156342249118106b435695f1badc9f7a2232d339 not found: ID does not exist" Mar 12 21:10:58.641463 master-0 kubenswrapper[31456]: I0312 21:10:58.641306 31456 scope.go:117] "RemoveContainer" containerID="0845e7aef44f13460897c051d69b9fc344426906701d1496cc6673dd26243447" Mar 12 21:10:58.641598 master-0 kubenswrapper[31456]: E0312 21:10:58.641546 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0845e7aef44f13460897c051d69b9fc344426906701d1496cc6673dd26243447\": container with ID starting with 0845e7aef44f13460897c051d69b9fc344426906701d1496cc6673dd26243447 not found: ID does not exist" containerID="0845e7aef44f13460897c051d69b9fc344426906701d1496cc6673dd26243447" Mar 12 21:10:58.641681 master-0 kubenswrapper[31456]: I0312 21:10:58.641603 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0845e7aef44f13460897c051d69b9fc344426906701d1496cc6673dd26243447"} err="failed to get container status \"0845e7aef44f13460897c051d69b9fc344426906701d1496cc6673dd26243447\": rpc error: code = NotFound desc = could not find container \"0845e7aef44f13460897c051d69b9fc344426906701d1496cc6673dd26243447\": container with ID starting with 0845e7aef44f13460897c051d69b9fc344426906701d1496cc6673dd26243447 not found: ID does not exist" Mar 12 21:10:58.641681 master-0 kubenswrapper[31456]: I0312 21:10:58.641642 31456 scope.go:117] "RemoveContainer" containerID="04597b2715ae95f58af55df14000ea14c61393b1e3b42149a8be2f89e6b9f26e" Mar 12 21:10:58.642350 master-0 kubenswrapper[31456]: E0312 
21:10:58.642299 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04597b2715ae95f58af55df14000ea14c61393b1e3b42149a8be2f89e6b9f26e\": container with ID starting with 04597b2715ae95f58af55df14000ea14c61393b1e3b42149a8be2f89e6b9f26e not found: ID does not exist" containerID="04597b2715ae95f58af55df14000ea14c61393b1e3b42149a8be2f89e6b9f26e" Mar 12 21:10:58.642350 master-0 kubenswrapper[31456]: I0312 21:10:58.642332 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04597b2715ae95f58af55df14000ea14c61393b1e3b42149a8be2f89e6b9f26e"} err="failed to get container status \"04597b2715ae95f58af55df14000ea14c61393b1e3b42149a8be2f89e6b9f26e\": rpc error: code = NotFound desc = could not find container \"04597b2715ae95f58af55df14000ea14c61393b1e3b42149a8be2f89e6b9f26e\": container with ID starting with 04597b2715ae95f58af55df14000ea14c61393b1e3b42149a8be2f89e6b9f26e not found: ID does not exist" Mar 12 21:10:58.642350 master-0 kubenswrapper[31456]: I0312 21:10:58.642348 31456 scope.go:117] "RemoveContainer" containerID="78d6b166dcab5df7019e2a3ab78a2ffecd20c5ee5d9fbeedec93a5d8114e7e50" Mar 12 21:10:58.642907 master-0 kubenswrapper[31456]: E0312 21:10:58.642795 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78d6b166dcab5df7019e2a3ab78a2ffecd20c5ee5d9fbeedec93a5d8114e7e50\": container with ID starting with 78d6b166dcab5df7019e2a3ab78a2ffecd20c5ee5d9fbeedec93a5d8114e7e50 not found: ID does not exist" containerID="78d6b166dcab5df7019e2a3ab78a2ffecd20c5ee5d9fbeedec93a5d8114e7e50" Mar 12 21:10:58.642907 master-0 kubenswrapper[31456]: I0312 21:10:58.642829 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78d6b166dcab5df7019e2a3ab78a2ffecd20c5ee5d9fbeedec93a5d8114e7e50"} err="failed to get container status 
\"78d6b166dcab5df7019e2a3ab78a2ffecd20c5ee5d9fbeedec93a5d8114e7e50\": rpc error: code = NotFound desc = could not find container \"78d6b166dcab5df7019e2a3ab78a2ffecd20c5ee5d9fbeedec93a5d8114e7e50\": container with ID starting with 78d6b166dcab5df7019e2a3ab78a2ffecd20c5ee5d9fbeedec93a5d8114e7e50 not found: ID does not exist" Mar 12 21:10:58.642907 master-0 kubenswrapper[31456]: I0312 21:10:58.642843 31456 scope.go:117] "RemoveContainer" containerID="52f8cc40b0daf7f102ea6364b20a287ac9f811651bcaf6ef7554a793bf5238c2" Mar 12 21:10:58.643244 master-0 kubenswrapper[31456]: E0312 21:10:58.643137 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52f8cc40b0daf7f102ea6364b20a287ac9f811651bcaf6ef7554a793bf5238c2\": container with ID starting with 52f8cc40b0daf7f102ea6364b20a287ac9f811651bcaf6ef7554a793bf5238c2 not found: ID does not exist" containerID="52f8cc40b0daf7f102ea6364b20a287ac9f811651bcaf6ef7554a793bf5238c2" Mar 12 21:10:58.643244 master-0 kubenswrapper[31456]: I0312 21:10:58.643180 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52f8cc40b0daf7f102ea6364b20a287ac9f811651bcaf6ef7554a793bf5238c2"} err="failed to get container status \"52f8cc40b0daf7f102ea6364b20a287ac9f811651bcaf6ef7554a793bf5238c2\": rpc error: code = NotFound desc = could not find container \"52f8cc40b0daf7f102ea6364b20a287ac9f811651bcaf6ef7554a793bf5238c2\": container with ID starting with 52f8cc40b0daf7f102ea6364b20a287ac9f811651bcaf6ef7554a793bf5238c2 not found: ID does not exist" Mar 12 21:10:58.649433 master-0 kubenswrapper[31456]: E0312 21:10:58.649356 31456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Mar 12 21:10:59.175387 master-0 
kubenswrapper[31456]: I0312 21:10:59.175284 31456 status_manager.go:851] "Failed to get status for pod" podUID="077dd10388b9e3e48a07382126e86621" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:10:59.184759 master-0 kubenswrapper[31456]: I0312 21:10:59.184695 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="077dd10388b9e3e48a07382126e86621" path="/var/lib/kubelet/pods/077dd10388b9e3e48a07382126e86621/volumes" Mar 12 21:11:00.251505 master-0 kubenswrapper[31456]: E0312 21:11:00.251418 31456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Mar 12 21:11:01.549446 master-0 kubenswrapper[31456]: I0312 21:11:01.549400 31456 generic.go:334] "Generic (PLEG): container finished" podID="f2acf6cf-3f66-48a3-b424-0ecdcfc21146" containerID="9e1af043aa12da3cbcaf60b93ff0933d2f01ed7323a32f1d50d891b766078ce1" exitCode=0 Mar 12 21:11:01.550165 master-0 kubenswrapper[31456]: I0312 21:11:01.549494 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"f2acf6cf-3f66-48a3-b424-0ecdcfc21146","Type":"ContainerDied","Data":"9e1af043aa12da3cbcaf60b93ff0933d2f01ed7323a32f1d50d891b766078ce1"} Mar 12 21:11:01.551382 master-0 kubenswrapper[31456]: I0312 21:11:01.551324 31456 status_manager.go:851] "Failed to get status for pod" podUID="f2acf6cf-3f66-48a3-b424-0ecdcfc21146" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: 
connection refused" Mar 12 21:11:01.678375 master-0 kubenswrapper[31456]: E0312 21:11:01.678243 31456 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T21:11:01Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T21:11:01Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T21:11:01Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T21:11:01Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:11:01.679279 master-0 kubenswrapper[31456]: E0312 21:11:01.679234 31456 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:11:01.680168 master-0 kubenswrapper[31456]: E0312 21:11:01.680132 31456 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:11:01.680981 master-0 
kubenswrapper[31456]: E0312 21:11:01.680941 31456 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:11:01.681798 master-0 kubenswrapper[31456]: E0312 21:11:01.681744 31456 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:11:01.681904 master-0 kubenswrapper[31456]: E0312 21:11:01.681798 31456 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 21:11:03.214573 master-0 kubenswrapper[31456]: I0312 21:11:03.214483 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 12 21:11:03.216131 master-0 kubenswrapper[31456]: I0312 21:11:03.216047 31456 status_manager.go:851] "Failed to get status for pod" podUID="f2acf6cf-3f66-48a3-b424-0ecdcfc21146" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:11:03.319991 master-0 kubenswrapper[31456]: E0312 21:11:03.319718 31456 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189c344c71d64815 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:3a18cac8a90d6913a6a0391d805cddc9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 21:10:56.209160213 +0000 UTC m=+117.283765551,LastTimestamp:2026-03-12 21:10:56.209160213 +0000 UTC m=+117.283765551,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 21:11:03.346215 master-0 kubenswrapper[31456]: I0312 21:11:03.346121 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2acf6cf-3f66-48a3-b424-0ecdcfc21146-kubelet-dir\") pod \"f2acf6cf-3f66-48a3-b424-0ecdcfc21146\" (UID: \"f2acf6cf-3f66-48a3-b424-0ecdcfc21146\") " Mar 12 21:11:03.346215 master-0 kubenswrapper[31456]: I0312 21:11:03.346213 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2acf6cf-3f66-48a3-b424-0ecdcfc21146-kube-api-access\") pod \"f2acf6cf-3f66-48a3-b424-0ecdcfc21146\" (UID: \"f2acf6cf-3f66-48a3-b424-0ecdcfc21146\") " Mar 12 21:11:03.346426 master-0 kubenswrapper[31456]: I0312 21:11:03.346231 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2acf6cf-3f66-48a3-b424-0ecdcfc21146-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f2acf6cf-3f66-48a3-b424-0ecdcfc21146" (UID: "f2acf6cf-3f66-48a3-b424-0ecdcfc21146"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:11:03.346426 master-0 kubenswrapper[31456]: I0312 21:11:03.346340 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f2acf6cf-3f66-48a3-b424-0ecdcfc21146-var-lock\") pod \"f2acf6cf-3f66-48a3-b424-0ecdcfc21146\" (UID: \"f2acf6cf-3f66-48a3-b424-0ecdcfc21146\") " Mar 12 21:11:03.346571 master-0 kubenswrapper[31456]: I0312 21:11:03.346500 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2acf6cf-3f66-48a3-b424-0ecdcfc21146-var-lock" (OuterVolumeSpecName: "var-lock") pod "f2acf6cf-3f66-48a3-b424-0ecdcfc21146" (UID: "f2acf6cf-3f66-48a3-b424-0ecdcfc21146"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:11:03.346932 master-0 kubenswrapper[31456]: I0312 21:11:03.346884 31456 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2acf6cf-3f66-48a3-b424-0ecdcfc21146-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 21:11:03.346932 master-0 kubenswrapper[31456]: I0312 21:11:03.346922 31456 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f2acf6cf-3f66-48a3-b424-0ecdcfc21146-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 21:11:03.351322 master-0 kubenswrapper[31456]: I0312 21:11:03.351257 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2acf6cf-3f66-48a3-b424-0ecdcfc21146-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f2acf6cf-3f66-48a3-b424-0ecdcfc21146" (UID: "f2acf6cf-3f66-48a3-b424-0ecdcfc21146"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:11:03.449020 master-0 kubenswrapper[31456]: I0312 21:11:03.448854 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2acf6cf-3f66-48a3-b424-0ecdcfc21146-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 21:11:03.453285 master-0 kubenswrapper[31456]: E0312 21:11:03.453172 31456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Mar 12 21:11:03.568899 master-0 kubenswrapper[31456]: I0312 21:11:03.568774 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"f2acf6cf-3f66-48a3-b424-0ecdcfc21146","Type":"ContainerDied","Data":"a2cc745482d73b22f7fdc95f60a16c9ce4612d3863485f6ea45b13b9fb9c3930"} Mar 12 21:11:03.568899 master-0 kubenswrapper[31456]: I0312 21:11:03.568873 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2cc745482d73b22f7fdc95f60a16c9ce4612d3863485f6ea45b13b9fb9c3930" Mar 12 21:11:03.569154 master-0 kubenswrapper[31456]: I0312 21:11:03.568913 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 12 21:11:03.607899 master-0 kubenswrapper[31456]: I0312 21:11:03.607763 31456 status_manager.go:851] "Failed to get status for pod" podUID="f2acf6cf-3f66-48a3-b424-0ecdcfc21146" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:11:08.169559 master-0 kubenswrapper[31456]: I0312 21:11:08.169490 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:11:08.171143 master-0 kubenswrapper[31456]: I0312 21:11:08.171064 31456 status_manager.go:851] "Failed to get status for pod" podUID="f2acf6cf-3f66-48a3-b424-0ecdcfc21146" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:11:08.197067 master-0 kubenswrapper[31456]: I0312 21:11:08.196986 31456 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="c97ef423-41d5-4b0b-9002-c15ebea6560f" Mar 12 21:11:08.197067 master-0 kubenswrapper[31456]: I0312 21:11:08.197046 31456 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="c97ef423-41d5-4b0b-9002-c15ebea6560f" Mar 12 21:11:08.198184 master-0 kubenswrapper[31456]: E0312 21:11:08.198112 31456 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:11:08.198930 master-0 kubenswrapper[31456]: 
I0312 21:11:08.198884 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:11:08.235262 master-0 kubenswrapper[31456]: W0312 21:11:08.235112 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48512e02022680c9d90092634f0fc146.slice/crio-5a1d7ddddc538b1c426bba8765a7c8775fb9311c332e302f645acba8c61fcf5a WatchSource:0}: Error finding container 5a1d7ddddc538b1c426bba8765a7c8775fb9311c332e302f645acba8c61fcf5a: Status 404 returned error can't find the container with id 5a1d7ddddc538b1c426bba8765a7c8775fb9311c332e302f645acba8c61fcf5a Mar 12 21:11:08.617659 master-0 kubenswrapper[31456]: I0312 21:11:08.617599 31456 generic.go:334] "Generic (PLEG): container finished" podID="48512e02022680c9d90092634f0fc146" containerID="ae2d426e85e9ca74fba20ed4929c9868f9bf891aa6e3acbc48f77b8fd37d7f60" exitCode=0 Mar 12 21:11:08.617659 master-0 kubenswrapper[31456]: I0312 21:11:08.617657 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerDied","Data":"ae2d426e85e9ca74fba20ed4929c9868f9bf891aa6e3acbc48f77b8fd37d7f60"} Mar 12 21:11:08.617959 master-0 kubenswrapper[31456]: I0312 21:11:08.617692 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"5a1d7ddddc538b1c426bba8765a7c8775fb9311c332e302f645acba8c61fcf5a"} Mar 12 21:11:08.618065 master-0 kubenswrapper[31456]: I0312 21:11:08.618028 31456 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="c97ef423-41d5-4b0b-9002-c15ebea6560f" Mar 12 21:11:08.618065 master-0 kubenswrapper[31456]: I0312 21:11:08.618045 31456 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="c97ef423-41d5-4b0b-9002-c15ebea6560f" Mar 12 21:11:08.619338 master-0 kubenswrapper[31456]: E0312 21:11:08.619252 31456 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:11:08.619506 master-0 kubenswrapper[31456]: I0312 21:11:08.619412 31456 status_manager.go:851] "Failed to get status for pod" podUID="f2acf6cf-3f66-48a3-b424-0ecdcfc21146" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:11:09.629644 master-0 kubenswrapper[31456]: I0312 21:11:09.629569 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"fa08db51a6d0fb71252af3791bc9bb2d78f468b9196d06ba9f3e5e5c3d6b5f8f"} Mar 12 21:11:09.629644 master-0 kubenswrapper[31456]: I0312 21:11:09.629643 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"e9eeff91b485b3f4abe88559591484bd0ad23b44d8b5e79acbd75e6b1fa6f5ae"} Mar 12 21:11:09.630276 master-0 kubenswrapper[31456]: I0312 21:11:09.629664 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"2af82b5203922bffd1b52e551e34bf559e247f6df99d1b27190c8c1ceb99cc21"} Mar 12 21:11:10.639449 master-0 kubenswrapper[31456]: I0312 21:11:10.639388 31456 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"4f144453d44ce86a0d7bd7fe15a62aadd5592eaf9c0618e7028c5d055870b33b"} Mar 12 21:11:10.639449 master-0 kubenswrapper[31456]: I0312 21:11:10.639432 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"30bc9b247c27238c3eb4ad1976ad2cf0929403a4441faf1cefc74e18c8f37e98"} Mar 12 21:11:10.640192 master-0 kubenswrapper[31456]: I0312 21:11:10.639593 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:11:10.640192 master-0 kubenswrapper[31456]: I0312 21:11:10.639792 31456 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="c97ef423-41d5-4b0b-9002-c15ebea6560f" Mar 12 21:11:10.640192 master-0 kubenswrapper[31456]: I0312 21:11:10.639877 31456 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="c97ef423-41d5-4b0b-9002-c15ebea6560f" Mar 12 21:11:10.641064 master-0 kubenswrapper[31456]: I0312 21:11:10.641034 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/cluster-policy-controller/5.log" Mar 12 21:11:10.642303 master-0 kubenswrapper[31456]: I0312 21:11:10.642279 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/kube-controller-manager/0.log" Mar 12 21:11:10.642489 master-0 kubenswrapper[31456]: I0312 21:11:10.642447 31456 generic.go:334] "Generic (PLEG): container finished" podID="7678a2e61b792fe3be55b1c6f67b2aa2" 
containerID="d3c7faffe68717f40a0072b4ab6a64ec7cccad22e04a4674b15d395e19ec5ebe" exitCode=1 Mar 12 21:11:10.642587 master-0 kubenswrapper[31456]: I0312 21:11:10.642520 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7678a2e61b792fe3be55b1c6f67b2aa2","Type":"ContainerDied","Data":"d3c7faffe68717f40a0072b4ab6a64ec7cccad22e04a4674b15d395e19ec5ebe"} Mar 12 21:11:10.643197 master-0 kubenswrapper[31456]: I0312 21:11:10.643180 31456 scope.go:117] "RemoveContainer" containerID="d3c7faffe68717f40a0072b4ab6a64ec7cccad22e04a4674b15d395e19ec5ebe" Mar 12 21:11:11.659671 master-0 kubenswrapper[31456]: I0312 21:11:11.659609 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/cluster-policy-controller/5.log" Mar 12 21:11:11.660857 master-0 kubenswrapper[31456]: I0312 21:11:11.660704 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/kube-controller-manager/0.log" Mar 12 21:11:11.660857 master-0 kubenswrapper[31456]: I0312 21:11:11.660776 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7678a2e61b792fe3be55b1c6f67b2aa2","Type":"ContainerStarted","Data":"d60d46e4b651aaa6fc0f310f1cd525f43bd8602c132272870fb17e4bead2dcb6"} Mar 12 21:11:13.199020 master-0 kubenswrapper[31456]: I0312 21:11:13.198947 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:11:13.199853 master-0 kubenswrapper[31456]: I0312 21:11:13.199355 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:11:13.207593 master-0 kubenswrapper[31456]: 
I0312 21:11:13.207549 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:11:13.403548 master-0 kubenswrapper[31456]: I0312 21:11:13.403498 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:11:15.657543 master-0 kubenswrapper[31456]: I0312 21:11:15.657489 31456 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:11:15.702142 master-0 kubenswrapper[31456]: I0312 21:11:15.702068 31456 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="c97ef423-41d5-4b0b-9002-c15ebea6560f" Mar 12 21:11:15.702142 master-0 kubenswrapper[31456]: I0312 21:11:15.702136 31456 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="c97ef423-41d5-4b0b-9002-c15ebea6560f" Mar 12 21:11:15.707049 master-0 kubenswrapper[31456]: I0312 21:11:15.707015 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:11:15.709642 master-0 kubenswrapper[31456]: I0312 21:11:15.709151 31456 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="48512e02022680c9d90092634f0fc146" podUID="f00bf6ed-8795-4b8c-b36b-ec42642f70bf" Mar 12 21:11:16.709542 master-0 kubenswrapper[31456]: I0312 21:11:16.709456 31456 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="c97ef423-41d5-4b0b-9002-c15ebea6560f" Mar 12 21:11:16.709542 master-0 kubenswrapper[31456]: I0312 21:11:16.709510 31456 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="c97ef423-41d5-4b0b-9002-c15ebea6560f" Mar 12 
21:11:17.467903 master-0 kubenswrapper[31456]: I0312 21:11:17.467786 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:11:17.475878 master-0 kubenswrapper[31456]: I0312 21:11:17.475773 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:11:17.673297 master-0 kubenswrapper[31456]: I0312 21:11:17.673217 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access\") pod \"installer-3-master-0\" (UID: \"222b53b1-7e5c-49d5-9795-fec4d0547398\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 21:11:17.673610 master-0 kubenswrapper[31456]: E0312 21:11:17.673462 31456 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 21:11:17.673610 master-0 kubenswrapper[31456]: E0312 21:11:17.673507 31456 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 21:11:17.673610 master-0 kubenswrapper[31456]: E0312 21:11:17.673598 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access podName:222b53b1-7e5c-49d5-9795-fec4d0547398 nodeName:}" failed. No retries permitted until 2026-03-12 21:13:19.673566324 +0000 UTC m=+260.748171692 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access") pod "installer-3-master-0" (UID: "222b53b1-7e5c-49d5-9795-fec4d0547398") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 12 21:11:19.195648 master-0 kubenswrapper[31456]: I0312 21:11:19.195547 31456 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="48512e02022680c9d90092634f0fc146" podUID="f00bf6ed-8795-4b8c-b36b-ec42642f70bf" Mar 12 21:11:23.410585 master-0 kubenswrapper[31456]: I0312 21:11:23.410502 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:11:25.845684 master-0 kubenswrapper[31456]: I0312 21:11:25.845561 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-pvnjq" Mar 12 21:11:26.062317 master-0 kubenswrapper[31456]: I0312 21:11:26.062248 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 12 21:11:26.178744 master-0 kubenswrapper[31456]: I0312 21:11:26.178620 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-vr86d" Mar 12 21:11:26.299064 master-0 kubenswrapper[31456]: I0312 21:11:26.298981 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 12 21:11:26.893754 master-0 kubenswrapper[31456]: I0312 21:11:26.893664 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 12 21:11:27.126374 master-0 kubenswrapper[31456]: I0312 21:11:27.126295 31456 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 12 21:11:27.208014 master-0 kubenswrapper[31456]: I0312 21:11:27.207876 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 12 21:11:27.509961 master-0 kubenswrapper[31456]: I0312 21:11:27.509698 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 12 21:11:27.637270 master-0 kubenswrapper[31456]: I0312 21:11:27.637183 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 12 21:11:27.640979 master-0 kubenswrapper[31456]: I0312 21:11:27.640910 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 12 21:11:28.287060 master-0 kubenswrapper[31456]: I0312 21:11:28.286990 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-kj7kz" Mar 12 21:11:28.317420 master-0 kubenswrapper[31456]: I0312 21:11:28.317365 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 12 21:11:28.341248 master-0 kubenswrapper[31456]: I0312 21:11:28.341173 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 12 21:11:28.364588 master-0 kubenswrapper[31456]: I0312 21:11:28.364503 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 12 21:11:28.411549 master-0 kubenswrapper[31456]: I0312 21:11:28.411485 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 12 21:11:28.447461 master-0 kubenswrapper[31456]: I0312 21:11:28.447406 31456 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 12 21:11:28.453851 master-0 kubenswrapper[31456]: I0312 21:11:28.453769 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 12 21:11:28.566795 master-0 kubenswrapper[31456]: I0312 21:11:28.566738 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 12 21:11:28.815915 master-0 kubenswrapper[31456]: I0312 21:11:28.810984 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 12 21:11:28.918522 master-0 kubenswrapper[31456]: I0312 21:11:28.918457 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 12 21:11:29.009309 master-0 kubenswrapper[31456]: I0312 21:11:29.009243 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 12 21:11:29.079884 master-0 kubenswrapper[31456]: I0312 21:11:29.078806 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 12 21:11:29.293421 master-0 kubenswrapper[31456]: I0312 21:11:29.293251 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 12 21:11:29.326146 master-0 kubenswrapper[31456]: I0312 21:11:29.326094 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 12 21:11:29.383117 master-0 kubenswrapper[31456]: I0312 21:11:29.383052 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 12 21:11:29.445436 master-0 kubenswrapper[31456]: I0312 21:11:29.445358 31456 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 12 21:11:29.517508 master-0 kubenswrapper[31456]: I0312 21:11:29.517433 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 12 21:11:29.564545 master-0 kubenswrapper[31456]: I0312 21:11:29.564408 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 12 21:11:29.803127 master-0 kubenswrapper[31456]: I0312 21:11:29.803071 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 12 21:11:29.807469 master-0 kubenswrapper[31456]: I0312 21:11:29.807322 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 12 21:11:29.822850 master-0 kubenswrapper[31456]: I0312 21:11:29.820866 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 12 21:11:29.829100 master-0 kubenswrapper[31456]: I0312 21:11:29.825610 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 12 21:11:29.829100 master-0 kubenswrapper[31456]: I0312 21:11:29.827678 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 12 21:11:30.019061 master-0 kubenswrapper[31456]: I0312 21:11:30.018992 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 12 21:11:30.150839 master-0 kubenswrapper[31456]: I0312 21:11:30.150672 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 12 21:11:30.198516 master-0 kubenswrapper[31456]: I0312 
21:11:30.198422 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 12 21:11:30.209895 master-0 kubenswrapper[31456]: I0312 21:11:30.209779 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 12 21:11:30.301490 master-0 kubenswrapper[31456]: I0312 21:11:30.301435 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 12 21:11:30.362215 master-0 kubenswrapper[31456]: I0312 21:11:30.362150 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 12 21:11:30.427028 master-0 kubenswrapper[31456]: I0312 21:11:30.426859 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 12 21:11:30.590270 master-0 kubenswrapper[31456]: I0312 21:11:30.590164 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 12 21:11:30.626959 master-0 kubenswrapper[31456]: I0312 21:11:30.626866 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 12 21:11:30.646207 master-0 kubenswrapper[31456]: I0312 21:11:30.646121 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-zfxcx" Mar 12 21:11:30.688269 master-0 kubenswrapper[31456]: I0312 21:11:30.688126 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-p5qt4" Mar 12 21:11:30.733831 master-0 kubenswrapper[31456]: I0312 21:11:30.733736 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 12 21:11:30.738406 master-0 
kubenswrapper[31456]: I0312 21:11:30.738361 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 12 21:11:30.825110 master-0 kubenswrapper[31456]: I0312 21:11:30.825056 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 12 21:11:30.837925 master-0 kubenswrapper[31456]: I0312 21:11:30.837807 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 12 21:11:30.856375 master-0 kubenswrapper[31456]: I0312 21:11:30.856321 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 12 21:11:30.870088 master-0 kubenswrapper[31456]: I0312 21:11:30.870023 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 12 21:11:30.952595 master-0 kubenswrapper[31456]: I0312 21:11:30.952452 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 12 21:11:30.960107 master-0 kubenswrapper[31456]: I0312 21:11:30.960069 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 12 21:11:30.969443 master-0 kubenswrapper[31456]: I0312 21:11:30.969382 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 21:11:30.984395 master-0 kubenswrapper[31456]: I0312 21:11:30.984284 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-7875j" Mar 12 21:11:31.060448 master-0 kubenswrapper[31456]: I0312 21:11:31.060365 31456 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 12 21:11:31.130986 master-0 kubenswrapper[31456]: I0312 21:11:31.130914 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 12 21:11:31.160368 master-0 kubenswrapper[31456]: I0312 21:11:31.160303 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 12 21:11:31.222735 master-0 kubenswrapper[31456]: I0312 21:11:31.222587 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 12 21:11:31.274407 master-0 kubenswrapper[31456]: I0312 21:11:31.272027 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 21:11:31.274407 master-0 kubenswrapper[31456]: I0312 21:11:31.273052 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 12 21:11:31.404855 master-0 kubenswrapper[31456]: I0312 21:11:31.404770 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 12 21:11:31.466151 master-0 kubenswrapper[31456]: I0312 21:11:31.466071 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Mar 12 21:11:31.655508 master-0 kubenswrapper[31456]: I0312 21:11:31.655430 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 12 21:11:31.690596 master-0 kubenswrapper[31456]: I0312 21:11:31.690503 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 12 21:11:31.720680 master-0 kubenswrapper[31456]: I0312 21:11:31.720581 31456 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-vmm2r" Mar 12 21:11:31.765088 master-0 kubenswrapper[31456]: I0312 21:11:31.764993 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 12 21:11:31.805149 master-0 kubenswrapper[31456]: I0312 21:11:31.805044 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 12 21:11:31.846515 master-0 kubenswrapper[31456]: I0312 21:11:31.846446 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 12 21:11:31.854912 master-0 kubenswrapper[31456]: I0312 21:11:31.854846 31456 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 12 21:11:31.892974 master-0 kubenswrapper[31456]: I0312 21:11:31.892801 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 12 21:11:31.895952 master-0 kubenswrapper[31456]: I0312 21:11:31.894661 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 12 21:11:32.005920 master-0 kubenswrapper[31456]: I0312 21:11:32.005794 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 12 21:11:32.081006 master-0 kubenswrapper[31456]: I0312 21:11:32.080945 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-rgtlp" Mar 12 21:11:32.181374 master-0 kubenswrapper[31456]: I0312 21:11:32.181297 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 12 21:11:32.254921 master-0 kubenswrapper[31456]: I0312 21:11:32.254789 
31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 12 21:11:32.300587 master-0 kubenswrapper[31456]: I0312 21:11:32.300464 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 12 21:11:32.345603 master-0 kubenswrapper[31456]: I0312 21:11:32.345541 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 12 21:11:32.381269 master-0 kubenswrapper[31456]: I0312 21:11:32.381230 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 12 21:11:32.433737 master-0 kubenswrapper[31456]: I0312 21:11:32.433665 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-h7jv4" Mar 12 21:11:32.436895 master-0 kubenswrapper[31456]: I0312 21:11:32.436830 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 12 21:11:32.437099 master-0 kubenswrapper[31456]: I0312 21:11:32.437045 31456 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 12 21:11:32.442078 master-0 kubenswrapper[31456]: I0312 21:11:32.442043 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 21:11:32.471681 master-0 kubenswrapper[31456]: I0312 21:11:32.471645 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 12 21:11:32.501581 master-0 kubenswrapper[31456]: I0312 21:11:32.501488 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 12 21:11:32.517885 master-0 kubenswrapper[31456]: I0312 21:11:32.517608 31456 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Mar 12 21:11:32.655102 master-0 kubenswrapper[31456]: I0312 21:11:32.655031 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 12 21:11:32.753469 master-0 kubenswrapper[31456]: I0312 21:11:32.753387 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 12 21:11:32.758569 master-0 kubenswrapper[31456]: I0312 21:11:32.758524 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Mar 12 21:11:32.761287 master-0 kubenswrapper[31456]: I0312 21:11:32.761201 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Mar 12 21:11:32.786609 master-0 kubenswrapper[31456]: I0312 21:11:32.786526 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 12 21:11:32.838600 master-0 kubenswrapper[31456]: I0312 21:11:32.838500 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 12 21:11:32.865952 master-0 kubenswrapper[31456]: I0312 21:11:32.865846 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Mar 12 21:11:32.876943 master-0 kubenswrapper[31456]: I0312 21:11:32.876758 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Mar 12 21:11:32.913068 master-0 kubenswrapper[31456]: I0312 21:11:32.912886 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 12 21:11:32.963174 master-0 kubenswrapper[31456]: I0312 21:11:32.963110 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Mar 12 21:11:32.965988 master-0 kubenswrapper[31456]: I0312 21:11:32.965948 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Mar 12 21:11:33.003717 master-0 kubenswrapper[31456]: I0312 21:11:33.003641 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 12 21:11:33.035102 master-0 kubenswrapper[31456]: I0312 21:11:33.035020 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 12 21:11:33.040803 master-0 kubenswrapper[31456]: I0312 21:11:33.040747 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 12 21:11:33.058220 master-0 kubenswrapper[31456]: I0312 21:11:33.058162 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-qthpm"
Mar 12 21:11:33.074836 master-0 kubenswrapper[31456]: I0312 21:11:33.074769 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Mar 12 21:11:33.097240 master-0 kubenswrapper[31456]: I0312 21:11:33.097153 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 12 21:11:33.140640 master-0 kubenswrapper[31456]: I0312 21:11:33.140561 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 12 21:11:33.158268 master-0 kubenswrapper[31456]: I0312 21:11:33.158206 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 12 21:11:33.230970 master-0 kubenswrapper[31456]: I0312 21:11:33.230750 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 12 21:11:33.274884 master-0 kubenswrapper[31456]: I0312 21:11:33.272679 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 12 21:11:33.274884 master-0 kubenswrapper[31456]: I0312 21:11:33.274054 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Mar 12 21:11:33.274884 master-0 kubenswrapper[31456]: I0312 21:11:33.274716 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 12 21:11:33.276649 master-0 kubenswrapper[31456]: I0312 21:11:33.276611 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-ct6dn"
Mar 12 21:11:33.348485 master-0 kubenswrapper[31456]: I0312 21:11:33.348378 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 12 21:11:33.426291 master-0 kubenswrapper[31456]: I0312 21:11:33.426190 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-t5dxh"
Mar 12 21:11:33.493380 master-0 kubenswrapper[31456]: I0312 21:11:33.493224 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-lrwqt"
Mar 12 21:11:33.663797 master-0 kubenswrapper[31456]: I0312 21:11:33.663726 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 12 21:11:33.699288 master-0 kubenswrapper[31456]: I0312 21:11:33.699243 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 12 21:11:33.729467 master-0 kubenswrapper[31456]: I0312 21:11:33.729414 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Mar 12 21:11:33.784340 master-0 kubenswrapper[31456]: I0312 21:11:33.784238 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 12 21:11:33.845862 master-0 kubenswrapper[31456]: I0312 21:11:33.845766 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-cdrqx"
Mar 12 21:11:33.862915 master-0 kubenswrapper[31456]: I0312 21:11:33.861199 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Mar 12 21:11:33.920186 master-0 kubenswrapper[31456]: I0312 21:11:33.920148 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-w9pdx"
Mar 12 21:11:33.944352 master-0 kubenswrapper[31456]: I0312 21:11:33.944333 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 12 21:11:33.990068 master-0 kubenswrapper[31456]: I0312 21:11:33.990020 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Mar 12 21:11:34.108253 master-0 kubenswrapper[31456]: I0312 21:11:34.108222 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 12 21:11:34.112242 master-0 kubenswrapper[31456]: I0312 21:11:34.112227 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 12 21:11:34.258147 master-0 kubenswrapper[31456]: I0312 21:11:34.258085 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 12 21:11:34.285768 master-0 kubenswrapper[31456]: I0312 21:11:34.285742 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 12 21:11:34.293220 master-0 kubenswrapper[31456]: I0312 21:11:34.293205 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 12 21:11:34.383396 master-0 kubenswrapper[31456]: I0312 21:11:34.383294 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Mar 12 21:11:34.469565 master-0 kubenswrapper[31456]: I0312 21:11:34.469533 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-9n54f"
Mar 12 21:11:34.512777 master-0 kubenswrapper[31456]: I0312 21:11:34.512719 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 12 21:11:34.576720 master-0 kubenswrapper[31456]: I0312 21:11:34.555460 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Mar 12 21:11:34.666540 master-0 kubenswrapper[31456]: I0312 21:11:34.666418 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 12 21:11:34.684898 master-0 kubenswrapper[31456]: I0312 21:11:34.684862 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Mar 12 21:11:34.745509 master-0 kubenswrapper[31456]: I0312 21:11:34.745474 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Mar 12 21:11:34.773609 master-0 kubenswrapper[31456]: I0312 21:11:34.773572 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Mar 12 21:11:34.784326 master-0 kubenswrapper[31456]: I0312 21:11:34.784293 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 12 21:11:34.822922 master-0 kubenswrapper[31456]: I0312 21:11:34.822885 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 12 21:11:34.872952 master-0 kubenswrapper[31456]: I0312 21:11:34.872884 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 12 21:11:34.914985 master-0 kubenswrapper[31456]: I0312 21:11:34.914953 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-xgssr"
Mar 12 21:11:34.918288 master-0 kubenswrapper[31456]: I0312 21:11:34.918215 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Mar 12 21:11:34.983876 master-0 kubenswrapper[31456]: I0312 21:11:34.983842 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Mar 12 21:11:35.023908 master-0 kubenswrapper[31456]: I0312 21:11:35.023873 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-f2k7z"
Mar 12 21:11:35.024531 master-0 kubenswrapper[31456]: I0312 21:11:35.024493 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Mar 12 21:11:35.061848 master-0 kubenswrapper[31456]: I0312 21:11:35.061822 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-4jamj9cd05on6"
Mar 12 21:11:35.083698 master-0 kubenswrapper[31456]: I0312 21:11:35.083680 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Mar 12 21:11:35.103516 master-0 kubenswrapper[31456]: I0312 21:11:35.103492 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-7gthf"
Mar 12 21:11:35.117544 master-0 kubenswrapper[31456]: I0312 21:11:35.117501 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 12 21:11:35.147390 master-0 kubenswrapper[31456]: I0312 21:11:35.147318 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Mar 12 21:11:35.406762 master-0 kubenswrapper[31456]: I0312 21:11:35.406686 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 12 21:11:35.521447 master-0 kubenswrapper[31456]: I0312 21:11:35.521375 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 12 21:11:35.586201 master-0 kubenswrapper[31456]: I0312 21:11:35.586131 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-n68ff"
Mar 12 21:11:35.593338 master-0 kubenswrapper[31456]: I0312 21:11:35.593292 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Mar 12 21:11:35.666117 master-0 kubenswrapper[31456]: I0312 21:11:35.665971 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Mar 12 21:11:35.713247 master-0 kubenswrapper[31456]: I0312 21:11:35.713166 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-62zgv"
Mar 12 21:11:35.787912 master-0 kubenswrapper[31456]: I0312 21:11:35.787859 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Mar 12 21:11:35.884554 master-0 kubenswrapper[31456]: I0312 21:11:35.884470 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Mar 12 21:11:35.885688 master-0 kubenswrapper[31456]: E0312 21:11:35.885615 31456 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[trusted-ca], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-console-operator/console-operator-6c7fb6b958-2lj8z" podUID="41520992-0499-4a93-bd1c-7814ffb84164"
Mar 12 21:11:35.901140 master-0 kubenswrapper[31456]: I0312 21:11:35.898377 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Mar 12 21:11:35.905552 master-0 kubenswrapper[31456]: I0312 21:11:35.905390 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Mar 12 21:11:35.915379 master-0 kubenswrapper[31456]: I0312 21:11:35.915291 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 12 21:11:35.992029 master-0 kubenswrapper[31456]: I0312 21:11:35.989369 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-mc5vw"
Mar 12 21:11:36.020847 master-0 kubenswrapper[31456]: I0312 21:11:36.020771 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 12 21:11:36.264448 master-0 kubenswrapper[31456]: I0312 21:11:36.264285 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Mar 12 21:11:36.277142 master-0 kubenswrapper[31456]: I0312 21:11:36.277073 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Mar 12 21:11:36.324535 master-0 kubenswrapper[31456]: I0312 21:11:36.324425 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-7t6bk"
Mar 12 21:11:36.426079 master-0 kubenswrapper[31456]: I0312 21:11:36.425991 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 12 21:11:36.566328 master-0 kubenswrapper[31456]: I0312 21:11:36.566266 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Mar 12 21:11:36.700413 master-0 kubenswrapper[31456]: I0312 21:11:36.700314 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 12 21:11:36.728016 master-0 kubenswrapper[31456]: I0312 21:11:36.727959 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Mar 12 21:11:36.757833 master-0 kubenswrapper[31456]: I0312 21:11:36.757706 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 12 21:11:36.781586 master-0 kubenswrapper[31456]: I0312 21:11:36.781535 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-5j2qf"
Mar 12 21:11:36.797111 master-0 kubenswrapper[31456]: I0312 21:11:36.797046 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Mar 12 21:11:36.823773 master-0 kubenswrapper[31456]: I0312 21:11:36.823658 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 12 21:11:36.869928 master-0 kubenswrapper[31456]: I0312 21:11:36.869835 31456 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 12 21:11:36.874998 master-0 kubenswrapper[31456]: I0312 21:11:36.874941 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 12 21:11:36.875103 master-0 kubenswrapper[31456]: I0312 21:11:36.875008 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 12 21:11:36.879027 master-0 kubenswrapper[31456]: I0312 21:11:36.878973 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 21:11:36.882566 master-0 kubenswrapper[31456]: I0312 21:11:36.882526 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 12 21:11:36.898596 master-0 kubenswrapper[31456]: I0312 21:11:36.898520 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-6c7fb6b958-2lj8z"
Mar 12 21:11:36.901706 master-0 kubenswrapper[31456]: I0312 21:11:36.901633 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=21.901604575 podStartE2EDuration="21.901604575s" podCreationTimestamp="2026-03-12 21:11:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:11:36.89851855 +0000 UTC m=+157.973123958" watchObservedRunningTime="2026-03-12 21:11:36.901604575 +0000 UTC m=+157.976209893"
Mar 12 21:11:37.233632 master-0 kubenswrapper[31456]: I0312 21:11:37.233499 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Mar 12 21:11:37.258767 master-0 kubenswrapper[31456]: I0312 21:11:37.258672 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4"]
Mar 12 21:11:37.259141 master-0 kubenswrapper[31456]: E0312 21:11:37.259099 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2acf6cf-3f66-48a3-b424-0ecdcfc21146" containerName="installer"
Mar 12 21:11:37.259141 master-0 kubenswrapper[31456]: I0312 21:11:37.259129 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2acf6cf-3f66-48a3-b424-0ecdcfc21146" containerName="installer"
Mar 12 21:11:37.259383 master-0 kubenswrapper[31456]: I0312 21:11:37.259347 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2acf6cf-3f66-48a3-b424-0ecdcfc21146" containerName="installer"
Mar 12 21:11:37.262293 master-0 kubenswrapper[31456]: I0312 21:11:37.262234 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4"
Mar 12 21:11:37.264437 master-0 kubenswrapper[31456]: I0312 21:11:37.264347 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics"
Mar 12 21:11:37.265416 master-0 kubenswrapper[31456]: I0312 21:11:37.265372 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls"
Mar 12 21:11:37.269909 master-0 kubenswrapper[31456]: I0312 21:11:37.265803 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web"
Mar 12 21:11:37.269909 master-0 kubenswrapper[31456]: I0312 21:11:37.266715 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules"
Mar 12 21:11:37.269909 master-0 kubenswrapper[31456]: I0312 21:11:37.267833 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy"
Mar 12 21:11:37.269909 master-0 kubenswrapper[31456]: I0312 21:11:37.268820 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-6n7kf9fsvodvc"
Mar 12 21:11:37.281096 master-0 kubenswrapper[31456]: I0312 21:11:37.281034 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4"]
Mar 12 21:11:37.321270 master-0 kubenswrapper[31456]: I0312 21:11:37.320849 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Mar 12 21:11:37.353841 master-0 kubenswrapper[31456]: I0312 21:11:37.353746 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/521ea6ff-1c6e-4633-8ded-b0ba87ab72b2-metrics-client-ca\") pod \"thanos-querier-79fcdfff7b-hh7d4\" (UID: \"521ea6ff-1c6e-4633-8ded-b0ba87ab72b2\") " pod="openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4"
Mar 12 21:11:37.354022 master-0 kubenswrapper[31456]: I0312 21:11:37.353869 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/521ea6ff-1c6e-4633-8ded-b0ba87ab72b2-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-79fcdfff7b-hh7d4\" (UID: \"521ea6ff-1c6e-4633-8ded-b0ba87ab72b2\") " pod="openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4"
Mar 12 21:11:37.354022 master-0 kubenswrapper[31456]: I0312 21:11:37.353965 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/521ea6ff-1c6e-4633-8ded-b0ba87ab72b2-secret-thanos-querier-tls\") pod \"thanos-querier-79fcdfff7b-hh7d4\" (UID: \"521ea6ff-1c6e-4633-8ded-b0ba87ab72b2\") " pod="openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4"
Mar 12 21:11:37.354022 master-0 kubenswrapper[31456]: I0312 21:11:37.354009 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mx56\" (UniqueName: \"kubernetes.io/projected/521ea6ff-1c6e-4633-8ded-b0ba87ab72b2-kube-api-access-9mx56\") pod \"thanos-querier-79fcdfff7b-hh7d4\" (UID: \"521ea6ff-1c6e-4633-8ded-b0ba87ab72b2\") " pod="openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4"
Mar 12 21:11:37.354177 master-0 kubenswrapper[31456]: I0312 21:11:37.354050 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/521ea6ff-1c6e-4633-8ded-b0ba87ab72b2-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-79fcdfff7b-hh7d4\" (UID: \"521ea6ff-1c6e-4633-8ded-b0ba87ab72b2\") " pod="openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4"
Mar 12 21:11:37.354483 master-0 kubenswrapper[31456]: I0312 21:11:37.354399 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/521ea6ff-1c6e-4633-8ded-b0ba87ab72b2-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-79fcdfff7b-hh7d4\" (UID: \"521ea6ff-1c6e-4633-8ded-b0ba87ab72b2\") " pod="openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4"
Mar 12 21:11:37.354652 master-0 kubenswrapper[31456]: I0312 21:11:37.354605 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/521ea6ff-1c6e-4633-8ded-b0ba87ab72b2-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-79fcdfff7b-hh7d4\" (UID: \"521ea6ff-1c6e-4633-8ded-b0ba87ab72b2\") " pod="openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4"
Mar 12 21:11:37.354721 master-0 kubenswrapper[31456]: I0312 21:11:37.354650 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/521ea6ff-1c6e-4633-8ded-b0ba87ab72b2-secret-grpc-tls\") pod \"thanos-querier-79fcdfff7b-hh7d4\" (UID: \"521ea6ff-1c6e-4633-8ded-b0ba87ab72b2\") " pod="openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4"
Mar 12 21:11:37.365932 master-0 kubenswrapper[31456]: I0312 21:11:37.365878 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 12 21:11:37.455466 master-0 kubenswrapper[31456]: I0312 21:11:37.455414 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-r4pnh"
Mar 12 21:11:37.455728 master-0 kubenswrapper[31456]: I0312 21:11:37.455634 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/521ea6ff-1c6e-4633-8ded-b0ba87ab72b2-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-79fcdfff7b-hh7d4\" (UID: \"521ea6ff-1c6e-4633-8ded-b0ba87ab72b2\") " pod="openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4"
Mar 12 21:11:37.455728 master-0 kubenswrapper[31456]: I0312 21:11:37.455706 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/521ea6ff-1c6e-4633-8ded-b0ba87ab72b2-secret-grpc-tls\") pod \"thanos-querier-79fcdfff7b-hh7d4\" (UID: \"521ea6ff-1c6e-4633-8ded-b0ba87ab72b2\") " pod="openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4"
Mar 12 21:11:37.455841 master-0 kubenswrapper[31456]: I0312 21:11:37.455774 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/521ea6ff-1c6e-4633-8ded-b0ba87ab72b2-metrics-client-ca\") pod \"thanos-querier-79fcdfff7b-hh7d4\" (UID: \"521ea6ff-1c6e-4633-8ded-b0ba87ab72b2\") " pod="openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4"
Mar 12 21:11:37.455841 master-0 kubenswrapper[31456]: I0312 21:11:37.455804 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/521ea6ff-1c6e-4633-8ded-b0ba87ab72b2-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-79fcdfff7b-hh7d4\" (UID: \"521ea6ff-1c6e-4633-8ded-b0ba87ab72b2\") " pod="openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4"
Mar 12 21:11:37.456246 master-0 kubenswrapper[31456]: I0312 21:11:37.456186 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/521ea6ff-1c6e-4633-8ded-b0ba87ab72b2-secret-thanos-querier-tls\") pod \"thanos-querier-79fcdfff7b-hh7d4\" (UID: \"521ea6ff-1c6e-4633-8ded-b0ba87ab72b2\") " pod="openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4"
Mar 12 21:11:37.456757 master-0 kubenswrapper[31456]: I0312 21:11:37.456397 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mx56\" (UniqueName: \"kubernetes.io/projected/521ea6ff-1c6e-4633-8ded-b0ba87ab72b2-kube-api-access-9mx56\") pod \"thanos-querier-79fcdfff7b-hh7d4\" (UID: \"521ea6ff-1c6e-4633-8ded-b0ba87ab72b2\") " pod="openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4"
Mar 12 21:11:37.456757 master-0 kubenswrapper[31456]: I0312 21:11:37.456453 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/521ea6ff-1c6e-4633-8ded-b0ba87ab72b2-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-79fcdfff7b-hh7d4\" (UID: \"521ea6ff-1c6e-4633-8ded-b0ba87ab72b2\") " pod="openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4"
Mar 12 21:11:37.456757 master-0 kubenswrapper[31456]: I0312 21:11:37.456577 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/521ea6ff-1c6e-4633-8ded-b0ba87ab72b2-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-79fcdfff7b-hh7d4\" (UID: \"521ea6ff-1c6e-4633-8ded-b0ba87ab72b2\") " pod="openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4"
Mar 12 21:11:37.456935 master-0 kubenswrapper[31456]: I0312 21:11:37.456892 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/521ea6ff-1c6e-4633-8ded-b0ba87ab72b2-metrics-client-ca\") pod \"thanos-querier-79fcdfff7b-hh7d4\" (UID: \"521ea6ff-1c6e-4633-8ded-b0ba87ab72b2\") " pod="openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4"
Mar 12 21:11:37.463884 master-0 kubenswrapper[31456]: I0312 21:11:37.460325 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/521ea6ff-1c6e-4633-8ded-b0ba87ab72b2-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-79fcdfff7b-hh7d4\" (UID: \"521ea6ff-1c6e-4633-8ded-b0ba87ab72b2\") " pod="openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4"
Mar 12 21:11:37.475150 master-0 kubenswrapper[31456]: I0312 21:11:37.474975 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/521ea6ff-1c6e-4633-8ded-b0ba87ab72b2-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-79fcdfff7b-hh7d4\" (UID: \"521ea6ff-1c6e-4633-8ded-b0ba87ab72b2\") " pod="openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4"
Mar 12 21:11:37.475344 master-0 kubenswrapper[31456]: I0312 21:11:37.475215 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Mar 12 21:11:37.475344 master-0 kubenswrapper[31456]: I0312 21:11:37.475255 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/521ea6ff-1c6e-4633-8ded-b0ba87ab72b2-secret-thanos-querier-tls\") pod \"thanos-querier-79fcdfff7b-hh7d4\" (UID: \"521ea6ff-1c6e-4633-8ded-b0ba87ab72b2\") " pod="openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4"
Mar 12 21:11:37.475820 master-0 kubenswrapper[31456]: I0312 21:11:37.475760 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/521ea6ff-1c6e-4633-8ded-b0ba87ab72b2-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-79fcdfff7b-hh7d4\" (UID: \"521ea6ff-1c6e-4633-8ded-b0ba87ab72b2\") " pod="openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4"
Mar 12 21:11:37.475905 master-0 kubenswrapper[31456]: I0312 21:11:37.475845 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Mar 12 21:11:37.475954 master-0 kubenswrapper[31456]: I0312 21:11:37.475909 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/521ea6ff-1c6e-4633-8ded-b0ba87ab72b2-secret-grpc-tls\") pod \"thanos-querier-79fcdfff7b-hh7d4\" (UID: \"521ea6ff-1c6e-4633-8ded-b0ba87ab72b2\") " pod="openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4"
Mar 12 21:11:37.477871 master-0 kubenswrapper[31456]: I0312 21:11:37.477790 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/521ea6ff-1c6e-4633-8ded-b0ba87ab72b2-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-79fcdfff7b-hh7d4\" (UID: \"521ea6ff-1c6e-4633-8ded-b0ba87ab72b2\") " pod="openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4"
Mar 12 21:11:37.481413 master-0 kubenswrapper[31456]: I0312 21:11:37.481365 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 12 21:11:37.487882 master-0 kubenswrapper[31456]: I0312 21:11:37.486935 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mx56\" (UniqueName: \"kubernetes.io/projected/521ea6ff-1c6e-4633-8ded-b0ba87ab72b2-kube-api-access-9mx56\") pod \"thanos-querier-79fcdfff7b-hh7d4\" (UID: \"521ea6ff-1c6e-4633-8ded-b0ba87ab72b2\") " pod="openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4"
Mar 12 21:11:37.505312 master-0 kubenswrapper[31456]: I0312 21:11:37.505262 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Mar 12 21:11:37.561029 master-0 kubenswrapper[31456]: I0312 21:11:37.560971 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Mar 12 21:11:37.562155 master-0 kubenswrapper[31456]: I0312 21:11:37.562113 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Mar 12 21:11:37.590394 master-0 kubenswrapper[31456]: I0312 21:11:37.590336 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4"
Mar 12 21:11:37.707327 master-0 kubenswrapper[31456]: I0312 21:11:37.707272 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Mar 12 21:11:37.718487 master-0 kubenswrapper[31456]: I0312 21:11:37.718410 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Mar 12 21:11:37.790763 master-0 kubenswrapper[31456]: I0312 21:11:37.789151 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 12 21:11:37.790763 master-0 kubenswrapper[31456]: I0312 21:11:37.789258 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Mar 12 21:11:37.797036 master-0 kubenswrapper[31456]: I0312 21:11:37.795639 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Mar 12 21:11:37.832633 master-0 kubenswrapper[31456]: I0312 21:11:37.821518 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 12 21:11:37.877640 master-0 kubenswrapper[31456]: I0312 21:11:37.877600 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Mar 12 21:11:37.930091 master-0 kubenswrapper[31456]: I0312 21:11:37.929942 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 12 21:11:38.031240 master-0 kubenswrapper[31456]: I0312 21:11:38.031186 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4"]
Mar 12 21:11:38.164746 master-0 kubenswrapper[31456]: I0312 21:11:38.164681 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 12 21:11:38.171095 master-0 kubenswrapper[31456]: I0312 21:11:38.171038 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 12 21:11:38.208464 master-0 kubenswrapper[31456]: I0312 21:11:38.208398 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 12 21:11:38.210957 master-0 kubenswrapper[31456]: I0312 21:11:38.210916 31456 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 12 21:11:38.211254 master-0 kubenswrapper[31456]: I0312 21:11:38.211211 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="3a18cac8a90d6913a6a0391d805cddc9" containerName="startup-monitor" containerID="cri-o://5602108289e0d6fb3ee47faac9e66b04faa7f735fa58b959b3373d903df0c765" gracePeriod=5
Mar 12 21:11:38.219965 master-0 kubenswrapper[31456]: I0312 21:11:38.219929 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 12 21:11:38.252928 master-0 kubenswrapper[31456]: I0312 21:11:38.252891 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 12 21:11:38.278113 master-0 kubenswrapper[31456]: I0312 21:11:38.278070 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 12 21:11:38.282388 master-0 kubenswrapper[31456]: I0312 21:11:38.282358 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 12 21:11:38.350536 master-0 kubenswrapper[31456]: I0312 21:11:38.350485 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 12 21:11:38.389984 master-0 kubenswrapper[31456]: I0312 21:11:38.389945 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Mar 12 21:11:38.403337 master-0 kubenswrapper[31456]: I0312 21:11:38.403304 31456 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 12 21:11:38.430181 master-0 kubenswrapper[31456]: I0312 21:11:38.430150 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 12 21:11:38.642048 master-0 kubenswrapper[31456]: I0312 21:11:38.641928 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 12 21:11:38.658270 master-0 kubenswrapper[31456]: I0312 21:11:38.658244 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-xjkth"
Mar 12 21:11:38.799005 master-0 kubenswrapper[31456]: I0312 21:11:38.798931 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 12 21:11:38.867273 master-0 kubenswrapper[31456]: I0312 21:11:38.867234 31456 reflector.go:368] Caches populated for *v1.ConfigMap from
object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 12 21:11:38.913710 master-0 kubenswrapper[31456]: I0312 21:11:38.913574 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4" event={"ID":"521ea6ff-1c6e-4633-8ded-b0ba87ab72b2","Type":"ContainerStarted","Data":"81a1691246dfbb69a85bbd0fe275dce04439c61924f17c9c894ffd42a1951840"} Mar 12 21:11:38.916951 master-0 kubenswrapper[31456]: I0312 21:11:38.916887 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 12 21:11:38.996754 master-0 kubenswrapper[31456]: I0312 21:11:38.996694 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 12 21:11:39.090456 master-0 kubenswrapper[31456]: I0312 21:11:39.090162 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 12 21:11:39.201451 master-0 kubenswrapper[31456]: I0312 21:11:39.201263 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 12 21:11:39.270002 master-0 kubenswrapper[31456]: I0312 21:11:39.269963 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 12 21:11:39.388187 master-0 kubenswrapper[31456]: I0312 21:11:39.387933 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 12 21:11:39.454277 master-0 kubenswrapper[31456]: I0312 21:11:39.454108 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 12 21:11:39.631387 master-0 kubenswrapper[31456]: I0312 21:11:39.631306 31456 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 12 21:11:39.632559 master-0 kubenswrapper[31456]: 
I0312 21:11:39.632254 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 12 21:11:39.731433 master-0 kubenswrapper[31456]: I0312 21:11:39.731296 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 12 21:11:39.833226 master-0 kubenswrapper[31456]: I0312 21:11:39.833141 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-bk87n" Mar 12 21:11:39.880301 master-0 kubenswrapper[31456]: I0312 21:11:39.880225 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-6ccccb478b-5r76x"] Mar 12 21:11:39.880586 master-0 kubenswrapper[31456]: E0312 21:11:39.880573 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a18cac8a90d6913a6a0391d805cddc9" containerName="startup-monitor" Mar 12 21:11:39.880670 master-0 kubenswrapper[31456]: I0312 21:11:39.880594 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a18cac8a90d6913a6a0391d805cddc9" containerName="startup-monitor" Mar 12 21:11:39.881155 master-0 kubenswrapper[31456]: I0312 21:11:39.881124 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a18cac8a90d6913a6a0391d805cddc9" containerName="startup-monitor" Mar 12 21:11:39.881841 master-0 kubenswrapper[31456]: I0312 21:11:39.881778 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-6ccccb478b-5r76x" Mar 12 21:11:39.883841 master-0 kubenswrapper[31456]: I0312 21:11:39.883788 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-74bpcql1t9em9" Mar 12 21:11:39.892619 master-0 kubenswrapper[31456]: I0312 21:11:39.892558 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-5bbfd655db-2tsb8"] Mar 12 21:11:39.892971 master-0 kubenswrapper[31456]: I0312 21:11:39.892920 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" podUID="33beea0b-f77b-4388-a9c8-5710f084f961" containerName="metrics-server" containerID="cri-o://41a3e30c6d901d9b64d6fa8e2b3f70dcb07dc618b579112d28d71b51408b9a9a" gracePeriod=170 Mar 12 21:11:39.895156 master-0 kubenswrapper[31456]: I0312 21:11:39.895088 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ccb03070-75ac-4cc7-9213-9a35d4e3f1c5-audit-log\") pod \"metrics-server-6ccccb478b-5r76x\" (UID: \"ccb03070-75ac-4cc7-9213-9a35d4e3f1c5\") " pod="openshift-monitoring/metrics-server-6ccccb478b-5r76x" Mar 12 21:11:39.895264 master-0 kubenswrapper[31456]: I0312 21:11:39.895215 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccb03070-75ac-4cc7-9213-9a35d4e3f1c5-client-ca-bundle\") pod \"metrics-server-6ccccb478b-5r76x\" (UID: \"ccb03070-75ac-4cc7-9213-9a35d4e3f1c5\") " pod="openshift-monitoring/metrics-server-6ccccb478b-5r76x" Mar 12 21:11:39.895337 master-0 kubenswrapper[31456]: I0312 21:11:39.895322 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbvp7\" (UniqueName: 
\"kubernetes.io/projected/ccb03070-75ac-4cc7-9213-9a35d4e3f1c5-kube-api-access-zbvp7\") pod \"metrics-server-6ccccb478b-5r76x\" (UID: \"ccb03070-75ac-4cc7-9213-9a35d4e3f1c5\") " pod="openshift-monitoring/metrics-server-6ccccb478b-5r76x" Mar 12 21:11:39.895419 master-0 kubenswrapper[31456]: I0312 21:11:39.895360 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ccb03070-75ac-4cc7-9213-9a35d4e3f1c5-secret-metrics-server-tls\") pod \"metrics-server-6ccccb478b-5r76x\" (UID: \"ccb03070-75ac-4cc7-9213-9a35d4e3f1c5\") " pod="openshift-monitoring/metrics-server-6ccccb478b-5r76x" Mar 12 21:11:39.895419 master-0 kubenswrapper[31456]: I0312 21:11:39.895403 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccb03070-75ac-4cc7-9213-9a35d4e3f1c5-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6ccccb478b-5r76x\" (UID: \"ccb03070-75ac-4cc7-9213-9a35d4e3f1c5\") " pod="openshift-monitoring/metrics-server-6ccccb478b-5r76x" Mar 12 21:11:39.895548 master-0 kubenswrapper[31456]: I0312 21:11:39.895431 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ccb03070-75ac-4cc7-9213-9a35d4e3f1c5-secret-metrics-client-certs\") pod \"metrics-server-6ccccb478b-5r76x\" (UID: \"ccb03070-75ac-4cc7-9213-9a35d4e3f1c5\") " pod="openshift-monitoring/metrics-server-6ccccb478b-5r76x" Mar 12 21:11:39.895669 master-0 kubenswrapper[31456]: I0312 21:11:39.895613 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ccb03070-75ac-4cc7-9213-9a35d4e3f1c5-metrics-server-audit-profiles\") pod \"metrics-server-6ccccb478b-5r76x\" 
(UID: \"ccb03070-75ac-4cc7-9213-9a35d4e3f1c5\") " pod="openshift-monitoring/metrics-server-6ccccb478b-5r76x" Mar 12 21:11:39.912875 master-0 kubenswrapper[31456]: I0312 21:11:39.912766 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-6ccccb478b-5r76x"] Mar 12 21:11:39.915390 master-0 kubenswrapper[31456]: I0312 21:11:39.915341 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 12 21:11:39.921001 master-0 kubenswrapper[31456]: I0312 21:11:39.920956 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Mar 12 21:11:39.949932 master-0 kubenswrapper[31456]: I0312 21:11:39.942382 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 12 21:11:39.997509 master-0 kubenswrapper[31456]: I0312 21:11:39.996903 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccb03070-75ac-4cc7-9213-9a35d4e3f1c5-client-ca-bundle\") pod \"metrics-server-6ccccb478b-5r76x\" (UID: \"ccb03070-75ac-4cc7-9213-9a35d4e3f1c5\") " pod="openshift-monitoring/metrics-server-6ccccb478b-5r76x" Mar 12 21:11:39.997509 master-0 kubenswrapper[31456]: I0312 21:11:39.997074 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbvp7\" (UniqueName: \"kubernetes.io/projected/ccb03070-75ac-4cc7-9213-9a35d4e3f1c5-kube-api-access-zbvp7\") pod \"metrics-server-6ccccb478b-5r76x\" (UID: \"ccb03070-75ac-4cc7-9213-9a35d4e3f1c5\") " pod="openshift-monitoring/metrics-server-6ccccb478b-5r76x" Mar 12 21:11:39.997509 master-0 kubenswrapper[31456]: I0312 21:11:39.997133 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: 
\"kubernetes.io/secret/ccb03070-75ac-4cc7-9213-9a35d4e3f1c5-secret-metrics-server-tls\") pod \"metrics-server-6ccccb478b-5r76x\" (UID: \"ccb03070-75ac-4cc7-9213-9a35d4e3f1c5\") " pod="openshift-monitoring/metrics-server-6ccccb478b-5r76x" Mar 12 21:11:39.997509 master-0 kubenswrapper[31456]: I0312 21:11:39.997182 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccb03070-75ac-4cc7-9213-9a35d4e3f1c5-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6ccccb478b-5r76x\" (UID: \"ccb03070-75ac-4cc7-9213-9a35d4e3f1c5\") " pod="openshift-monitoring/metrics-server-6ccccb478b-5r76x" Mar 12 21:11:39.997509 master-0 kubenswrapper[31456]: I0312 21:11:39.997228 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ccb03070-75ac-4cc7-9213-9a35d4e3f1c5-secret-metrics-client-certs\") pod \"metrics-server-6ccccb478b-5r76x\" (UID: \"ccb03070-75ac-4cc7-9213-9a35d4e3f1c5\") " pod="openshift-monitoring/metrics-server-6ccccb478b-5r76x" Mar 12 21:11:39.997509 master-0 kubenswrapper[31456]: I0312 21:11:39.997293 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ccb03070-75ac-4cc7-9213-9a35d4e3f1c5-metrics-server-audit-profiles\") pod \"metrics-server-6ccccb478b-5r76x\" (UID: \"ccb03070-75ac-4cc7-9213-9a35d4e3f1c5\") " pod="openshift-monitoring/metrics-server-6ccccb478b-5r76x" Mar 12 21:11:39.999071 master-0 kubenswrapper[31456]: I0312 21:11:39.998773 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ccb03070-75ac-4cc7-9213-9a35d4e3f1c5-audit-log\") pod \"metrics-server-6ccccb478b-5r76x\" (UID: \"ccb03070-75ac-4cc7-9213-9a35d4e3f1c5\") " 
pod="openshift-monitoring/metrics-server-6ccccb478b-5r76x" Mar 12 21:11:39.999071 master-0 kubenswrapper[31456]: I0312 21:11:39.998944 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ccb03070-75ac-4cc7-9213-9a35d4e3f1c5-audit-log\") pod \"metrics-server-6ccccb478b-5r76x\" (UID: \"ccb03070-75ac-4cc7-9213-9a35d4e3f1c5\") " pod="openshift-monitoring/metrics-server-6ccccb478b-5r76x" Mar 12 21:11:40.002852 master-0 kubenswrapper[31456]: I0312 21:11:40.000452 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ccb03070-75ac-4cc7-9213-9a35d4e3f1c5-metrics-server-audit-profiles\") pod \"metrics-server-6ccccb478b-5r76x\" (UID: \"ccb03070-75ac-4cc7-9213-9a35d4e3f1c5\") " pod="openshift-monitoring/metrics-server-6ccccb478b-5r76x" Mar 12 21:11:40.002852 master-0 kubenswrapper[31456]: I0312 21:11:40.000909 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccb03070-75ac-4cc7-9213-9a35d4e3f1c5-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6ccccb478b-5r76x\" (UID: \"ccb03070-75ac-4cc7-9213-9a35d4e3f1c5\") " pod="openshift-monitoring/metrics-server-6ccccb478b-5r76x" Mar 12 21:11:40.004511 master-0 kubenswrapper[31456]: I0312 21:11:40.004472 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ccb03070-75ac-4cc7-9213-9a35d4e3f1c5-secret-metrics-server-tls\") pod \"metrics-server-6ccccb478b-5r76x\" (UID: \"ccb03070-75ac-4cc7-9213-9a35d4e3f1c5\") " pod="openshift-monitoring/metrics-server-6ccccb478b-5r76x" Mar 12 21:11:40.005191 master-0 kubenswrapper[31456]: I0312 21:11:40.005143 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: 
\"kubernetes.io/secret/ccb03070-75ac-4cc7-9213-9a35d4e3f1c5-secret-metrics-client-certs\") pod \"metrics-server-6ccccb478b-5r76x\" (UID: \"ccb03070-75ac-4cc7-9213-9a35d4e3f1c5\") " pod="openshift-monitoring/metrics-server-6ccccb478b-5r76x" Mar 12 21:11:40.008116 master-0 kubenswrapper[31456]: I0312 21:11:40.008079 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 12 21:11:40.011031 master-0 kubenswrapper[31456]: I0312 21:11:40.010985 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccb03070-75ac-4cc7-9213-9a35d4e3f1c5-client-ca-bundle\") pod \"metrics-server-6ccccb478b-5r76x\" (UID: \"ccb03070-75ac-4cc7-9213-9a35d4e3f1c5\") " pod="openshift-monitoring/metrics-server-6ccccb478b-5r76x" Mar 12 21:11:40.026863 master-0 kubenswrapper[31456]: I0312 21:11:40.026453 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbvp7\" (UniqueName: \"kubernetes.io/projected/ccb03070-75ac-4cc7-9213-9a35d4e3f1c5-kube-api-access-zbvp7\") pod \"metrics-server-6ccccb478b-5r76x\" (UID: \"ccb03070-75ac-4cc7-9213-9a35d4e3f1c5\") " pod="openshift-monitoring/metrics-server-6ccccb478b-5r76x" Mar 12 21:11:40.041202 master-0 kubenswrapper[31456]: I0312 21:11:40.041156 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 12 21:11:40.045854 master-0 kubenswrapper[31456]: I0312 21:11:40.045819 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 12 21:11:40.078739 master-0 kubenswrapper[31456]: I0312 21:11:40.078691 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-v7qw9" Mar 12 21:11:40.105647 master-0 kubenswrapper[31456]: I0312 21:11:40.105589 31456 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-insights"/"openshift-service-ca.crt" Mar 12 21:11:40.111093 master-0 kubenswrapper[31456]: I0312 21:11:40.110916 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 12 21:11:40.114920 master-0 kubenswrapper[31456]: I0312 21:11:40.114889 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 12 21:11:40.152250 master-0 kubenswrapper[31456]: I0312 21:11:40.152205 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 12 21:11:40.157636 master-0 kubenswrapper[31456]: I0312 21:11:40.157607 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 12 21:11:40.203801 master-0 kubenswrapper[31456]: I0312 21:11:40.203734 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-6ccccb478b-5r76x" Mar 12 21:11:40.241783 master-0 kubenswrapper[31456]: I0312 21:11:40.241736 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 12 21:11:40.250581 master-0 kubenswrapper[31456]: I0312 21:11:40.250472 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 12 21:11:40.325825 master-0 kubenswrapper[31456]: I0312 21:11:40.325735 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 12 21:11:40.386518 master-0 kubenswrapper[31456]: I0312 21:11:40.386474 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 12 21:11:40.417243 master-0 kubenswrapper[31456]: I0312 21:11:40.417195 31456 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 12 21:11:40.435303 master-0 kubenswrapper[31456]: I0312 21:11:40.435265 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 12 21:11:40.453456 master-0 kubenswrapper[31456]: I0312 21:11:40.453356 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 12 21:11:40.507851 master-0 kubenswrapper[31456]: I0312 21:11:40.507711 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 12 21:11:40.526896 master-0 kubenswrapper[31456]: I0312 21:11:40.526844 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 12 21:11:40.565514 master-0 kubenswrapper[31456]: I0312 21:11:40.565416 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 12 21:11:40.594159 master-0 kubenswrapper[31456]: I0312 21:11:40.594113 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-f29rj" Mar 12 21:11:40.621962 master-0 kubenswrapper[31456]: I0312 21:11:40.621898 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 21:11:40.634822 master-0 kubenswrapper[31456]: I0312 21:11:40.634775 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 12 21:11:40.659786 master-0 kubenswrapper[31456]: I0312 21:11:40.659739 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 12 21:11:40.781973 master-0 kubenswrapper[31456]: I0312 21:11:40.741067 31456 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 12 21:11:40.781973 master-0 kubenswrapper[31456]: I0312 21:11:40.743919 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 12 21:11:40.811508 master-0 kubenswrapper[31456]: I0312 21:11:40.811455 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41520992-0499-4a93-bd1c-7814ffb84164-trusted-ca\") pod \"console-operator-6c7fb6b958-2lj8z\" (UID: \"41520992-0499-4a93-bd1c-7814ffb84164\") " pod="openshift-console-operator/console-operator-6c7fb6b958-2lj8z" Mar 12 21:11:40.811691 master-0 kubenswrapper[31456]: E0312 21:11:40.811645 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41520992-0499-4a93-bd1c-7814ffb84164-trusted-ca podName:41520992-0499-4a93-bd1c-7814ffb84164 nodeName:}" failed. No retries permitted until 2026-03-12 21:13:42.811613232 +0000 UTC m=+283.886218560 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/41520992-0499-4a93-bd1c-7814ffb84164-trusted-ca") pod "console-operator-6c7fb6b958-2lj8z" (UID: "41520992-0499-4a93-bd1c-7814ffb84164") : configmap references non-existent config key: ca-bundle.crt Mar 12 21:11:40.873006 master-0 kubenswrapper[31456]: I0312 21:11:40.872901 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 12 21:11:40.876629 master-0 kubenswrapper[31456]: I0312 21:11:40.876599 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 12 21:11:40.932584 master-0 kubenswrapper[31456]: I0312 21:11:40.929873 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4" event={"ID":"521ea6ff-1c6e-4633-8ded-b0ba87ab72b2","Type":"ContainerStarted","Data":"8ecca95f27aed89ef6a09bc33e5c8d5eabc2a50f690563eea2e03cbc7cad66cf"} Mar 12 21:11:40.932584 master-0 kubenswrapper[31456]: I0312 21:11:40.932210 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Mar 12 21:11:40.952197 master-0 kubenswrapper[31456]: I0312 21:11:40.952147 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 12 21:11:40.966470 master-0 kubenswrapper[31456]: I0312 21:11:40.966442 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 12 21:11:40.982353 master-0 kubenswrapper[31456]: I0312 21:11:40.982280 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 12 21:11:40.984339 master-0 kubenswrapper[31456]: I0312 21:11:40.984289 31456 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-bxh97" Mar 12 21:11:41.233208 master-0 kubenswrapper[31456]: I0312 21:11:41.233129 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 12 21:11:41.339882 master-0 kubenswrapper[31456]: I0312 21:11:41.339768 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 12 21:11:41.344150 master-0 kubenswrapper[31456]: I0312 21:11:41.344091 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 12 21:11:41.424130 master-0 kubenswrapper[31456]: I0312 21:11:41.424044 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-6ccccb478b-5r76x"] Mar 12 21:11:41.480946 master-0 kubenswrapper[31456]: I0312 21:11:41.480789 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 12 21:11:41.484996 master-0 kubenswrapper[31456]: I0312 21:11:41.484943 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 12 21:11:41.496748 master-0 kubenswrapper[31456]: I0312 21:11:41.496142 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 12 21:11:41.929935 master-0 kubenswrapper[31456]: I0312 21:11:41.928871 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 12 21:11:41.942355 master-0 kubenswrapper[31456]: I0312 21:11:41.942312 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4" 
event={"ID":"521ea6ff-1c6e-4633-8ded-b0ba87ab72b2","Type":"ContainerStarted","Data":"9baebd43802ad43590db1e487490f965bd4169e2568fac045b8a908ec15d0452"}
Mar 12 21:11:41.942604 master-0 kubenswrapper[31456]: I0312 21:11:41.942586 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4" event={"ID":"521ea6ff-1c6e-4633-8ded-b0ba87ab72b2","Type":"ContainerStarted","Data":"edf1117d31fcf7f4d17e7f082940c81dc41c41b9a4e92716e4335c74df9b921b"}
Mar 12 21:11:41.944509 master-0 kubenswrapper[31456]: I0312 21:11:41.944451 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-6ccccb478b-5r76x" event={"ID":"ccb03070-75ac-4cc7-9213-9a35d4e3f1c5","Type":"ContainerStarted","Data":"d6e75fa7d9e7f2cc1a9c978e093586be49dbd4a358b29fbdfbace4095cb47fb0"}
Mar 12 21:11:41.944651 master-0 kubenswrapper[31456]: I0312 21:11:41.944630 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-6ccccb478b-5r76x" event={"ID":"ccb03070-75ac-4cc7-9213-9a35d4e3f1c5","Type":"ContainerStarted","Data":"99a4dad4976d1d9975dfeb11094e8b2695f1b24d24863c27f5a8f564a8c4b077"}
Mar 12 21:11:41.953827 master-0 kubenswrapper[31456]: I0312 21:11:41.953783 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Mar 12 21:11:41.991974 master-0 kubenswrapper[31456]: I0312 21:11:41.991881 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-6ccccb478b-5r76x" podStartSLOduration=2.991860139 podStartE2EDuration="2.991860139s" podCreationTimestamp="2026-03-12 21:11:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:11:41.976465904 +0000 UTC m=+163.051071242" watchObservedRunningTime="2026-03-12 21:11:41.991860139 +0000 UTC m=+163.066465487"
Mar 12 21:11:42.008971 master-0 kubenswrapper[31456]: I0312 21:11:42.008913 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Mar 12 21:11:42.015604 master-0 kubenswrapper[31456]: I0312 21:11:42.015570 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 12 21:11:42.022671 master-0 kubenswrapper[31456]: I0312 21:11:42.022557 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 12 21:11:42.262430 master-0 kubenswrapper[31456]: I0312 21:11:42.262327 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 12 21:11:42.310664 master-0 kubenswrapper[31456]: I0312 21:11:42.310613 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 12 21:11:42.317573 master-0 kubenswrapper[31456]: I0312 21:11:42.317543 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Mar 12 21:11:42.356721 master-0 kubenswrapper[31456]: I0312 21:11:42.356672 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 12 21:11:42.549996 master-0 kubenswrapper[31456]: I0312 21:11:42.549914 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Mar 12 21:11:42.907871 master-0 kubenswrapper[31456]: I0312 21:11:42.907791 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 12 21:11:42.957624 master-0 kubenswrapper[31456]: I0312 21:11:42.957544 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4" event={"ID":"521ea6ff-1c6e-4633-8ded-b0ba87ab72b2","Type":"ContainerStarted","Data":"c31086f6cd04e4f565b6f3af5fc1a44308182b7a0f6a94fbddf366b90535ad4d"}
Mar 12 21:11:42.958482 master-0 kubenswrapper[31456]: I0312 21:11:42.957618 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4" event={"ID":"521ea6ff-1c6e-4633-8ded-b0ba87ab72b2","Type":"ContainerStarted","Data":"3ec6c3485ac170780b40a3c6907496cee6333908ab4e864ff42186cc2d6f96e7"}
Mar 12 21:11:42.975488 master-0 kubenswrapper[31456]: I0312 21:11:42.975395 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Mar 12 21:11:43.021179 master-0 kubenswrapper[31456]: I0312 21:11:43.021123 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 12 21:11:43.515070 master-0 kubenswrapper[31456]: I0312 21:11:43.514936 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Mar 12 21:11:43.525476 master-0 kubenswrapper[31456]: I0312 21:11:43.525427 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Mar 12 21:11:43.817157 master-0 kubenswrapper[31456]: I0312 21:11:43.817093 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_3a18cac8a90d6913a6a0391d805cddc9/startup-monitor/0.log"
Mar 12 21:11:43.817308 master-0 kubenswrapper[31456]: I0312 21:11:43.817185 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 21:11:43.876398 master-0 kubenswrapper[31456]: I0312 21:11:43.876304 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir\") pod \"3a18cac8a90d6913a6a0391d805cddc9\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") "
Mar 12 21:11:43.876680 master-0 kubenswrapper[31456]: I0312 21:11:43.876510 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests\") pod \"3a18cac8a90d6913a6a0391d805cddc9\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") "
Mar 12 21:11:43.876680 master-0 kubenswrapper[31456]: I0312 21:11:43.876569 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log\") pod \"3a18cac8a90d6913a6a0391d805cddc9\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") "
Mar 12 21:11:43.876680 master-0 kubenswrapper[31456]: I0312 21:11:43.876644 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests" (OuterVolumeSpecName: "manifests") pod "3a18cac8a90d6913a6a0391d805cddc9" (UID: "3a18cac8a90d6913a6a0391d805cddc9"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 21:11:43.876982 master-0 kubenswrapper[31456]: I0312 21:11:43.876678 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock\") pod \"3a18cac8a90d6913a6a0391d805cddc9\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") "
Mar 12 21:11:43.876982 master-0 kubenswrapper[31456]: I0312 21:11:43.876711 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log" (OuterVolumeSpecName: "var-log") pod "3a18cac8a90d6913a6a0391d805cddc9" (UID: "3a18cac8a90d6913a6a0391d805cddc9"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 21:11:43.876982 master-0 kubenswrapper[31456]: I0312 21:11:43.876809 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir\") pod \"3a18cac8a90d6913a6a0391d805cddc9\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") "
Mar 12 21:11:43.876982 master-0 kubenswrapper[31456]: I0312 21:11:43.876882 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a18cac8a90d6913a6a0391d805cddc9" (UID: "3a18cac8a90d6913a6a0391d805cddc9"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 21:11:43.876982 master-0 kubenswrapper[31456]: I0312 21:11:43.876889 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock" (OuterVolumeSpecName: "var-lock") pod "3a18cac8a90d6913a6a0391d805cddc9" (UID: "3a18cac8a90d6913a6a0391d805cddc9"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 21:11:43.877379 master-0 kubenswrapper[31456]: I0312 21:11:43.877336 31456 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 21:11:43.877379 master-0 kubenswrapper[31456]: I0312 21:11:43.877375 31456 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests\") on node \"master-0\" DevicePath \"\""
Mar 12 21:11:43.877524 master-0 kubenswrapper[31456]: I0312 21:11:43.877395 31456 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log\") on node \"master-0\" DevicePath \"\""
Mar 12 21:11:43.877524 master-0 kubenswrapper[31456]: I0312 21:11:43.877415 31456 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 12 21:11:43.884710 master-0 kubenswrapper[31456]: I0312 21:11:43.884625 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "3a18cac8a90d6913a6a0391d805cddc9" (UID: "3a18cac8a90d6913a6a0391d805cddc9"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 21:11:43.968664 master-0 kubenswrapper[31456]: I0312 21:11:43.968345 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_3a18cac8a90d6913a6a0391d805cddc9/startup-monitor/0.log"
Mar 12 21:11:43.968664 master-0 kubenswrapper[31456]: I0312 21:11:43.968468 31456 generic.go:334] "Generic (PLEG): container finished" podID="3a18cac8a90d6913a6a0391d805cddc9" containerID="5602108289e0d6fb3ee47faac9e66b04faa7f735fa58b959b3373d903df0c765" exitCode=137
Mar 12 21:11:43.968664 master-0 kubenswrapper[31456]: I0312 21:11:43.968590 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 21:11:43.968664 master-0 kubenswrapper[31456]: I0312 21:11:43.968619 31456 scope.go:117] "RemoveContainer" containerID="5602108289e0d6fb3ee47faac9e66b04faa7f735fa58b959b3373d903df0c765"
Mar 12 21:11:43.977932 master-0 kubenswrapper[31456]: I0312 21:11:43.976796 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4" event={"ID":"521ea6ff-1c6e-4633-8ded-b0ba87ab72b2","Type":"ContainerStarted","Data":"8d347aa327bc9b9e2b9b05de4cf3d13a64c926e33e8387eb727476f6557a68e5"}
Mar 12 21:11:43.977932 master-0 kubenswrapper[31456]: I0312 21:11:43.977193 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4"
Mar 12 21:11:43.978820 master-0 kubenswrapper[31456]: I0312 21:11:43.978745 31456 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 21:11:43.997552 master-0 kubenswrapper[31456]: I0312 21:11:43.997470 31456 scope.go:117] "RemoveContainer" containerID="5602108289e0d6fb3ee47faac9e66b04faa7f735fa58b959b3373d903df0c765"
Mar 12 21:11:43.998644 master-0 kubenswrapper[31456]: E0312 21:11:43.998550 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5602108289e0d6fb3ee47faac9e66b04faa7f735fa58b959b3373d903df0c765\": container with ID starting with 5602108289e0d6fb3ee47faac9e66b04faa7f735fa58b959b3373d903df0c765 not found: ID does not exist" containerID="5602108289e0d6fb3ee47faac9e66b04faa7f735fa58b959b3373d903df0c765"
Mar 12 21:11:43.998992 master-0 kubenswrapper[31456]: I0312 21:11:43.998639 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5602108289e0d6fb3ee47faac9e66b04faa7f735fa58b959b3373d903df0c765"} err="failed to get container status \"5602108289e0d6fb3ee47faac9e66b04faa7f735fa58b959b3373d903df0c765\": rpc error: code = NotFound desc = could not find container \"5602108289e0d6fb3ee47faac9e66b04faa7f735fa58b959b3373d903df0c765\": container with ID starting with 5602108289e0d6fb3ee47faac9e66b04faa7f735fa58b959b3373d903df0c765 not found: ID does not exist"
Mar 12 21:11:44.029518 master-0 kubenswrapper[31456]: I0312 21:11:44.029327 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4" podStartSLOduration=2.611981803 podStartE2EDuration="7.029302327s" podCreationTimestamp="2026-03-12 21:11:37 +0000 UTC" firstStartedPulling="2026-03-12 21:11:38.037201412 +0000 UTC m=+159.111806730" lastFinishedPulling="2026-03-12 21:11:42.454521886 +0000 UTC m=+163.529127254" observedRunningTime="2026-03-12 21:11:44.020122003 +0000 UTC m=+165.094727381" watchObservedRunningTime="2026-03-12 21:11:44.029302327 +0000 UTC m=+165.103907695"
Mar 12 21:11:45.023226 master-0 kubenswrapper[31456]: I0312 21:11:45.023034 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Mar 12 21:11:45.185036 master-0 kubenswrapper[31456]: I0312 21:11:45.184978 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a18cac8a90d6913a6a0391d805cddc9" path="/var/lib/kubelet/pods/3a18cac8a90d6913a6a0391d805cddc9/volumes"
Mar 12 21:11:47.601837 master-0 kubenswrapper[31456]: I0312 21:11:47.601750 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-79fcdfff7b-hh7d4"
Mar 12 21:11:56.476774 master-0 kubenswrapper[31456]: I0312 21:11:56.476629 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-crszq"]
Mar 12 21:11:56.478214 master-0 kubenswrapper[31456]: I0312 21:11:56.478145 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-crszq"
Mar 12 21:11:56.482898 master-0 kubenswrapper[31456]: I0312 21:11:56.482840 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-5m6kx"
Mar 12 21:11:56.483325 master-0 kubenswrapper[31456]: I0312 21:11:56.483270 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Mar 12 21:11:56.502357 master-0 kubenswrapper[31456]: I0312 21:11:56.502281 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13f90427-1743-40ac-a1d3-7f945027d76e-host\") pod \"node-ca-crszq\" (UID: \"13f90427-1743-40ac-a1d3-7f945027d76e\") " pod="openshift-image-registry/node-ca-crszq"
Mar 12 21:11:56.502597 master-0 kubenswrapper[31456]: I0312 21:11:56.502514 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch8v2\" (UniqueName: \"kubernetes.io/projected/13f90427-1743-40ac-a1d3-7f945027d76e-kube-api-access-ch8v2\") pod \"node-ca-crszq\" (UID: \"13f90427-1743-40ac-a1d3-7f945027d76e\") " pod="openshift-image-registry/node-ca-crszq"
Mar 12 21:11:56.502922 master-0 kubenswrapper[31456]: I0312 21:11:56.502625 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/13f90427-1743-40ac-a1d3-7f945027d76e-serviceca\") pod \"node-ca-crszq\" (UID: \"13f90427-1743-40ac-a1d3-7f945027d76e\") " pod="openshift-image-registry/node-ca-crszq"
Mar 12 21:11:56.604659 master-0 kubenswrapper[31456]: I0312 21:11:56.604579 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13f90427-1743-40ac-a1d3-7f945027d76e-host\") pod \"node-ca-crszq\" (UID: \"13f90427-1743-40ac-a1d3-7f945027d76e\") " pod="openshift-image-registry/node-ca-crszq"
Mar 12 21:11:56.604917 master-0 kubenswrapper[31456]: I0312 21:11:56.604709 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch8v2\" (UniqueName: \"kubernetes.io/projected/13f90427-1743-40ac-a1d3-7f945027d76e-kube-api-access-ch8v2\") pod \"node-ca-crszq\" (UID: \"13f90427-1743-40ac-a1d3-7f945027d76e\") " pod="openshift-image-registry/node-ca-crszq"
Mar 12 21:11:56.604917 master-0 kubenswrapper[31456]: I0312 21:11:56.604711 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13f90427-1743-40ac-a1d3-7f945027d76e-host\") pod \"node-ca-crszq\" (UID: \"13f90427-1743-40ac-a1d3-7f945027d76e\") " pod="openshift-image-registry/node-ca-crszq"
Mar 12 21:11:56.604917 master-0 kubenswrapper[31456]: I0312 21:11:56.604848 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/13f90427-1743-40ac-a1d3-7f945027d76e-serviceca\") pod \"node-ca-crszq\" (UID: \"13f90427-1743-40ac-a1d3-7f945027d76e\") " pod="openshift-image-registry/node-ca-crszq"
Mar 12 21:11:56.605639 master-0 kubenswrapper[31456]: I0312 21:11:56.605593 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/13f90427-1743-40ac-a1d3-7f945027d76e-serviceca\") pod \"node-ca-crszq\" (UID: \"13f90427-1743-40ac-a1d3-7f945027d76e\") " pod="openshift-image-registry/node-ca-crszq"
Mar 12 21:11:56.626646 master-0 kubenswrapper[31456]: I0312 21:11:56.626580 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch8v2\" (UniqueName: \"kubernetes.io/projected/13f90427-1743-40ac-a1d3-7f945027d76e-kube-api-access-ch8v2\") pod \"node-ca-crszq\" (UID: \"13f90427-1743-40ac-a1d3-7f945027d76e\") " pod="openshift-image-registry/node-ca-crszq"
Mar 12 21:11:56.814780 master-0 kubenswrapper[31456]: I0312 21:11:56.814704 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-crszq"
Mar 12 21:11:56.847313 master-0 kubenswrapper[31456]: W0312 21:11:56.847218 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13f90427_1743_40ac_a1d3_7f945027d76e.slice/crio-b21820f80dcd855758c800e84e8c4b78e3c17436ad91fe790b057bd8c8ea8850 WatchSource:0}: Error finding container b21820f80dcd855758c800e84e8c4b78e3c17436ad91fe790b057bd8c8ea8850: Status 404 returned error can't find the container with id b21820f80dcd855758c800e84e8c4b78e3c17436ad91fe790b057bd8c8ea8850
Mar 12 21:11:57.105747 master-0 kubenswrapper[31456]: I0312 21:11:57.105531 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-crszq" event={"ID":"13f90427-1743-40ac-a1d3-7f945027d76e","Type":"ContainerStarted","Data":"b21820f80dcd855758c800e84e8c4b78e3c17436ad91fe790b057bd8c8ea8850"}
Mar 12 21:12:00.135220 master-0 kubenswrapper[31456]: I0312 21:12:00.135130 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-crszq" event={"ID":"13f90427-1743-40ac-a1d3-7f945027d76e","Type":"ContainerStarted","Data":"6b5f8f6277128f8c7f3448eef80b9b4539ca442e84c6e8992e3906858656d0ea"}
Mar 12 21:12:00.168837 master-0 kubenswrapper[31456]: I0312 21:12:00.168677 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-crszq" podStartSLOduration=1.9425289719999999 podStartE2EDuration="4.168648146s" podCreationTimestamp="2026-03-12 21:11:56 +0000 UTC" firstStartedPulling="2026-03-12 21:11:56.850197051 +0000 UTC m=+177.924802419" lastFinishedPulling="2026-03-12 21:11:59.076316265 +0000 UTC m=+180.150921593" observedRunningTime="2026-03-12 21:12:00.164357941 +0000 UTC m=+181.238963359" watchObservedRunningTime="2026-03-12 21:12:00.168648146 +0000 UTC m=+181.243253514"
Mar 12 21:12:00.204989 master-0 kubenswrapper[31456]: I0312 21:12:00.204918 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-6ccccb478b-5r76x"
Mar 12 21:12:00.205432 master-0 kubenswrapper[31456]: I0312 21:12:00.205398 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-6ccccb478b-5r76x"
Mar 12 21:12:20.230230 master-0 kubenswrapper[31456]: I0312 21:12:20.230115 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-6ccccb478b-5r76x"
Mar 12 21:12:20.239203 master-0 kubenswrapper[31456]: I0312 21:12:20.239133 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-6ccccb478b-5r76x"
Mar 12 21:12:35.393654 master-0 kubenswrapper[31456]: I0312 21:12:35.393586 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-s6flb"]
Mar 12 21:12:35.397611 master-0 kubenswrapper[31456]: I0312 21:12:35.397553 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-s6flb"
Mar 12 21:12:35.400655 master-0 kubenswrapper[31456]: I0312 21:12:35.400611 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist"
Mar 12 21:12:35.400854 master-0 kubenswrapper[31456]: I0312 21:12:35.400785 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-jfnzs"
Mar 12 21:12:35.438172 master-0 kubenswrapper[31456]: I0312 21:12:35.437541 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/66a747ac-6702-47d8-b2e5-a7d9ad827732-ready\") pod \"cni-sysctl-allowlist-ds-s6flb\" (UID: \"66a747ac-6702-47d8-b2e5-a7d9ad827732\") " pod="openshift-multus/cni-sysctl-allowlist-ds-s6flb"
Mar 12 21:12:35.438172 master-0 kubenswrapper[31456]: I0312 21:12:35.437627 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/66a747ac-6702-47d8-b2e5-a7d9ad827732-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-s6flb\" (UID: \"66a747ac-6702-47d8-b2e5-a7d9ad827732\") " pod="openshift-multus/cni-sysctl-allowlist-ds-s6flb"
Mar 12 21:12:35.438172 master-0 kubenswrapper[31456]: I0312 21:12:35.437673 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/66a747ac-6702-47d8-b2e5-a7d9ad827732-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-s6flb\" (UID: \"66a747ac-6702-47d8-b2e5-a7d9ad827732\") " pod="openshift-multus/cni-sysctl-allowlist-ds-s6flb"
Mar 12 21:12:35.438172 master-0 kubenswrapper[31456]: I0312 21:12:35.437688 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64zp8\" (UniqueName: \"kubernetes.io/projected/66a747ac-6702-47d8-b2e5-a7d9ad827732-kube-api-access-64zp8\") pod \"cni-sysctl-allowlist-ds-s6flb\" (UID: \"66a747ac-6702-47d8-b2e5-a7d9ad827732\") " pod="openshift-multus/cni-sysctl-allowlist-ds-s6flb"
Mar 12 21:12:35.538753 master-0 kubenswrapper[31456]: I0312 21:12:35.538684 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/66a747ac-6702-47d8-b2e5-a7d9ad827732-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-s6flb\" (UID: \"66a747ac-6702-47d8-b2e5-a7d9ad827732\") " pod="openshift-multus/cni-sysctl-allowlist-ds-s6flb"
Mar 12 21:12:35.538753 master-0 kubenswrapper[31456]: I0312 21:12:35.538732 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64zp8\" (UniqueName: \"kubernetes.io/projected/66a747ac-6702-47d8-b2e5-a7d9ad827732-kube-api-access-64zp8\") pod \"cni-sysctl-allowlist-ds-s6flb\" (UID: \"66a747ac-6702-47d8-b2e5-a7d9ad827732\") " pod="openshift-multus/cni-sysctl-allowlist-ds-s6flb"
Mar 12 21:12:35.539135 master-0 kubenswrapper[31456]: I0312 21:12:35.539104 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/66a747ac-6702-47d8-b2e5-a7d9ad827732-ready\") pod \"cni-sysctl-allowlist-ds-s6flb\" (UID: \"66a747ac-6702-47d8-b2e5-a7d9ad827732\") " pod="openshift-multus/cni-sysctl-allowlist-ds-s6flb"
Mar 12 21:12:35.539438 master-0 kubenswrapper[31456]: I0312 21:12:35.539393 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/66a747ac-6702-47d8-b2e5-a7d9ad827732-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-s6flb\" (UID: \"66a747ac-6702-47d8-b2e5-a7d9ad827732\") " pod="openshift-multus/cni-sysctl-allowlist-ds-s6flb"
Mar 12 21:12:35.539489 master-0 kubenswrapper[31456]: I0312 21:12:35.539429 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/66a747ac-6702-47d8-b2e5-a7d9ad827732-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-s6flb\" (UID: \"66a747ac-6702-47d8-b2e5-a7d9ad827732\") " pod="openshift-multus/cni-sysctl-allowlist-ds-s6flb"
Mar 12 21:12:35.539489 master-0 kubenswrapper[31456]: I0312 21:12:35.539413 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/66a747ac-6702-47d8-b2e5-a7d9ad827732-ready\") pod \"cni-sysctl-allowlist-ds-s6flb\" (UID: \"66a747ac-6702-47d8-b2e5-a7d9ad827732\") " pod="openshift-multus/cni-sysctl-allowlist-ds-s6flb"
Mar 12 21:12:35.539628 master-0 kubenswrapper[31456]: I0312 21:12:35.539592 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/66a747ac-6702-47d8-b2e5-a7d9ad827732-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-s6flb\" (UID: \"66a747ac-6702-47d8-b2e5-a7d9ad827732\") " pod="openshift-multus/cni-sysctl-allowlist-ds-s6flb"
Mar 12 21:12:35.562540 master-0 kubenswrapper[31456]: I0312 21:12:35.562480 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64zp8\" (UniqueName: \"kubernetes.io/projected/66a747ac-6702-47d8-b2e5-a7d9ad827732-kube-api-access-64zp8\") pod \"cni-sysctl-allowlist-ds-s6flb\" (UID: \"66a747ac-6702-47d8-b2e5-a7d9ad827732\") " pod="openshift-multus/cni-sysctl-allowlist-ds-s6flb"
Mar 12 21:12:35.724665 master-0 kubenswrapper[31456]: I0312 21:12:35.724530 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-s6flb"
Mar 12 21:12:35.764175 master-0 kubenswrapper[31456]: W0312 21:12:35.764086 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a747ac_6702_47d8_b2e5_a7d9ad827732.slice/crio-05f4f14ed0baadf172821610677e739c4e402a42a643372d5a67655c12f69617 WatchSource:0}: Error finding container 05f4f14ed0baadf172821610677e739c4e402a42a643372d5a67655c12f69617: Status 404 returned error can't find the container with id 05f4f14ed0baadf172821610677e739c4e402a42a643372d5a67655c12f69617
Mar 12 21:12:36.462473 master-0 kubenswrapper[31456]: I0312 21:12:36.462379 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-s6flb" event={"ID":"66a747ac-6702-47d8-b2e5-a7d9ad827732","Type":"ContainerStarted","Data":"64b9a05770bb310bea0404d8355c2aa38ee422f76a4b2231f1bff9d613de0576"}
Mar 12 21:12:36.462473 master-0 kubenswrapper[31456]: I0312 21:12:36.462462 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-s6flb" event={"ID":"66a747ac-6702-47d8-b2e5-a7d9ad827732","Type":"ContainerStarted","Data":"05f4f14ed0baadf172821610677e739c4e402a42a643372d5a67655c12f69617"}
Mar 12 21:12:36.463621 master-0 kubenswrapper[31456]: I0312 21:12:36.462674 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-s6flb"
Mar 12 21:12:36.498393 master-0 kubenswrapper[31456]: I0312 21:12:36.498254 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-s6flb" podStartSLOduration=1.498216878 podStartE2EDuration="1.498216878s" podCreationTimestamp="2026-03-12 21:12:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:12:36.492499658 +0000 UTC m=+217.567105046" watchObservedRunningTime="2026-03-12 21:12:36.498216878 +0000 UTC m=+217.572822306"
Mar 12 21:12:36.561851 master-0 kubenswrapper[31456]: I0312 21:12:36.556880 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Mar 12 21:12:36.563334 master-0 kubenswrapper[31456]: I0312 21:12:36.562500 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:12:36.565240 master-0 kubenswrapper[31456]: I0312 21:12:36.564971 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Mar 12 21:12:36.565240 master-0 kubenswrapper[31456]: I0312 21:12:36.565054 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric"
Mar 12 21:12:36.565444 master-0 kubenswrapper[31456]: I0312 21:12:36.565282 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Mar 12 21:12:36.565444 master-0 kubenswrapper[31456]: I0312 21:12:36.565342 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Mar 12 21:12:36.565444 master-0 kubenswrapper[31456]: I0312 21:12:36.565419 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Mar 12 21:12:36.565944 master-0 kubenswrapper[31456]: I0312 21:12:36.565891 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Mar 12 21:12:36.582727 master-0 kubenswrapper[31456]: I0312 21:12:36.582677 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Mar 12 21:12:36.586417 master-0 kubenswrapper[31456]: I0312 21:12:36.586371 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Mar 12 21:12:36.587511 master-0 kubenswrapper[31456]: I0312 21:12:36.587449 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Mar 12 21:12:36.655038 master-0 kubenswrapper[31456]: I0312 21:12:36.654969 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:12:36.655247 master-0 kubenswrapper[31456]: I0312 21:12:36.655057 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c3679eeb-ec01-49e3-9049-faf3f0235ea0-config-out\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:12:36.655247 master-0 kubenswrapper[31456]: I0312 21:12:36.655091 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3679eeb-ec01-49e3-9049-faf3f0235ea0-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:12:36.655247 master-0 kubenswrapper[31456]: I0312 21:12:36.655119 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:12:36.655247 master-0 kubenswrapper[31456]: I0312 21:12:36.655143 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:12:36.655247 master-0 kubenswrapper[31456]: I0312 21:12:36.655181 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-web-config\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:12:36.655407 master-0 kubenswrapper[31456]: I0312 21:12:36.655317 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-config-volume\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:12:36.655440 master-0 kubenswrapper[31456]: I0312 21:12:36.655422 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c3679eeb-ec01-49e3-9049-faf3f0235ea0-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:12:36.655546 master-0 kubenswrapper[31456]: I0312 21:12:36.655503 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c3679eeb-ec01-49e3-9049-faf3f0235ea0-tls-assets\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:12:36.655607 master-0 kubenswrapper[31456]: I0312 21:12:36.655578 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:12:36.655710 master-0 kubenswrapper[31456]: I0312 21:12:36.655679 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzk5p\" (UniqueName: \"kubernetes.io/projected/c3679eeb-ec01-49e3-9049-faf3f0235ea0-kube-api-access-gzk5p\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:12:36.655767 master-0 kubenswrapper[31456]: I0312 21:12:36.655715 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/c3679eeb-ec01-49e3-9049-faf3f0235ea0-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:12:36.757323 master-0 kubenswrapper[31456]: I0312 21:12:36.757183 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3679eeb-ec01-49e3-9049-faf3f0235ea0-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:12:36.757323 master-0 kubenswrapper[31456]: I0312 21:12:36.757244 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:12:36.757323 master-0 kubenswrapper[31456]: I0312 21:12:36.757270 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:12:36.757323 master-0 kubenswrapper[31456]: I0312 21:12:36.757292 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-web-config\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:12:36.757323 master-0 kubenswrapper[31456]: I0312 21:12:36.757314 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-config-volume\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:12:36.757721 master-0 kubenswrapper[31456]: I0312 21:12:36.757348 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c3679eeb-ec01-49e3-9049-faf3f0235ea0-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " 
pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:12:36.757721 master-0 kubenswrapper[31456]: E0312 21:12:36.757586 31456 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found
Mar 12 21:12:36.757721 master-0 kubenswrapper[31456]: I0312 21:12:36.757630 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c3679eeb-ec01-49e3-9049-faf3f0235ea0-tls-assets\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:12:36.757721 master-0 kubenswrapper[31456]: I0312 21:12:36.757672 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:12:36.757721 master-0 kubenswrapper[31456]: I0312 21:12:36.757706 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzk5p\" (UniqueName: \"kubernetes.io/projected/c3679eeb-ec01-49e3-9049-faf3f0235ea0-kube-api-access-gzk5p\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:12:36.757980 master-0 kubenswrapper[31456]: E0312 21:12:36.757754 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-secret-alertmanager-main-tls podName:c3679eeb-ec01-49e3-9049-faf3f0235ea0 nodeName:}" failed. No retries permitted until 2026-03-12 21:12:37.257718983 +0000 UTC m=+218.332324341 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "c3679eeb-ec01-49e3-9049-faf3f0235ea0") : secret "alertmanager-main-tls" not found
Mar 12 21:12:36.757980 master-0 kubenswrapper[31456]: I0312 21:12:36.757801 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/c3679eeb-ec01-49e3-9049-faf3f0235ea0-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:12:36.758159 master-0 kubenswrapper[31456]: I0312 21:12:36.757991 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:12:36.758159 master-0 kubenswrapper[31456]: I0312 21:12:36.758096 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c3679eeb-ec01-49e3-9049-faf3f0235ea0-config-out\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:12:36.758626 master-0 kubenswrapper[31456]: I0312 21:12:36.758595 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c3679eeb-ec01-49e3-9049-faf3f0235ea0-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:12:36.759633 master-0 kubenswrapper[31456]: I0312 21:12:36.758981 31456 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3679eeb-ec01-49e3-9049-faf3f0235ea0-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 21:12:36.759633 master-0 kubenswrapper[31456]: I0312 21:12:36.759228 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/c3679eeb-ec01-49e3-9049-faf3f0235ea0-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 21:12:36.762492 master-0 kubenswrapper[31456]: I0312 21:12:36.762442 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c3679eeb-ec01-49e3-9049-faf3f0235ea0-tls-assets\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 21:12:36.763657 master-0 kubenswrapper[31456]: I0312 21:12:36.763608 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 21:12:36.763720 master-0 kubenswrapper[31456]: I0312 21:12:36.763649 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-web-config\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 21:12:36.764193 master-0 kubenswrapper[31456]: I0312 
21:12:36.764142 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c3679eeb-ec01-49e3-9049-faf3f0235ea0-config-out\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 21:12:36.765467 master-0 kubenswrapper[31456]: I0312 21:12:36.765426 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-config-volume\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 21:12:36.765993 master-0 kubenswrapper[31456]: I0312 21:12:36.765956 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 21:12:36.766679 master-0 kubenswrapper[31456]: I0312 21:12:36.766642 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 21:12:36.782047 master-0 kubenswrapper[31456]: I0312 21:12:36.781996 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzk5p\" (UniqueName: \"kubernetes.io/projected/c3679eeb-ec01-49e3-9049-faf3f0235ea0-kube-api-access-gzk5p\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 21:12:37.263721 
master-0 kubenswrapper[31456]: I0312 21:12:37.263636 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:12:37.264456 master-0 kubenswrapper[31456]: E0312 21:12:37.264419 31456 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found
Mar 12 21:12:37.264530 master-0 kubenswrapper[31456]: E0312 21:12:37.264464 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-secret-alertmanager-main-tls podName:c3679eeb-ec01-49e3-9049-faf3f0235ea0 nodeName:}" failed. No retries permitted until 2026-03-12 21:12:38.264451836 +0000 UTC m=+219.339057164 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "c3679eeb-ec01-49e3-9049-faf3f0235ea0") : secret "alertmanager-main-tls" not found
Mar 12 21:12:37.509688 master-0 kubenswrapper[31456]: I0312 21:12:37.508650 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-s6flb"
Mar 12 21:12:38.292030 master-0 kubenswrapper[31456]: I0312 21:12:38.291931 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:12:38.292330 master-0 kubenswrapper[31456]: E0312 21:12:38.292190 31456 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found
Mar 12 21:12:38.292330 master-0 kubenswrapper[31456]: E0312 21:12:38.292271 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-secret-alertmanager-main-tls podName:c3679eeb-ec01-49e3-9049-faf3f0235ea0 nodeName:}" failed. No retries permitted until 2026-03-12 21:12:40.292244951 +0000 UTC m=+221.366850319 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "c3679eeb-ec01-49e3-9049-faf3f0235ea0") : secret "alertmanager-main-tls" not found
Mar 12 21:12:38.387150 master-0 kubenswrapper[31456]: I0312 21:12:38.387065 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-s6flb"]
Mar 12 21:12:39.486228 master-0 kubenswrapper[31456]: I0312 21:12:39.486074 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-s6flb" podUID="66a747ac-6702-47d8-b2e5-a7d9ad827732" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://64b9a05770bb310bea0404d8355c2aa38ee422f76a4b2231f1bff9d613de0576" gracePeriod=30
Mar 12 21:12:40.181683 master-0 kubenswrapper[31456]: I0312 21:12:40.181603 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-68cf7597fb-d9f9b"]
Mar 12 21:12:40.183993 master-0 kubenswrapper[31456]: I0312 21:12:40.183946 31456 util.go:30] "No sandbox for pod can be found. 
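The repeated MountVolume.SetUp failures above are retried with durationBeforeRetry values of 500ms, then 1s, then 2s, which is consistent with an exponential backoff that starts at 500ms and doubles per attempt (the kubelet's nestedpendingoperations also caps the delay, which these three attempts never reach). A minimal sketch of that schedule, assuming a plain doubling policy; the function name and parameters are ours, not the kubelet's:

```python
# Sketch of the retry schedule observed in the log:
# durationBeforeRetry 500ms -> 1s -> 2s. Assumes a simple
# exponential backoff (initial 0.5s, factor 2); the real kubelet
# additionally caps the delay, which is never hit here.
def backoff_delays(initial=0.5, factor=2.0, attempts=3):
    delays = []
    d = initial
    for _ in range(attempts):
        delays.append(d)
        d *= factor
    return delays

print(backoff_delays())  # [0.5, 1.0, 2.0] -- matches the logged delays
```

Once the `alertmanager-main-tls` secret becomes available, the next scheduled retry succeeds, as the later `MountVolume.SetUp succeeded` entry for that volume at 21:12:40.343 shows.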
Need to start a new one" pod="openshift-monitoring/telemeter-client-68cf7597fb-d9f9b" Mar 12 21:12:40.187300 master-0 kubenswrapper[31456]: I0312 21:12:40.187241 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 12 21:12:40.187843 master-0 kubenswrapper[31456]: I0312 21:12:40.187765 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Mar 12 21:12:40.188228 master-0 kubenswrapper[31456]: I0312 21:12:40.188180 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 12 21:12:40.189196 master-0 kubenswrapper[31456]: I0312 21:12:40.189148 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Mar 12 21:12:40.189740 master-0 kubenswrapper[31456]: I0312 21:12:40.189698 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Mar 12 21:12:40.195860 master-0 kubenswrapper[31456]: I0312 21:12:40.195750 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Mar 12 21:12:40.210372 master-0 kubenswrapper[31456]: I0312 21:12:40.210310 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-68cf7597fb-d9f9b"] Mar 12 21:12:40.227002 master-0 kubenswrapper[31456]: I0312 21:12:40.226912 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8p86\" (UniqueName: \"kubernetes.io/projected/7c2c44ec-bacf-4550-80aa-448a3a9955b3-kube-api-access-j8p86\") pod \"telemeter-client-68cf7597fb-d9f9b\" (UID: \"7c2c44ec-bacf-4550-80aa-448a3a9955b3\") " pod="openshift-monitoring/telemeter-client-68cf7597fb-d9f9b" Mar 12 21:12:40.227165 master-0 kubenswrapper[31456]: I0312 
21:12:40.227093 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/7c2c44ec-bacf-4550-80aa-448a3a9955b3-secret-telemeter-client\") pod \"telemeter-client-68cf7597fb-d9f9b\" (UID: \"7c2c44ec-bacf-4550-80aa-448a3a9955b3\") " pod="openshift-monitoring/telemeter-client-68cf7597fb-d9f9b" Mar 12 21:12:40.227165 master-0 kubenswrapper[31456]: I0312 21:12:40.227149 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c2c44ec-bacf-4550-80aa-448a3a9955b3-telemeter-trusted-ca-bundle\") pod \"telemeter-client-68cf7597fb-d9f9b\" (UID: \"7c2c44ec-bacf-4550-80aa-448a3a9955b3\") " pod="openshift-monitoring/telemeter-client-68cf7597fb-d9f9b" Mar 12 21:12:40.227264 master-0 kubenswrapper[31456]: I0312 21:12:40.227186 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/7c2c44ec-bacf-4550-80aa-448a3a9955b3-federate-client-tls\") pod \"telemeter-client-68cf7597fb-d9f9b\" (UID: \"7c2c44ec-bacf-4550-80aa-448a3a9955b3\") " pod="openshift-monitoring/telemeter-client-68cf7597fb-d9f9b" Mar 12 21:12:40.227381 master-0 kubenswrapper[31456]: I0312 21:12:40.227341 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7c2c44ec-bacf-4550-80aa-448a3a9955b3-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-68cf7597fb-d9f9b\" (UID: \"7c2c44ec-bacf-4550-80aa-448a3a9955b3\") " pod="openshift-monitoring/telemeter-client-68cf7597fb-d9f9b" Mar 12 21:12:40.227436 master-0 kubenswrapper[31456]: I0312 21:12:40.227407 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c2c44ec-bacf-4550-80aa-448a3a9955b3-serving-certs-ca-bundle\") pod \"telemeter-client-68cf7597fb-d9f9b\" (UID: \"7c2c44ec-bacf-4550-80aa-448a3a9955b3\") " pod="openshift-monitoring/telemeter-client-68cf7597fb-d9f9b" Mar 12 21:12:40.228023 master-0 kubenswrapper[31456]: I0312 21:12:40.227893 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/7c2c44ec-bacf-4550-80aa-448a3a9955b3-telemeter-client-tls\") pod \"telemeter-client-68cf7597fb-d9f9b\" (UID: \"7c2c44ec-bacf-4550-80aa-448a3a9955b3\") " pod="openshift-monitoring/telemeter-client-68cf7597fb-d9f9b" Mar 12 21:12:40.228097 master-0 kubenswrapper[31456]: I0312 21:12:40.228066 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7c2c44ec-bacf-4550-80aa-448a3a9955b3-metrics-client-ca\") pod \"telemeter-client-68cf7597fb-d9f9b\" (UID: \"7c2c44ec-bacf-4550-80aa-448a3a9955b3\") " pod="openshift-monitoring/telemeter-client-68cf7597fb-d9f9b" Mar 12 21:12:40.329396 master-0 kubenswrapper[31456]: I0312 21:12:40.329287 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7c2c44ec-bacf-4550-80aa-448a3a9955b3-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-68cf7597fb-d9f9b\" (UID: \"7c2c44ec-bacf-4550-80aa-448a3a9955b3\") " pod="openshift-monitoring/telemeter-client-68cf7597fb-d9f9b" Mar 12 21:12:40.329734 master-0 kubenswrapper[31456]: I0312 21:12:40.329518 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c2c44ec-bacf-4550-80aa-448a3a9955b3-serving-certs-ca-bundle\") pod 
\"telemeter-client-68cf7597fb-d9f9b\" (UID: \"7c2c44ec-bacf-4550-80aa-448a3a9955b3\") " pod="openshift-monitoring/telemeter-client-68cf7597fb-d9f9b" Mar 12 21:12:40.329734 master-0 kubenswrapper[31456]: I0312 21:12:40.329584 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/7c2c44ec-bacf-4550-80aa-448a3a9955b3-telemeter-client-tls\") pod \"telemeter-client-68cf7597fb-d9f9b\" (UID: \"7c2c44ec-bacf-4550-80aa-448a3a9955b3\") " pod="openshift-monitoring/telemeter-client-68cf7597fb-d9f9b" Mar 12 21:12:40.329734 master-0 kubenswrapper[31456]: I0312 21:12:40.329618 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7c2c44ec-bacf-4550-80aa-448a3a9955b3-metrics-client-ca\") pod \"telemeter-client-68cf7597fb-d9f9b\" (UID: \"7c2c44ec-bacf-4550-80aa-448a3a9955b3\") " pod="openshift-monitoring/telemeter-client-68cf7597fb-d9f9b" Mar 12 21:12:40.329734 master-0 kubenswrapper[31456]: I0312 21:12:40.329649 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8p86\" (UniqueName: \"kubernetes.io/projected/7c2c44ec-bacf-4550-80aa-448a3a9955b3-kube-api-access-j8p86\") pod \"telemeter-client-68cf7597fb-d9f9b\" (UID: \"7c2c44ec-bacf-4550-80aa-448a3a9955b3\") " pod="openshift-monitoring/telemeter-client-68cf7597fb-d9f9b" Mar 12 21:12:40.329734 master-0 kubenswrapper[31456]: I0312 21:12:40.329682 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 21:12:40.329734 master-0 kubenswrapper[31456]: I0312 21:12:40.329719 31456 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/7c2c44ec-bacf-4550-80aa-448a3a9955b3-secret-telemeter-client\") pod \"telemeter-client-68cf7597fb-d9f9b\" (UID: \"7c2c44ec-bacf-4550-80aa-448a3a9955b3\") " pod="openshift-monitoring/telemeter-client-68cf7597fb-d9f9b" Mar 12 21:12:40.329734 master-0 kubenswrapper[31456]: I0312 21:12:40.329745 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/7c2c44ec-bacf-4550-80aa-448a3a9955b3-federate-client-tls\") pod \"telemeter-client-68cf7597fb-d9f9b\" (UID: \"7c2c44ec-bacf-4550-80aa-448a3a9955b3\") " pod="openshift-monitoring/telemeter-client-68cf7597fb-d9f9b" Mar 12 21:12:40.330430 master-0 kubenswrapper[31456]: I0312 21:12:40.329767 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c2c44ec-bacf-4550-80aa-448a3a9955b3-telemeter-trusted-ca-bundle\") pod \"telemeter-client-68cf7597fb-d9f9b\" (UID: \"7c2c44ec-bacf-4550-80aa-448a3a9955b3\") " pod="openshift-monitoring/telemeter-client-68cf7597fb-d9f9b" Mar 12 21:12:40.331159 master-0 kubenswrapper[31456]: I0312 21:12:40.331106 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c2c44ec-bacf-4550-80aa-448a3a9955b3-serving-certs-ca-bundle\") pod \"telemeter-client-68cf7597fb-d9f9b\" (UID: \"7c2c44ec-bacf-4550-80aa-448a3a9955b3\") " pod="openshift-monitoring/telemeter-client-68cf7597fb-d9f9b" Mar 12 21:12:40.331934 master-0 kubenswrapper[31456]: I0312 21:12:40.331902 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c2c44ec-bacf-4550-80aa-448a3a9955b3-telemeter-trusted-ca-bundle\") pod \"telemeter-client-68cf7597fb-d9f9b\" (UID: 
\"7c2c44ec-bacf-4550-80aa-448a3a9955b3\") " pod="openshift-monitoring/telemeter-client-68cf7597fb-d9f9b" Mar 12 21:12:40.332902 master-0 kubenswrapper[31456]: I0312 21:12:40.332800 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7c2c44ec-bacf-4550-80aa-448a3a9955b3-metrics-client-ca\") pod \"telemeter-client-68cf7597fb-d9f9b\" (UID: \"7c2c44ec-bacf-4550-80aa-448a3a9955b3\") " pod="openshift-monitoring/telemeter-client-68cf7597fb-d9f9b" Mar 12 21:12:40.335899 master-0 kubenswrapper[31456]: I0312 21:12:40.335848 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7c2c44ec-bacf-4550-80aa-448a3a9955b3-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-68cf7597fb-d9f9b\" (UID: \"7c2c44ec-bacf-4550-80aa-448a3a9955b3\") " pod="openshift-monitoring/telemeter-client-68cf7597fb-d9f9b" Mar 12 21:12:40.338479 master-0 kubenswrapper[31456]: I0312 21:12:40.338394 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/7c2c44ec-bacf-4550-80aa-448a3a9955b3-secret-telemeter-client\") pod \"telemeter-client-68cf7597fb-d9f9b\" (UID: \"7c2c44ec-bacf-4550-80aa-448a3a9955b3\") " pod="openshift-monitoring/telemeter-client-68cf7597fb-d9f9b" Mar 12 21:12:40.338638 master-0 kubenswrapper[31456]: I0312 21:12:40.338482 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/7c2c44ec-bacf-4550-80aa-448a3a9955b3-telemeter-client-tls\") pod \"telemeter-client-68cf7597fb-d9f9b\" (UID: \"7c2c44ec-bacf-4550-80aa-448a3a9955b3\") " pod="openshift-monitoring/telemeter-client-68cf7597fb-d9f9b" Mar 12 21:12:40.338997 master-0 kubenswrapper[31456]: I0312 21:12:40.338945 31456 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/7c2c44ec-bacf-4550-80aa-448a3a9955b3-federate-client-tls\") pod \"telemeter-client-68cf7597fb-d9f9b\" (UID: \"7c2c44ec-bacf-4550-80aa-448a3a9955b3\") " pod="openshift-monitoring/telemeter-client-68cf7597fb-d9f9b" Mar 12 21:12:40.343302 master-0 kubenswrapper[31456]: I0312 21:12:40.343226 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 21:12:40.366942 master-0 kubenswrapper[31456]: I0312 21:12:40.366869 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8p86\" (UniqueName: \"kubernetes.io/projected/7c2c44ec-bacf-4550-80aa-448a3a9955b3-kube-api-access-j8p86\") pod \"telemeter-client-68cf7597fb-d9f9b\" (UID: \"7c2c44ec-bacf-4550-80aa-448a3a9955b3\") " pod="openshift-monitoring/telemeter-client-68cf7597fb-d9f9b" Mar 12 21:12:40.489059 master-0 kubenswrapper[31456]: I0312 21:12:40.488875 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 12 21:12:40.511957 master-0 kubenswrapper[31456]: I0312 21:12:40.511874 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-68cf7597fb-d9f9b"
Mar 12 21:12:41.007280 master-0 kubenswrapper[31456]: I0312 21:12:41.006966 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Mar 12 21:12:41.014318 master-0 kubenswrapper[31456]: W0312 21:12:41.014170 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3679eeb_ec01_49e3_9049_faf3f0235ea0.slice/crio-0721a9f0f4a2cf837622984b433d4b7055403c71a199e65fcd75b5a697481acb WatchSource:0}: Error finding container 0721a9f0f4a2cf837622984b433d4b7055403c71a199e65fcd75b5a697481acb: Status 404 returned error can't find the container with id 0721a9f0f4a2cf837622984b433d4b7055403c71a199e65fcd75b5a697481acb
Mar 12 21:12:41.109744 master-0 kubenswrapper[31456]: I0312 21:12:41.109661 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-68cf7597fb-d9f9b"]
Mar 12 21:12:41.110428 master-0 kubenswrapper[31456]: W0312 21:12:41.110215 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c2c44ec_bacf_4550_80aa_448a3a9955b3.slice/crio-67411b842d1f8c91e783b7f8582b7eb6b49a872b82efbe537f42ee298c3c9b29 WatchSource:0}: Error finding container 67411b842d1f8c91e783b7f8582b7eb6b49a872b82efbe537f42ee298c3c9b29: Status 404 returned error can't find the container with id 67411b842d1f8c91e783b7f8582b7eb6b49a872b82efbe537f42ee298c3c9b29
Mar 12 21:12:41.513048 master-0 kubenswrapper[31456]: I0312 21:12:41.512974 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-68cf7597fb-d9f9b" event={"ID":"7c2c44ec-bacf-4550-80aa-448a3a9955b3","Type":"ContainerStarted","Data":"67411b842d1f8c91e783b7f8582b7eb6b49a872b82efbe537f42ee298c3c9b29"}
Mar 12 21:12:41.514796 master-0 kubenswrapper[31456]: I0312 21:12:41.514745 31456 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c3679eeb-ec01-49e3-9049-faf3f0235ea0","Type":"ContainerStarted","Data":"0721a9f0f4a2cf837622984b433d4b7055403c71a199e65fcd75b5a697481acb"}
Mar 12 21:12:43.553755 master-0 kubenswrapper[31456]: I0312 21:12:43.553691 31456 generic.go:334] "Generic (PLEG): container finished" podID="c3679eeb-ec01-49e3-9049-faf3f0235ea0" containerID="a94f9e91adee74d6313ee6b5492bf9a1186acae682e549d2e594a4cf90cc1041" exitCode=0
Mar 12 21:12:43.553755 master-0 kubenswrapper[31456]: I0312 21:12:43.553761 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c3679eeb-ec01-49e3-9049-faf3f0235ea0","Type":"ContainerDied","Data":"a94f9e91adee74d6313ee6b5492bf9a1186acae682e549d2e594a4cf90cc1041"}
Mar 12 21:12:43.557859 master-0 kubenswrapper[31456]: I0312 21:12:43.557761 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-68cf7597fb-d9f9b" event={"ID":"7c2c44ec-bacf-4550-80aa-448a3a9955b3","Type":"ContainerStarted","Data":"350eb5cb8e81bfe4369b065356ef85a3ee9f6e54bc657c36cf94d6da03e3ee84"}
Mar 12 21:12:44.560961 master-0 kubenswrapper[31456]: I0312 21:12:44.560499 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-56bbfd46b8-fvq8x"]
Mar 12 21:12:44.580670 master-0 kubenswrapper[31456]: I0312 21:12:44.580512 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-56bbfd46b8-fvq8x"
Mar 12 21:12:44.583641 master-0 kubenswrapper[31456]: I0312 21:12:44.583588 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-56bbfd46b8-fvq8x"]
Mar 12 21:12:44.589002 master-0 kubenswrapper[31456]: I0312 21:12:44.588834 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-68cf7597fb-d9f9b" event={"ID":"7c2c44ec-bacf-4550-80aa-448a3a9955b3","Type":"ContainerStarted","Data":"92343a293fdb68320c42bb0bc29e38320857364bac973c63176699130210e9c7"}
Mar 12 21:12:44.589002 master-0 kubenswrapper[31456]: I0312 21:12:44.588972 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-68cf7597fb-d9f9b" event={"ID":"7c2c44ec-bacf-4550-80aa-448a3a9955b3","Type":"ContainerStarted","Data":"f93dfbb0ca7451bc358722107ea8db0a8b3f58d18912f0b1c5852dd7ab447f19"}
Mar 12 21:12:44.641010 master-0 kubenswrapper[31456]: I0312 21:12:44.640934 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-68cf7597fb-d9f9b" podStartSLOduration=2.416280515 podStartE2EDuration="4.640906892s" podCreationTimestamp="2026-03-12 21:12:40 +0000 UTC" firstStartedPulling="2026-03-12 21:12:41.113283761 +0000 UTC m=+222.187889129" lastFinishedPulling="2026-03-12 21:12:43.337910148 +0000 UTC m=+224.412515506" observedRunningTime="2026-03-12 21:12:44.633135303 +0000 UTC m=+225.707740631" watchObservedRunningTime="2026-03-12 21:12:44.640906892 +0000 UTC m=+225.715512230"
Mar 12 21:12:44.724544 master-0 kubenswrapper[31456]: I0312 21:12:44.724455 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45v5n\" (UniqueName: \"kubernetes.io/projected/cc9afc10-a153-4bca-a4b1-887ced079158-kube-api-access-45v5n\") pod \"multus-admission-controller-56bbfd46b8-fvq8x\" (UID: 
\"cc9afc10-a153-4bca-a4b1-887ced079158\") " pod="openshift-multus/multus-admission-controller-56bbfd46b8-fvq8x" Mar 12 21:12:44.725163 master-0 kubenswrapper[31456]: I0312 21:12:44.725121 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cc9afc10-a153-4bca-a4b1-887ced079158-webhook-certs\") pod \"multus-admission-controller-56bbfd46b8-fvq8x\" (UID: \"cc9afc10-a153-4bca-a4b1-887ced079158\") " pod="openshift-multus/multus-admission-controller-56bbfd46b8-fvq8x" Mar 12 21:12:44.825982 master-0 kubenswrapper[31456]: I0312 21:12:44.825937 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cc9afc10-a153-4bca-a4b1-887ced079158-webhook-certs\") pod \"multus-admission-controller-56bbfd46b8-fvq8x\" (UID: \"cc9afc10-a153-4bca-a4b1-887ced079158\") " pod="openshift-multus/multus-admission-controller-56bbfd46b8-fvq8x" Mar 12 21:12:44.825982 master-0 kubenswrapper[31456]: I0312 21:12:44.825985 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45v5n\" (UniqueName: \"kubernetes.io/projected/cc9afc10-a153-4bca-a4b1-887ced079158-kube-api-access-45v5n\") pod \"multus-admission-controller-56bbfd46b8-fvq8x\" (UID: \"cc9afc10-a153-4bca-a4b1-887ced079158\") " pod="openshift-multus/multus-admission-controller-56bbfd46b8-fvq8x" Mar 12 21:12:44.829653 master-0 kubenswrapper[31456]: I0312 21:12:44.829432 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cc9afc10-a153-4bca-a4b1-887ced079158-webhook-certs\") pod \"multus-admission-controller-56bbfd46b8-fvq8x\" (UID: \"cc9afc10-a153-4bca-a4b1-887ced079158\") " pod="openshift-multus/multus-admission-controller-56bbfd46b8-fvq8x" Mar 12 21:12:44.843827 master-0 kubenswrapper[31456]: I0312 21:12:44.839914 31456 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-45v5n\" (UniqueName: \"kubernetes.io/projected/cc9afc10-a153-4bca-a4b1-887ced079158-kube-api-access-45v5n\") pod \"multus-admission-controller-56bbfd46b8-fvq8x\" (UID: \"cc9afc10-a153-4bca-a4b1-887ced079158\") " pod="openshift-multus/multus-admission-controller-56bbfd46b8-fvq8x" Mar 12 21:12:44.926921 master-0 kubenswrapper[31456]: I0312 21:12:44.926852 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-56bbfd46b8-fvq8x" Mar 12 21:12:45.376021 master-0 kubenswrapper[31456]: I0312 21:12:45.375895 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-56bbfd46b8-fvq8x"] Mar 12 21:12:45.608654 master-0 kubenswrapper[31456]: I0312 21:12:45.608602 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-56bbfd46b8-fvq8x" event={"ID":"cc9afc10-a153-4bca-a4b1-887ced079158","Type":"ContainerStarted","Data":"e988b6598e606d179a4f8da94aabeb4c39d5b88a43943acf7304b18ccc108b45"} Mar 12 21:12:45.728387 master-0 kubenswrapper[31456]: E0312 21:12:45.728323 31456 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="64b9a05770bb310bea0404d8355c2aa38ee422f76a4b2231f1bff9d613de0576" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 21:12:45.729691 master-0 kubenswrapper[31456]: E0312 21:12:45.729630 31456 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="64b9a05770bb310bea0404d8355c2aa38ee422f76a4b2231f1bff9d613de0576" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 21:12:45.732010 master-0 kubenswrapper[31456]: E0312 21:12:45.731944 31456 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="64b9a05770bb310bea0404d8355c2aa38ee422f76a4b2231f1bff9d613de0576" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 21:12:45.732010 master-0 kubenswrapper[31456]: E0312 21:12:45.732003 31456 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-s6flb" podUID="66a747ac-6702-47d8-b2e5-a7d9ad827732" containerName="kube-multus-additional-cni-plugins" Mar 12 21:12:46.622037 master-0 kubenswrapper[31456]: I0312 21:12:46.621970 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-56bbfd46b8-fvq8x" event={"ID":"cc9afc10-a153-4bca-a4b1-887ced079158","Type":"ContainerStarted","Data":"e9274dc67389fe1dc61577747e4b24047621bc78566f7d6bc7b886981bb86577"} Mar 12 21:12:46.622536 master-0 kubenswrapper[31456]: I0312 21:12:46.622052 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-56bbfd46b8-fvq8x" event={"ID":"cc9afc10-a153-4bca-a4b1-887ced079158","Type":"ContainerStarted","Data":"ba4ad5622826cf7fe4574039c9bd390a5969fbceb9727a1a39e64275d04eb508"} Mar 12 21:12:46.628819 master-0 kubenswrapper[31456]: I0312 21:12:46.628745 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c3679eeb-ec01-49e3-9049-faf3f0235ea0","Type":"ContainerStarted","Data":"da24a5560c15bfee8ffdf7a4acad8f836842312957495c1f48a1070c34da3077"} Mar 12 21:12:46.628896 master-0 kubenswrapper[31456]: I0312 21:12:46.628802 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"c3679eeb-ec01-49e3-9049-faf3f0235ea0","Type":"ContainerStarted","Data":"aba40a7cf66ca44db97861ee95162afacf7ae3a9ad8a925702f2cde614084862"} Mar 12 21:12:46.628896 master-0 kubenswrapper[31456]: I0312 21:12:46.628837 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c3679eeb-ec01-49e3-9049-faf3f0235ea0","Type":"ContainerStarted","Data":"ad0441949003a38500f5ae34066530abfc6fc47dcf400d66fda34d620bf71c3c"} Mar 12 21:12:46.628896 master-0 kubenswrapper[31456]: I0312 21:12:46.628850 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c3679eeb-ec01-49e3-9049-faf3f0235ea0","Type":"ContainerStarted","Data":"880d7627641637fe5690f2cb679214e1b7fa5c600afc231ae075e4f697a24048"} Mar 12 21:12:46.628896 master-0 kubenswrapper[31456]: I0312 21:12:46.628862 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c3679eeb-ec01-49e3-9049-faf3f0235ea0","Type":"ContainerStarted","Data":"847509df23dc5f0cd65487a561c834039e5719dbd9aadb73ca1712a834ccf8ce"} Mar 12 21:12:46.649453 master-0 kubenswrapper[31456]: I0312 21:12:46.649383 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-56bbfd46b8-fvq8x" podStartSLOduration=2.6493697320000003 podStartE2EDuration="2.649369732s" podCreationTimestamp="2026-03-12 21:12:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:12:46.646924082 +0000 UTC m=+227.721529420" watchObservedRunningTime="2026-03-12 21:12:46.649369732 +0000 UTC m=+227.723975070" Mar 12 21:12:46.694966 master-0 kubenswrapper[31456]: I0312 21:12:46.694420 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-7769569c45-tgbjx"] Mar 12 21:12:46.694966 master-0 
kubenswrapper[31456]: I0312 21:12:46.694742 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-7769569c45-tgbjx" podUID="b8aa8296-ed9b-4b37-8ab4-791b1342140f" containerName="multus-admission-controller" containerID="cri-o://0801412eec909b7451c3ea16fc183a3c0aa018264741173074d4a6d25bbb8e1c" gracePeriod=30 Mar 12 21:12:46.694966 master-0 kubenswrapper[31456]: I0312 21:12:46.694870 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-7769569c45-tgbjx" podUID="b8aa8296-ed9b-4b37-8ab4-791b1342140f" containerName="kube-rbac-proxy" containerID="cri-o://0217824df4e2de4a6e66903135737bb67e2b0fdba4f510dd20fc536aefc8d881" gracePeriod=30 Mar 12 21:12:47.003369 master-0 kubenswrapper[31456]: I0312 21:12:47.003241 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 12 21:12:47.006016 master-0 kubenswrapper[31456]: I0312 21:12:47.005990 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.009451 master-0 kubenswrapper[31456]: I0312 21:12:47.009359 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Mar 12 21:12:47.009988 master-0 kubenswrapper[31456]: I0312 21:12:47.009543 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Mar 12 21:12:47.009988 master-0 kubenswrapper[31456]: I0312 21:12:47.009778 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Mar 12 21:12:47.009988 master-0 kubenswrapper[31456]: I0312 21:12:47.009899 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Mar 12 21:12:47.010770 master-0 kubenswrapper[31456]: I0312 21:12:47.010414 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Mar 12 21:12:47.010770 master-0 kubenswrapper[31456]: I0312 21:12:47.010546 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Mar 12 21:12:47.010770 master-0 kubenswrapper[31456]: I0312 21:12:47.010647 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Mar 12 21:12:47.010931 master-0 kubenswrapper[31456]: I0312 21:12:47.010904 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Mar 12 21:12:47.013437 master-0 kubenswrapper[31456]: I0312 21:12:47.013370 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Mar 12 21:12:47.014939 master-0 kubenswrapper[31456]: I0312 21:12:47.014514 31456 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-fvjb30sfen171" Mar 12 21:12:47.017729 master-0 kubenswrapper[31456]: I0312 21:12:47.017549 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Mar 12 21:12:47.022727 master-0 kubenswrapper[31456]: I0312 21:12:47.022663 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Mar 12 21:12:47.047785 master-0 kubenswrapper[31456]: I0312 21:12:47.047725 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 12 21:12:47.072941 master-0 kubenswrapper[31456]: I0312 21:12:47.072604 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.072941 master-0 kubenswrapper[31456]: I0312 21:12:47.072676 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.072941 master-0 kubenswrapper[31456]: I0312 21:12:47.072714 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 
21:12:47.072941 master-0 kubenswrapper[31456]: I0312 21:12:47.072742 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.072941 master-0 kubenswrapper[31456]: I0312 21:12:47.072778 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.072941 master-0 kubenswrapper[31456]: I0312 21:12:47.072860 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.073363 master-0 kubenswrapper[31456]: I0312 21:12:47.073042 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.073363 master-0 kubenswrapper[31456]: I0312 21:12:47.073160 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: 
\"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.073363 master-0 kubenswrapper[31456]: I0312 21:12:47.073188 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-web-config\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.073363 master-0 kubenswrapper[31456]: I0312 21:12:47.073212 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt78m\" (UniqueName: \"kubernetes.io/projected/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-kube-api-access-rt78m\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.073363 master-0 kubenswrapper[31456]: I0312 21:12:47.073336 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.073530 master-0 kubenswrapper[31456]: I0312 21:12:47.073443 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.073530 master-0 kubenswrapper[31456]: I0312 21:12:47.073464 31456 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.073530 master-0 kubenswrapper[31456]: I0312 21:12:47.073492 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-config-out\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.073680 master-0 kubenswrapper[31456]: I0312 21:12:47.073543 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.073680 master-0 kubenswrapper[31456]: I0312 21:12:47.073576 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.073680 master-0 kubenswrapper[31456]: I0312 21:12:47.073624 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " 
pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.073680 master-0 kubenswrapper[31456]: I0312 21:12:47.073643 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-config\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.174074 master-0 kubenswrapper[31456]: I0312 21:12:47.174032 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.174074 master-0 kubenswrapper[31456]: I0312 21:12:47.174075 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.174371 master-0 kubenswrapper[31456]: I0312 21:12:47.174102 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.174371 master-0 kubenswrapper[31456]: I0312 21:12:47.174125 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-secret-prometheus-k8s-tls\") pod 
\"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.174371 master-0 kubenswrapper[31456]: I0312 21:12:47.174150 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-web-config\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.174371 master-0 kubenswrapper[31456]: I0312 21:12:47.174195 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt78m\" (UniqueName: \"kubernetes.io/projected/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-kube-api-access-rt78m\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.174371 master-0 kubenswrapper[31456]: I0312 21:12:47.174225 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.174371 master-0 kubenswrapper[31456]: I0312 21:12:47.174251 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.174371 master-0 kubenswrapper[31456]: I0312 21:12:47.174266 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: 
\"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.174371 master-0 kubenswrapper[31456]: I0312 21:12:47.174287 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-config-out\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.174371 master-0 kubenswrapper[31456]: I0312 21:12:47.174303 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.174371 master-0 kubenswrapper[31456]: I0312 21:12:47.174316 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.174371 master-0 kubenswrapper[31456]: I0312 21:12:47.174341 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.174371 master-0 kubenswrapper[31456]: I0312 21:12:47.174357 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-config\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.174371 master-0 kubenswrapper[31456]: I0312 21:12:47.174377 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.174963 master-0 kubenswrapper[31456]: I0312 21:12:47.174395 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.174963 master-0 kubenswrapper[31456]: I0312 21:12:47.174415 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.174963 master-0 kubenswrapper[31456]: I0312 21:12:47.174435 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.175189 master-0 kubenswrapper[31456]: I0312 21:12:47.175158 31456 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.175840 master-0 kubenswrapper[31456]: I0312 21:12:47.175821 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.178765 master-0 kubenswrapper[31456]: I0312 21:12:47.178723 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.178969 master-0 kubenswrapper[31456]: I0312 21:12:47.178930 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.179448 master-0 kubenswrapper[31456]: I0312 21:12:47.179403 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.180414 master-0 kubenswrapper[31456]: I0312 21:12:47.180379 31456 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.181859 master-0 kubenswrapper[31456]: I0312 21:12:47.181820 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.182098 master-0 kubenswrapper[31456]: I0312 21:12:47.182063 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.182798 master-0 kubenswrapper[31456]: I0312 21:12:47.182762 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.184347 master-0 kubenswrapper[31456]: I0312 21:12:47.183549 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 
21:12:47.184347 master-0 kubenswrapper[31456]: I0312 21:12:47.184318 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-config\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.185734 master-0 kubenswrapper[31456]: I0312 21:12:47.185696 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-config-out\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.189965 master-0 kubenswrapper[31456]: I0312 21:12:47.187447 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.189965 master-0 kubenswrapper[31456]: I0312 21:12:47.188017 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-web-config\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.190242 master-0 kubenswrapper[31456]: I0312 21:12:47.190058 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.201832 master-0 kubenswrapper[31456]: I0312 21:12:47.198602 31456 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.201832 master-0 kubenswrapper[31456]: I0312 21:12:47.199701 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.202987 master-0 kubenswrapper[31456]: I0312 21:12:47.202635 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt78m\" (UniqueName: \"kubernetes.io/projected/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-kube-api-access-rt78m\") pod \"prometheus-k8s-0\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.326321 master-0 kubenswrapper[31456]: I0312 21:12:47.326239 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:12:47.641939 master-0 kubenswrapper[31456]: I0312 21:12:47.641186 31456 generic.go:334] "Generic (PLEG): container finished" podID="b8aa8296-ed9b-4b37-8ab4-791b1342140f" containerID="0217824df4e2de4a6e66903135737bb67e2b0fdba4f510dd20fc536aefc8d881" exitCode=0 Mar 12 21:12:47.641939 master-0 kubenswrapper[31456]: I0312 21:12:47.641314 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7769569c45-tgbjx" event={"ID":"b8aa8296-ed9b-4b37-8ab4-791b1342140f","Type":"ContainerDied","Data":"0217824df4e2de4a6e66903135737bb67e2b0fdba4f510dd20fc536aefc8d881"} Mar 12 21:12:47.660859 master-0 kubenswrapper[31456]: I0312 21:12:47.654519 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c3679eeb-ec01-49e3-9049-faf3f0235ea0","Type":"ContainerStarted","Data":"c4ed0960cf9bc2557dc0e5df8af9003d82bfa6fb1a701198446a2c35d692525b"} Mar 12 21:12:47.713794 master-0 kubenswrapper[31456]: I0312 21:12:47.713710 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=7.097937658 podStartE2EDuration="11.713688088s" podCreationTimestamp="2026-03-12 21:12:36 +0000 UTC" firstStartedPulling="2026-03-12 21:12:41.017579573 +0000 UTC m=+222.092184941" lastFinishedPulling="2026-03-12 21:12:45.633330033 +0000 UTC m=+226.707935371" observedRunningTime="2026-03-12 21:12:47.700581747 +0000 UTC m=+228.775187076" watchObservedRunningTime="2026-03-12 21:12:47.713688088 +0000 UTC m=+228.788293436" Mar 12 21:12:47.786747 master-0 kubenswrapper[31456]: I0312 21:12:47.786680 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 12 21:12:48.666446 master-0 kubenswrapper[31456]: I0312 21:12:48.666366 31456 generic.go:334] "Generic (PLEG): container finished" 
podID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" containerID="b744004305210d6cf91b56c4695c6669956350074d5a599f4f9f046b761ae2d3" exitCode=0 Mar 12 21:12:48.667481 master-0 kubenswrapper[31456]: I0312 21:12:48.666458 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4d4f359e-9494-4501-9a3d-9be8ef5b46a3","Type":"ContainerDied","Data":"b744004305210d6cf91b56c4695c6669956350074d5a599f4f9f046b761ae2d3"} Mar 12 21:12:48.667481 master-0 kubenswrapper[31456]: I0312 21:12:48.666535 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4d4f359e-9494-4501-9a3d-9be8ef5b46a3","Type":"ContainerStarted","Data":"805c5ce472b8ebbbff3055f2cefbf409beee3cad096e80242ec45b3f935c5084"} Mar 12 21:12:52.705223 master-0 kubenswrapper[31456]: I0312 21:12:52.705169 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4d4f359e-9494-4501-9a3d-9be8ef5b46a3","Type":"ContainerStarted","Data":"bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d"} Mar 12 21:12:52.705223 master-0 kubenswrapper[31456]: I0312 21:12:52.705217 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4d4f359e-9494-4501-9a3d-9be8ef5b46a3","Type":"ContainerStarted","Data":"eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583"} Mar 12 21:12:52.705223 master-0 kubenswrapper[31456]: I0312 21:12:52.705232 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4d4f359e-9494-4501-9a3d-9be8ef5b46a3","Type":"ContainerStarted","Data":"7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61"} Mar 12 21:12:53.722057 master-0 kubenswrapper[31456]: I0312 21:12:53.721908 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"4d4f359e-9494-4501-9a3d-9be8ef5b46a3","Type":"ContainerStarted","Data":"4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210"} Mar 12 21:12:53.722891 master-0 kubenswrapper[31456]: I0312 21:12:53.722089 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4d4f359e-9494-4501-9a3d-9be8ef5b46a3","Type":"ContainerStarted","Data":"e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed"} Mar 12 21:12:53.722891 master-0 kubenswrapper[31456]: I0312 21:12:53.722114 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4d4f359e-9494-4501-9a3d-9be8ef5b46a3","Type":"ContainerStarted","Data":"e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7"} Mar 12 21:12:53.795889 master-0 kubenswrapper[31456]: I0312 21:12:53.793275 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=4.38537103 podStartE2EDuration="7.793240538s" podCreationTimestamp="2026-03-12 21:12:46 +0000 UTC" firstStartedPulling="2026-03-12 21:12:48.668848139 +0000 UTC m=+229.743453507" lastFinishedPulling="2026-03-12 21:12:52.076717647 +0000 UTC m=+233.151323015" observedRunningTime="2026-03-12 21:12:53.779415581 +0000 UTC m=+234.854020999" watchObservedRunningTime="2026-03-12 21:12:53.793240538 +0000 UTC m=+234.867845906" Mar 12 21:12:55.727383 master-0 kubenswrapper[31456]: E0312 21:12:55.727290 31456 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="64b9a05770bb310bea0404d8355c2aa38ee422f76a4b2231f1bff9d613de0576" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 21:12:55.729539 master-0 kubenswrapper[31456]: E0312 21:12:55.729473 31456 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code 
= Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="64b9a05770bb310bea0404d8355c2aa38ee422f76a4b2231f1bff9d613de0576" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 21:12:55.732317 master-0 kubenswrapper[31456]: E0312 21:12:55.732153 31456 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="64b9a05770bb310bea0404d8355c2aa38ee422f76a4b2231f1bff9d613de0576" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 21:12:55.732317 master-0 kubenswrapper[31456]: E0312 21:12:55.732256 31456 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-s6flb" podUID="66a747ac-6702-47d8-b2e5-a7d9ad827732" containerName="kube-multus-additional-cni-plugins" Mar 12 21:12:57.327229 master-0 kubenswrapper[31456]: I0312 21:12:57.327105 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:13:01.758301 master-0 kubenswrapper[31456]: I0312 21:13:01.758210 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-5cbd49d755-tfbt5"] Mar 12 21:13:01.760778 master-0 kubenswrapper[31456]: I0312 21:13:01.760692 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5cbd49d755-tfbt5" Mar 12 21:13:01.768347 master-0 kubenswrapper[31456]: I0312 21:13:01.768279 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 12 21:13:01.771080 master-0 kubenswrapper[31456]: I0312 21:13:01.770998 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 12 21:13:01.777388 master-0 kubenswrapper[31456]: I0312 21:13:01.777348 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-5cbd49d755-tfbt5"] Mar 12 21:13:01.854115 master-0 kubenswrapper[31456]: I0312 21:13:01.854050 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/06b246c0-d552-483f-85f8-d16566b9eb30-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-tfbt5\" (UID: \"06b246c0-d552-483f-85f8-d16566b9eb30\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-tfbt5" Mar 12 21:13:01.854347 master-0 kubenswrapper[31456]: I0312 21:13:01.854140 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/06b246c0-d552-483f-85f8-d16566b9eb30-nginx-conf\") pod \"networking-console-plugin-5cbd49d755-tfbt5\" (UID: \"06b246c0-d552-483f-85f8-d16566b9eb30\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-tfbt5" Mar 12 21:13:01.955479 master-0 kubenswrapper[31456]: I0312 21:13:01.955408 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/06b246c0-d552-483f-85f8-d16566b9eb30-networking-console-plugin-cert\") pod 
\"networking-console-plugin-5cbd49d755-tfbt5\" (UID: \"06b246c0-d552-483f-85f8-d16566b9eb30\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-tfbt5" Mar 12 21:13:01.955691 master-0 kubenswrapper[31456]: I0312 21:13:01.955530 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/06b246c0-d552-483f-85f8-d16566b9eb30-nginx-conf\") pod \"networking-console-plugin-5cbd49d755-tfbt5\" (UID: \"06b246c0-d552-483f-85f8-d16566b9eb30\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-tfbt5" Mar 12 21:13:01.955691 master-0 kubenswrapper[31456]: E0312 21:13:01.955586 31456 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Mar 12 21:13:01.955691 master-0 kubenswrapper[31456]: E0312 21:13:01.955664 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06b246c0-d552-483f-85f8-d16566b9eb30-networking-console-plugin-cert podName:06b246c0-d552-483f-85f8-d16566b9eb30 nodeName:}" failed. No retries permitted until 2026-03-12 21:13:02.455644505 +0000 UTC m=+243.530249853 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/06b246c0-d552-483f-85f8-d16566b9eb30-networking-console-plugin-cert") pod "networking-console-plugin-5cbd49d755-tfbt5" (UID: "06b246c0-d552-483f-85f8-d16566b9eb30") : secret "networking-console-plugin-cert" not found Mar 12 21:13:01.956480 master-0 kubenswrapper[31456]: I0312 21:13:01.956440 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/06b246c0-d552-483f-85f8-d16566b9eb30-nginx-conf\") pod \"networking-console-plugin-5cbd49d755-tfbt5\" (UID: \"06b246c0-d552-483f-85f8-d16566b9eb30\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-tfbt5" Mar 12 21:13:02.463244 master-0 kubenswrapper[31456]: I0312 21:13:02.463160 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/06b246c0-d552-483f-85f8-d16566b9eb30-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-tfbt5\" (UID: \"06b246c0-d552-483f-85f8-d16566b9eb30\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-tfbt5" Mar 12 21:13:02.467339 master-0 kubenswrapper[31456]: I0312 21:13:02.467284 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/06b246c0-d552-483f-85f8-d16566b9eb30-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-tfbt5\" (UID: \"06b246c0-d552-483f-85f8-d16566b9eb30\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-tfbt5" Mar 12 21:13:02.695606 master-0 kubenswrapper[31456]: I0312 21:13:02.695530 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5cbd49d755-tfbt5" Mar 12 21:13:03.226169 master-0 kubenswrapper[31456]: I0312 21:13:03.226099 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-5cbd49d755-tfbt5"] Mar 12 21:13:03.847753 master-0 kubenswrapper[31456]: I0312 21:13:03.847651 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5cbd49d755-tfbt5" event={"ID":"06b246c0-d552-483f-85f8-d16566b9eb30","Type":"ContainerStarted","Data":"70adf2cd6e75229de5711dcaeec195f52f2bad39359154cbc4ac8842062b5409"} Mar 12 21:13:05.727358 master-0 kubenswrapper[31456]: E0312 21:13:05.727279 31456 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="64b9a05770bb310bea0404d8355c2aa38ee422f76a4b2231f1bff9d613de0576" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 21:13:05.729168 master-0 kubenswrapper[31456]: E0312 21:13:05.729074 31456 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="64b9a05770bb310bea0404d8355c2aa38ee422f76a4b2231f1bff9d613de0576" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 21:13:05.731209 master-0 kubenswrapper[31456]: E0312 21:13:05.731104 31456 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="64b9a05770bb310bea0404d8355c2aa38ee422f76a4b2231f1bff9d613de0576" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 12 21:13:05.731366 master-0 kubenswrapper[31456]: E0312 21:13:05.731211 31456 prober.go:104] "Probe errored" err="rpc error: 
code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-s6flb" podUID="66a747ac-6702-47d8-b2e5-a7d9ad827732" containerName="kube-multus-additional-cni-plugins" Mar 12 21:13:05.874243 master-0 kubenswrapper[31456]: I0312 21:13:05.874162 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5cbd49d755-tfbt5" event={"ID":"06b246c0-d552-483f-85f8-d16566b9eb30","Type":"ContainerStarted","Data":"5dea2adad9337957c0b0acbd3e4003757b8a62ac7b204678c00dd263010b961b"} Mar 12 21:13:05.905881 master-0 kubenswrapper[31456]: I0312 21:13:05.903474 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-5cbd49d755-tfbt5" podStartSLOduration=3.115925621 podStartE2EDuration="4.903449545s" podCreationTimestamp="2026-03-12 21:13:01 +0000 UTC" firstStartedPulling="2026-03-12 21:13:03.23122822 +0000 UTC m=+244.305833558" lastFinishedPulling="2026-03-12 21:13:05.018752154 +0000 UTC m=+246.093357482" observedRunningTime="2026-03-12 21:13:05.895321976 +0000 UTC m=+246.969927374" watchObservedRunningTime="2026-03-12 21:13:05.903449545 +0000 UTC m=+246.978054883" Mar 12 21:13:08.230336 master-0 kubenswrapper[31456]: I0312 21:13:08.230261 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 12 21:13:08.231405 master-0 kubenswrapper[31456]: I0312 21:13:08.231374 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 12 21:13:08.233927 master-0 kubenswrapper[31456]: I0312 21:13:08.233883 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-v74cb" Mar 12 21:13:08.235271 master-0 kubenswrapper[31456]: I0312 21:13:08.235233 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 12 21:13:08.248370 master-0 kubenswrapper[31456]: I0312 21:13:08.248323 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 12 21:13:08.279871 master-0 kubenswrapper[31456]: I0312 21:13:08.279799 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c58a6a80-48e7-428e-be7a-d81dfc726450-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"c58a6a80-48e7-428e-be7a-d81dfc726450\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 12 21:13:08.280078 master-0 kubenswrapper[31456]: I0312 21:13:08.279890 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c58a6a80-48e7-428e-be7a-d81dfc726450-var-lock\") pod \"installer-5-master-0\" (UID: \"c58a6a80-48e7-428e-be7a-d81dfc726450\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 12 21:13:08.280078 master-0 kubenswrapper[31456]: I0312 21:13:08.280023 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c58a6a80-48e7-428e-be7a-d81dfc726450-kube-api-access\") pod \"installer-5-master-0\" (UID: \"c58a6a80-48e7-428e-be7a-d81dfc726450\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 12 21:13:08.381607 master-0 kubenswrapper[31456]: I0312 21:13:08.381537 31456 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c58a6a80-48e7-428e-be7a-d81dfc726450-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"c58a6a80-48e7-428e-be7a-d81dfc726450\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 12 21:13:08.381607 master-0 kubenswrapper[31456]: I0312 21:13:08.381586 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c58a6a80-48e7-428e-be7a-d81dfc726450-var-lock\") pod \"installer-5-master-0\" (UID: \"c58a6a80-48e7-428e-be7a-d81dfc726450\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 12 21:13:08.381933 master-0 kubenswrapper[31456]: I0312 21:13:08.381718 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c58a6a80-48e7-428e-be7a-d81dfc726450-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"c58a6a80-48e7-428e-be7a-d81dfc726450\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 12 21:13:08.382029 master-0 kubenswrapper[31456]: I0312 21:13:08.381976 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c58a6a80-48e7-428e-be7a-d81dfc726450-var-lock\") pod \"installer-5-master-0\" (UID: \"c58a6a80-48e7-428e-be7a-d81dfc726450\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 12 21:13:08.382213 master-0 kubenswrapper[31456]: I0312 21:13:08.382164 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c58a6a80-48e7-428e-be7a-d81dfc726450-kube-api-access\") pod \"installer-5-master-0\" (UID: \"c58a6a80-48e7-428e-be7a-d81dfc726450\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 12 21:13:08.406577 master-0 kubenswrapper[31456]: I0312 21:13:08.406513 31456 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c58a6a80-48e7-428e-be7a-d81dfc726450-kube-api-access\") pod \"installer-5-master-0\" (UID: \"c58a6a80-48e7-428e-be7a-d81dfc726450\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 12 21:13:08.557692 master-0 kubenswrapper[31456]: I0312 21:13:08.557514 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 12 21:13:09.111939 master-0 kubenswrapper[31456]: I0312 21:13:09.111105 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 12 21:13:09.624118 master-0 kubenswrapper[31456]: I0312 21:13:09.624073 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-s6flb_66a747ac-6702-47d8-b2e5-a7d9ad827732/kube-multus-additional-cni-plugins/0.log" Mar 12 21:13:09.624621 master-0 kubenswrapper[31456]: I0312 21:13:09.624170 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-s6flb" Mar 12 21:13:09.653248 master-0 kubenswrapper[31456]: E0312 21:13:09.652963 31456 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a747ac_6702_47d8_b2e5_a7d9ad827732.slice/crio-conmon-64b9a05770bb310bea0404d8355c2aa38ee422f76a4b2231f1bff9d613de0576.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a747ac_6702_47d8_b2e5_a7d9ad827732.slice/crio-64b9a05770bb310bea0404d8355c2aa38ee422f76a4b2231f1bff9d613de0576.scope\": RecentStats: unable to find data in memory cache]" Mar 12 21:13:09.708311 master-0 kubenswrapper[31456]: I0312 21:13:09.708233 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/66a747ac-6702-47d8-b2e5-a7d9ad827732-tuning-conf-dir\") pod \"66a747ac-6702-47d8-b2e5-a7d9ad827732\" (UID: \"66a747ac-6702-47d8-b2e5-a7d9ad827732\") " Mar 12 21:13:09.708530 master-0 kubenswrapper[31456]: I0312 21:13:09.708351 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/66a747ac-6702-47d8-b2e5-a7d9ad827732-cni-sysctl-allowlist\") pod \"66a747ac-6702-47d8-b2e5-a7d9ad827732\" (UID: \"66a747ac-6702-47d8-b2e5-a7d9ad827732\") " Mar 12 21:13:09.708530 master-0 kubenswrapper[31456]: I0312 21:13:09.708482 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64zp8\" (UniqueName: \"kubernetes.io/projected/66a747ac-6702-47d8-b2e5-a7d9ad827732-kube-api-access-64zp8\") pod \"66a747ac-6702-47d8-b2e5-a7d9ad827732\" (UID: \"66a747ac-6702-47d8-b2e5-a7d9ad827732\") " Mar 12 21:13:09.708602 master-0 kubenswrapper[31456]: I0312 21:13:09.708555 31456 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/66a747ac-6702-47d8-b2e5-a7d9ad827732-ready\") pod \"66a747ac-6702-47d8-b2e5-a7d9ad827732\" (UID: \"66a747ac-6702-47d8-b2e5-a7d9ad827732\") " Mar 12 21:13:09.708602 master-0 kubenswrapper[31456]: I0312 21:13:09.708562 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66a747ac-6702-47d8-b2e5-a7d9ad827732-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "66a747ac-6702-47d8-b2e5-a7d9ad827732" (UID: "66a747ac-6702-47d8-b2e5-a7d9ad827732"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:13:09.709003 master-0 kubenswrapper[31456]: I0312 21:13:09.708973 31456 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/66a747ac-6702-47d8-b2e5-a7d9ad827732-tuning-conf-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 21:13:09.709322 master-0 kubenswrapper[31456]: I0312 21:13:09.709265 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66a747ac-6702-47d8-b2e5-a7d9ad827732-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "66a747ac-6702-47d8-b2e5-a7d9ad827732" (UID: "66a747ac-6702-47d8-b2e5-a7d9ad827732"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:13:09.709472 master-0 kubenswrapper[31456]: I0312 21:13:09.709411 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66a747ac-6702-47d8-b2e5-a7d9ad827732-ready" (OuterVolumeSpecName: "ready") pod "66a747ac-6702-47d8-b2e5-a7d9ad827732" (UID: "66a747ac-6702-47d8-b2e5-a7d9ad827732"). InnerVolumeSpecName "ready". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 21:13:09.713069 master-0 kubenswrapper[31456]: I0312 21:13:09.711587 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66a747ac-6702-47d8-b2e5-a7d9ad827732-kube-api-access-64zp8" (OuterVolumeSpecName: "kube-api-access-64zp8") pod "66a747ac-6702-47d8-b2e5-a7d9ad827732" (UID: "66a747ac-6702-47d8-b2e5-a7d9ad827732"). InnerVolumeSpecName "kube-api-access-64zp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:13:09.810556 master-0 kubenswrapper[31456]: I0312 21:13:09.810441 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64zp8\" (UniqueName: \"kubernetes.io/projected/66a747ac-6702-47d8-b2e5-a7d9ad827732-kube-api-access-64zp8\") on node \"master-0\" DevicePath \"\"" Mar 12 21:13:09.810556 master-0 kubenswrapper[31456]: I0312 21:13:09.810497 31456 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/66a747ac-6702-47d8-b2e5-a7d9ad827732-ready\") on node \"master-0\" DevicePath \"\"" Mar 12 21:13:09.810556 master-0 kubenswrapper[31456]: I0312 21:13:09.810517 31456 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/66a747ac-6702-47d8-b2e5-a7d9ad827732-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\"" Mar 12 21:13:09.918479 master-0 kubenswrapper[31456]: I0312 21:13:09.918358 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"c58a6a80-48e7-428e-be7a-d81dfc726450","Type":"ContainerStarted","Data":"93af0f8bf81872fda73aa5c8b6d081c27e1b632575de6e7f9a4fa29a0ae3365f"} Mar 12 21:13:09.918479 master-0 kubenswrapper[31456]: I0312 21:13:09.918483 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" 
event={"ID":"c58a6a80-48e7-428e-be7a-d81dfc726450","Type":"ContainerStarted","Data":"10fe00b148d399daa19476f19f50961c2dbe6bb7c9880ab7ed25de75f1d968f5"} Mar 12 21:13:09.920917 master-0 kubenswrapper[31456]: I0312 21:13:09.920803 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-s6flb_66a747ac-6702-47d8-b2e5-a7d9ad827732/kube-multus-additional-cni-plugins/0.log" Mar 12 21:13:09.921119 master-0 kubenswrapper[31456]: I0312 21:13:09.920912 31456 generic.go:334] "Generic (PLEG): container finished" podID="66a747ac-6702-47d8-b2e5-a7d9ad827732" containerID="64b9a05770bb310bea0404d8355c2aa38ee422f76a4b2231f1bff9d613de0576" exitCode=137 Mar 12 21:13:09.921119 master-0 kubenswrapper[31456]: I0312 21:13:09.920958 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-s6flb" event={"ID":"66a747ac-6702-47d8-b2e5-a7d9ad827732","Type":"ContainerDied","Data":"64b9a05770bb310bea0404d8355c2aa38ee422f76a4b2231f1bff9d613de0576"} Mar 12 21:13:09.921119 master-0 kubenswrapper[31456]: I0312 21:13:09.921004 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-s6flb" event={"ID":"66a747ac-6702-47d8-b2e5-a7d9ad827732","Type":"ContainerDied","Data":"05f4f14ed0baadf172821610677e739c4e402a42a643372d5a67655c12f69617"} Mar 12 21:13:09.921119 master-0 kubenswrapper[31456]: I0312 21:13:09.921036 31456 scope.go:117] "RemoveContainer" containerID="64b9a05770bb310bea0404d8355c2aa38ee422f76a4b2231f1bff9d613de0576" Mar 12 21:13:09.921420 master-0 kubenswrapper[31456]: I0312 21:13:09.921236 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-s6flb" Mar 12 21:13:09.945324 master-0 kubenswrapper[31456]: I0312 21:13:09.945234 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-5-master-0" podStartSLOduration=1.9452151500000001 podStartE2EDuration="1.94521515s" podCreationTimestamp="2026-03-12 21:13:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:13:09.938454225 +0000 UTC m=+251.013059603" watchObservedRunningTime="2026-03-12 21:13:09.94521515 +0000 UTC m=+251.019820488" Mar 12 21:13:09.989785 master-0 kubenswrapper[31456]: I0312 21:13:09.989721 31456 scope.go:117] "RemoveContainer" containerID="64b9a05770bb310bea0404d8355c2aa38ee422f76a4b2231f1bff9d613de0576" Mar 12 21:13:09.990765 master-0 kubenswrapper[31456]: E0312 21:13:09.990657 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64b9a05770bb310bea0404d8355c2aa38ee422f76a4b2231f1bff9d613de0576\": container with ID starting with 64b9a05770bb310bea0404d8355c2aa38ee422f76a4b2231f1bff9d613de0576 not found: ID does not exist" containerID="64b9a05770bb310bea0404d8355c2aa38ee422f76a4b2231f1bff9d613de0576" Mar 12 21:13:09.990890 master-0 kubenswrapper[31456]: I0312 21:13:09.990839 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64b9a05770bb310bea0404d8355c2aa38ee422f76a4b2231f1bff9d613de0576"} err="failed to get container status \"64b9a05770bb310bea0404d8355c2aa38ee422f76a4b2231f1bff9d613de0576\": rpc error: code = NotFound desc = could not find container \"64b9a05770bb310bea0404d8355c2aa38ee422f76a4b2231f1bff9d613de0576\": container with ID starting with 64b9a05770bb310bea0404d8355c2aa38ee422f76a4b2231f1bff9d613de0576 not found: ID does not exist" Mar 12 21:13:09.998717 master-0 
kubenswrapper[31456]: I0312 21:13:09.998677 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-s6flb"] Mar 12 21:13:10.010586 master-0 kubenswrapper[31456]: I0312 21:13:10.010515 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-s6flb"] Mar 12 21:13:11.183896 master-0 kubenswrapper[31456]: I0312 21:13:11.183780 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66a747ac-6702-47d8-b2e5-a7d9ad827732" path="/var/lib/kubelet/pods/66a747ac-6702-47d8-b2e5-a7d9ad827732/volumes" Mar 12 21:13:16.998338 master-0 kubenswrapper[31456]: I0312 21:13:16.997993 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-7769569c45-tgbjx_b8aa8296-ed9b-4b37-8ab4-791b1342140f/multus-admission-controller/0.log" Mar 12 21:13:16.998338 master-0 kubenswrapper[31456]: I0312 21:13:16.998084 31456 generic.go:334] "Generic (PLEG): container finished" podID="b8aa8296-ed9b-4b37-8ab4-791b1342140f" containerID="0801412eec909b7451c3ea16fc183a3c0aa018264741173074d4a6d25bbb8e1c" exitCode=137 Mar 12 21:13:16.998338 master-0 kubenswrapper[31456]: I0312 21:13:16.998129 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7769569c45-tgbjx" event={"ID":"b8aa8296-ed9b-4b37-8ab4-791b1342140f","Type":"ContainerDied","Data":"0801412eec909b7451c3ea16fc183a3c0aa018264741173074d4a6d25bbb8e1c"} Mar 12 21:13:17.230533 master-0 kubenswrapper[31456]: I0312 21:13:17.230458 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-7769569c45-tgbjx_b8aa8296-ed9b-4b37-8ab4-791b1342140f/multus-admission-controller/0.log" Mar 12 21:13:17.230533 master-0 kubenswrapper[31456]: I0312 21:13:17.230545 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-7769569c45-tgbjx" Mar 12 21:13:17.401744 master-0 kubenswrapper[31456]: I0312 21:13:17.400557 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbcts\" (UniqueName: \"kubernetes.io/projected/b8aa8296-ed9b-4b37-8ab4-791b1342140f-kube-api-access-nbcts\") pod \"b8aa8296-ed9b-4b37-8ab4-791b1342140f\" (UID: \"b8aa8296-ed9b-4b37-8ab4-791b1342140f\") " Mar 12 21:13:17.402195 master-0 kubenswrapper[31456]: I0312 21:13:17.401834 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b8aa8296-ed9b-4b37-8ab4-791b1342140f-webhook-certs\") pod \"b8aa8296-ed9b-4b37-8ab4-791b1342140f\" (UID: \"b8aa8296-ed9b-4b37-8ab4-791b1342140f\") " Mar 12 21:13:17.407046 master-0 kubenswrapper[31456]: I0312 21:13:17.406977 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8aa8296-ed9b-4b37-8ab4-791b1342140f-kube-api-access-nbcts" (OuterVolumeSpecName: "kube-api-access-nbcts") pod "b8aa8296-ed9b-4b37-8ab4-791b1342140f" (UID: "b8aa8296-ed9b-4b37-8ab4-791b1342140f"). InnerVolumeSpecName "kube-api-access-nbcts". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:13:17.407352 master-0 kubenswrapper[31456]: I0312 21:13:17.407274 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8aa8296-ed9b-4b37-8ab4-791b1342140f-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "b8aa8296-ed9b-4b37-8ab4-791b1342140f" (UID: "b8aa8296-ed9b-4b37-8ab4-791b1342140f"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:13:17.504593 master-0 kubenswrapper[31456]: I0312 21:13:17.504263 31456 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b8aa8296-ed9b-4b37-8ab4-791b1342140f-webhook-certs\") on node \"master-0\" DevicePath \"\"" Mar 12 21:13:17.504593 master-0 kubenswrapper[31456]: I0312 21:13:17.504326 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbcts\" (UniqueName: \"kubernetes.io/projected/b8aa8296-ed9b-4b37-8ab4-791b1342140f-kube-api-access-nbcts\") on node \"master-0\" DevicePath \"\"" Mar 12 21:13:18.009412 master-0 kubenswrapper[31456]: I0312 21:13:18.009315 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-7769569c45-tgbjx_b8aa8296-ed9b-4b37-8ab4-791b1342140f/multus-admission-controller/0.log" Mar 12 21:13:18.010367 master-0 kubenswrapper[31456]: I0312 21:13:18.009439 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7769569c45-tgbjx" event={"ID":"b8aa8296-ed9b-4b37-8ab4-791b1342140f","Type":"ContainerDied","Data":"4c950507e89f9d50ecc81fde55a0e288bca97183fc18e65a4bf636fb9e195662"} Mar 12 21:13:18.010367 master-0 kubenswrapper[31456]: I0312 21:13:18.009495 31456 scope.go:117] "RemoveContainer" containerID="0217824df4e2de4a6e66903135737bb67e2b0fdba4f510dd20fc536aefc8d881" Mar 12 21:13:18.010367 master-0 kubenswrapper[31456]: I0312 21:13:18.009723 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-7769569c45-tgbjx" Mar 12 21:13:18.034447 master-0 kubenswrapper[31456]: I0312 21:13:18.034400 31456 scope.go:117] "RemoveContainer" containerID="0801412eec909b7451c3ea16fc183a3c0aa018264741173074d4a6d25bbb8e1c" Mar 12 21:13:18.075721 master-0 kubenswrapper[31456]: I0312 21:13:18.075650 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-7769569c45-tgbjx"] Mar 12 21:13:18.084020 master-0 kubenswrapper[31456]: I0312 21:13:18.083943 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-7769569c45-tgbjx"] Mar 12 21:13:19.192535 master-0 kubenswrapper[31456]: I0312 21:13:19.192445 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8aa8296-ed9b-4b37-8ab4-791b1342140f" path="/var/lib/kubelet/pods/b8aa8296-ed9b-4b37-8ab4-791b1342140f/volumes" Mar 12 21:13:19.740723 master-0 kubenswrapper[31456]: I0312 21:13:19.740641 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access\") pod \"installer-3-master-0\" (UID: \"222b53b1-7e5c-49d5-9795-fec4d0547398\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 21:13:19.749848 master-0 kubenswrapper[31456]: I0312 21:13:19.749732 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access\") pod \"installer-3-master-0\" (UID: \"222b53b1-7e5c-49d5-9795-fec4d0547398\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 12 21:13:19.842265 master-0 kubenswrapper[31456]: I0312 21:13:19.842188 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access\") pod \"222b53b1-7e5c-49d5-9795-fec4d0547398\" (UID: \"222b53b1-7e5c-49d5-9795-fec4d0547398\") " Mar 12 21:13:19.846173 master-0 kubenswrapper[31456]: I0312 21:13:19.846099 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "222b53b1-7e5c-49d5-9795-fec4d0547398" (UID: "222b53b1-7e5c-49d5-9795-fec4d0547398"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:13:19.945173 master-0 kubenswrapper[31456]: I0312 21:13:19.945101 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/222b53b1-7e5c-49d5-9795-fec4d0547398-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 21:13:39.900598 master-0 kubenswrapper[31456]: E0312 21:13:39.900462 31456 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[trusted-ca], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-console-operator/console-operator-6c7fb6b958-2lj8z" podUID="41520992-0499-4a93-bd1c-7814ffb84164" Mar 12 21:13:40.224071 master-0 kubenswrapper[31456]: I0312 21:13:40.223898 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-6c7fb6b958-2lj8z" Mar 12 21:13:42.876775 master-0 kubenswrapper[31456]: I0312 21:13:42.876677 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41520992-0499-4a93-bd1c-7814ffb84164-trusted-ca\") pod \"console-operator-6c7fb6b958-2lj8z\" (UID: \"41520992-0499-4a93-bd1c-7814ffb84164\") " pod="openshift-console-operator/console-operator-6c7fb6b958-2lj8z" Mar 12 21:13:42.881400 master-0 kubenswrapper[31456]: I0312 21:13:42.881341 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41520992-0499-4a93-bd1c-7814ffb84164-trusted-ca\") pod \"console-operator-6c7fb6b958-2lj8z\" (UID: \"41520992-0499-4a93-bd1c-7814ffb84164\") " pod="openshift-console-operator/console-operator-6c7fb6b958-2lj8z" Mar 12 21:13:42.928728 master-0 kubenswrapper[31456]: I0312 21:13:42.928655 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-6gf9b" Mar 12 21:13:42.936528 master-0 kubenswrapper[31456]: I0312 21:13:42.936426 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-6c7fb6b958-2lj8z" Mar 12 21:13:43.529200 master-0 kubenswrapper[31456]: W0312 21:13:43.529031 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41520992_0499_4a93_bd1c_7814ffb84164.slice/crio-de4325d84f088bbbf36abbcaed71f6a850015c787dbf8ff2844204b84b385468 WatchSource:0}: Error finding container de4325d84f088bbbf36abbcaed71f6a850015c787dbf8ff2844204b84b385468: Status 404 returned error can't find the container with id de4325d84f088bbbf36abbcaed71f6a850015c787dbf8ff2844204b84b385468 Mar 12 21:13:43.530961 master-0 kubenswrapper[31456]: I0312 21:13:43.530878 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-6c7fb6b958-2lj8z"] Mar 12 21:13:44.265398 master-0 kubenswrapper[31456]: I0312 21:13:44.265284 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-6c7fb6b958-2lj8z" event={"ID":"41520992-0499-4a93-bd1c-7814ffb84164","Type":"ContainerStarted","Data":"de4325d84f088bbbf36abbcaed71f6a850015c787dbf8ff2844204b84b385468"} Mar 12 21:13:46.288324 master-0 kubenswrapper[31456]: I0312 21:13:46.288099 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-6c7fb6b958-2lj8z" event={"ID":"41520992-0499-4a93-bd1c-7814ffb84164","Type":"ContainerStarted","Data":"80d28ede97f5aca1e6ade1ac687fb1b536c42a429660b949108da9675c6172a2"} Mar 12 21:13:46.289563 master-0 kubenswrapper[31456]: I0312 21:13:46.288930 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-6c7fb6b958-2lj8z" Mar 12 21:13:46.322542 master-0 kubenswrapper[31456]: I0312 21:13:46.322391 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-6c7fb6b958-2lj8z" 
podStartSLOduration=253.026698813 podStartE2EDuration="4m15.322343213s" podCreationTimestamp="2026-03-12 21:09:31 +0000 UTC" firstStartedPulling="2026-03-12 21:13:43.53144375 +0000 UTC m=+284.606049098" lastFinishedPulling="2026-03-12 21:13:45.82708816 +0000 UTC m=+286.901693498" observedRunningTime="2026-03-12 21:13:46.319664317 +0000 UTC m=+287.394269655" watchObservedRunningTime="2026-03-12 21:13:46.322343213 +0000 UTC m=+287.396948561" Mar 12 21:13:46.593745 master-0 kubenswrapper[31456]: I0312 21:13:46.593688 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-6c7fb6b958-2lj8z" Mar 12 21:13:46.857820 master-0 kubenswrapper[31456]: I0312 21:13:46.857668 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-84f57b9877-j2x97"] Mar 12 21:13:46.858085 master-0 kubenswrapper[31456]: E0312 21:13:46.858060 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a747ac-6702-47d8-b2e5-a7d9ad827732" containerName="kube-multus-additional-cni-plugins" Mar 12 21:13:46.858085 master-0 kubenswrapper[31456]: I0312 21:13:46.858082 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a747ac-6702-47d8-b2e5-a7d9ad827732" containerName="kube-multus-additional-cni-plugins" Mar 12 21:13:46.858161 master-0 kubenswrapper[31456]: E0312 21:13:46.858099 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8aa8296-ed9b-4b37-8ab4-791b1342140f" containerName="multus-admission-controller" Mar 12 21:13:46.858161 master-0 kubenswrapper[31456]: I0312 21:13:46.858107 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8aa8296-ed9b-4b37-8ab4-791b1342140f" containerName="multus-admission-controller" Mar 12 21:13:46.858161 master-0 kubenswrapper[31456]: E0312 21:13:46.858120 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8aa8296-ed9b-4b37-8ab4-791b1342140f" containerName="kube-rbac-proxy" Mar 12 21:13:46.858161 
master-0 kubenswrapper[31456]: I0312 21:13:46.858127 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8aa8296-ed9b-4b37-8ab4-791b1342140f" containerName="kube-rbac-proxy" Mar 12 21:13:46.858307 master-0 kubenswrapper[31456]: I0312 21:13:46.858284 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8aa8296-ed9b-4b37-8ab4-791b1342140f" containerName="kube-rbac-proxy" Mar 12 21:13:46.858350 master-0 kubenswrapper[31456]: I0312 21:13:46.858339 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="66a747ac-6702-47d8-b2e5-a7d9ad827732" containerName="kube-multus-additional-cni-plugins" Mar 12 21:13:46.858398 master-0 kubenswrapper[31456]: I0312 21:13:46.858361 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8aa8296-ed9b-4b37-8ab4-791b1342140f" containerName="multus-admission-controller" Mar 12 21:13:46.859001 master-0 kubenswrapper[31456]: I0312 21:13:46.858973 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-84f57b9877-j2x97" Mar 12 21:13:46.861391 master-0 kubenswrapper[31456]: I0312 21:13:46.861337 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 12 21:13:46.861534 master-0 kubenswrapper[31456]: I0312 21:13:46.861345 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 12 21:13:46.863393 master-0 kubenswrapper[31456]: I0312 21:13:46.863357 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-qh6sj" Mar 12 21:13:46.880531 master-0 kubenswrapper[31456]: I0312 21:13:46.880473 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-84f57b9877-j2x97"] Mar 12 21:13:46.950828 master-0 kubenswrapper[31456]: I0312 21:13:46.950757 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-rhkg2\" (UniqueName: \"kubernetes.io/projected/47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46-kube-api-access-rhkg2\") pod \"downloads-84f57b9877-j2x97\" (UID: \"47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46\") " pod="openshift-console/downloads-84f57b9877-j2x97" Mar 12 21:13:47.053348 master-0 kubenswrapper[31456]: I0312 21:13:47.053196 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhkg2\" (UniqueName: \"kubernetes.io/projected/47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46-kube-api-access-rhkg2\") pod \"downloads-84f57b9877-j2x97\" (UID: \"47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46\") " pod="openshift-console/downloads-84f57b9877-j2x97" Mar 12 21:13:47.090127 master-0 kubenswrapper[31456]: I0312 21:13:47.090028 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhkg2\" (UniqueName: \"kubernetes.io/projected/47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46-kube-api-access-rhkg2\") pod \"downloads-84f57b9877-j2x97\" (UID: \"47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46\") " pod="openshift-console/downloads-84f57b9877-j2x97" Mar 12 21:13:47.176658 master-0 kubenswrapper[31456]: I0312 21:13:47.176481 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-84f57b9877-j2x97" Mar 12 21:13:47.328779 master-0 kubenswrapper[31456]: I0312 21:13:47.327530 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:13:47.396662 master-0 kubenswrapper[31456]: I0312 21:13:47.396425 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:13:47.463090 master-0 kubenswrapper[31456]: I0312 21:13:47.461447 31456 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 12 21:13:47.465151 master-0 kubenswrapper[31456]: I0312 21:13:47.463970 31456 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 12 21:13:47.465151 master-0 kubenswrapper[31456]: I0312 21:13:47.464109 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:13:47.465151 master-0 kubenswrapper[31456]: I0312 21:13:47.464252 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver" containerID="cri-o://2af82b5203922bffd1b52e551e34bf559e247f6df99d1b27190c8c1ceb99cc21" gracePeriod=15 Mar 12 21:13:47.465151 master-0 kubenswrapper[31456]: I0312 21:13:47.464404 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-check-endpoints" containerID="cri-o://4f144453d44ce86a0d7bd7fe15a62aadd5592eaf9c0618e7028c5d055870b33b" gracePeriod=15 Mar 12 21:13:47.465151 master-0 kubenswrapper[31456]: I0312 21:13:47.464450 31456 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://30bc9b247c27238c3eb4ad1976ad2cf0929403a4441faf1cefc74e18c8f37e98" gracePeriod=15 Mar 12 21:13:47.465151 master-0 kubenswrapper[31456]: I0312 21:13:47.464484 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://fa08db51a6d0fb71252af3791bc9bb2d78f468b9196d06ba9f3e5e5c3d6b5f8f" gracePeriod=15 Mar 12 21:13:47.465151 master-0 kubenswrapper[31456]: I0312 21:13:47.464522 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-syncer" containerID="cri-o://e9eeff91b485b3f4abe88559591484bd0ad23b44d8b5e79acbd75e6b1fa6f5ae" gracePeriod=15 Mar 12 21:13:47.465151 master-0 kubenswrapper[31456]: I0312 21:13:47.464555 31456 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 12 21:13:47.465151 master-0 kubenswrapper[31456]: E0312 21:13:47.464979 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver" Mar 12 21:13:47.465151 master-0 kubenswrapper[31456]: I0312 21:13:47.464995 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver" Mar 12 21:13:47.465151 master-0 kubenswrapper[31456]: E0312 21:13:47.465016 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48512e02022680c9d90092634f0fc146" containerName="setup" Mar 12 21:13:47.465151 master-0 kubenswrapper[31456]: I0312 21:13:47.465026 31456 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="48512e02022680c9d90092634f0fc146" containerName="setup" Mar 12 21:13:47.465151 master-0 kubenswrapper[31456]: E0312 21:13:47.465044 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-check-endpoints" Mar 12 21:13:47.465151 master-0 kubenswrapper[31456]: I0312 21:13:47.465052 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-check-endpoints" Mar 12 21:13:47.465151 master-0 kubenswrapper[31456]: E0312 21:13:47.465068 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-insecure-readyz" Mar 12 21:13:47.465151 master-0 kubenswrapper[31456]: I0312 21:13:47.465077 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-insecure-readyz" Mar 12 21:13:47.465151 master-0 kubenswrapper[31456]: E0312 21:13:47.465089 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-syncer" Mar 12 21:13:47.465151 master-0 kubenswrapper[31456]: I0312 21:13:47.465098 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-syncer" Mar 12 21:13:47.465151 master-0 kubenswrapper[31456]: E0312 21:13:47.465129 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-regeneration-controller" Mar 12 21:13:47.465151 master-0 kubenswrapper[31456]: I0312 21:13:47.465138 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-regeneration-controller" Mar 12 21:13:47.466972 master-0 kubenswrapper[31456]: I0312 21:13:47.465306 31456 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver" Mar 12 21:13:47.466972 master-0 kubenswrapper[31456]: I0312 21:13:47.465332 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-regeneration-controller" Mar 12 21:13:47.466972 master-0 kubenswrapper[31456]: I0312 21:13:47.465385 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-syncer" Mar 12 21:13:47.466972 master-0 kubenswrapper[31456]: I0312 21:13:47.465437 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-insecure-readyz" Mar 12 21:13:47.466972 master-0 kubenswrapper[31456]: I0312 21:13:47.465455 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-check-endpoints" Mar 12 21:13:47.478593 master-0 kubenswrapper[31456]: I0312 21:13:47.478372 31456 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="48512e02022680c9d90092634f0fc146" podUID="36d4251d3504cdc0ec85144c1379056c" Mar 12 21:13:47.563011 master-0 kubenswrapper[31456]: I0312 21:13:47.562450 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:13:47.563011 master-0 kubenswrapper[31456]: I0312 21:13:47.562500 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:13:47.563011 master-0 kubenswrapper[31456]: I0312 21:13:47.562529 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:13:47.563011 master-0 kubenswrapper[31456]: I0312 21:13:47.562555 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:13:47.563011 master-0 kubenswrapper[31456]: I0312 21:13:47.562583 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:13:47.563011 master-0 kubenswrapper[31456]: I0312 21:13:47.562609 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:13:47.563558 master-0 kubenswrapper[31456]: 
I0312 21:13:47.563238 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:13:47.563558 master-0 kubenswrapper[31456]: I0312 21:13:47.563366 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:13:47.602853 master-0 kubenswrapper[31456]: E0312 21:13:47.602379 31456 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:13:47.664618 master-0 kubenswrapper[31456]: I0312 21:13:47.664492 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:13:47.665215 master-0 kubenswrapper[31456]: I0312 21:13:47.664557 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" 
Mar 12 21:13:47.665215 master-0 kubenswrapper[31456]: I0312 21:13:47.664754 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 21:13:47.665215 master-0 kubenswrapper[31456]: I0312 21:13:47.664780 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 21:13:47.665215 master-0 kubenswrapper[31456]: I0312 21:13:47.664804 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 21:13:47.665215 master-0 kubenswrapper[31456]: I0312 21:13:47.664883 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 21:13:47.665215 master-0 kubenswrapper[31456]: I0312 21:13:47.664930 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 21:13:47.665215 master-0 kubenswrapper[31456]: I0312 21:13:47.664850 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 21:13:47.665215 master-0 kubenswrapper[31456]: I0312 21:13:47.664971 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 21:13:47.665215 master-0 kubenswrapper[31456]: I0312 21:13:47.664993 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 21:13:47.665215 master-0 kubenswrapper[31456]: I0312 21:13:47.665011 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 21:13:47.665215 master-0 kubenswrapper[31456]: I0312 21:13:47.665102 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 21:13:47.665215 master-0 kubenswrapper[31456]: I0312 21:13:47.665173 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 21:13:47.665832 master-0 kubenswrapper[31456]: I0312 21:13:47.665354 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 21:13:47.665832 master-0 kubenswrapper[31456]: I0312 21:13:47.665366 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 21:13:47.665832 master-0 kubenswrapper[31456]: I0312 21:13:47.665473 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 21:13:47.792139 master-0 kubenswrapper[31456]: E0312 21:13:47.792097 31456 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-podc58a6a80_48e7_428e_be7a_d81dfc726450.slice/crio-93af0f8bf81872fda73aa5c8b6d081c27e1b632575de6e7f9a4fa29a0ae3365f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podc58a6a80_48e7_428e_be7a_d81dfc726450.slice/crio-conmon-93af0f8bf81872fda73aa5c8b6d081c27e1b632575de6e7f9a4fa29a0ae3365f.scope\": RecentStats: unable to find data in memory cache]" Mar 12 21:13:47.903864 master-0 kubenswrapper[31456]: I0312 21:13:47.903752 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:13:47.940536 master-0 kubenswrapper[31456]: E0312 21:13:47.940455 31456 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 12 21:13:47.940536 master-0 kubenswrapper[31456]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downloads-84f57b9877-j2x97_openshift-console_47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46_0(5b21f0f16dde0b34c88176f550c0519ab4c91eade69e569d347ce6c1d247fdb1): error adding pod openshift-console_downloads-84f57b9877-j2x97 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5b21f0f16dde0b34c88176f550c0519ab4c91eade69e569d347ce6c1d247fdb1" Netns:"/var/run/netns/83667f56-9305-4ba5-a06b-b9f9737c35dc" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=downloads-84f57b9877-j2x97;K8S_POD_INFRA_CONTAINER_ID=5b21f0f16dde0b34c88176f550c0519ab4c91eade69e569d347ce6c1d247fdb1;K8S_POD_UID=47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46" Path:"" ERRORED: error configuring pod [openshift-console/downloads-84f57b9877-j2x97] networking: Multus: [openshift-console/downloads-84f57b9877-j2x97/47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod downloads-84f57b9877-j2x97 in out of cluster 
comm: SetNetworkStatus: failed to update the pod downloads-84f57b9877-j2x97 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-84f57b9877-j2x97?timeout=1m0s": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 21:13:47.940536 master-0 kubenswrapper[31456]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 12 21:13:47.940536 master-0 kubenswrapper[31456]: > Mar 12 21:13:47.940536 master-0 kubenswrapper[31456]: E0312 21:13:47.940540 31456 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 12 21:13:47.940536 master-0 kubenswrapper[31456]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downloads-84f57b9877-j2x97_openshift-console_47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46_0(5b21f0f16dde0b34c88176f550c0519ab4c91eade69e569d347ce6c1d247fdb1): error adding pod openshift-console_downloads-84f57b9877-j2x97 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5b21f0f16dde0b34c88176f550c0519ab4c91eade69e569d347ce6c1d247fdb1" Netns:"/var/run/netns/83667f56-9305-4ba5-a06b-b9f9737c35dc" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=downloads-84f57b9877-j2x97;K8S_POD_INFRA_CONTAINER_ID=5b21f0f16dde0b34c88176f550c0519ab4c91eade69e569d347ce6c1d247fdb1;K8S_POD_UID=47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46" Path:"" ERRORED: error configuring pod [openshift-console/downloads-84f57b9877-j2x97] networking: Multus: 
[openshift-console/downloads-84f57b9877-j2x97/47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod downloads-84f57b9877-j2x97 in out of cluster comm: SetNetworkStatus: failed to update the pod downloads-84f57b9877-j2x97 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-84f57b9877-j2x97?timeout=1m0s": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 21:13:47.940536 master-0 kubenswrapper[31456]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 12 21:13:47.940536 master-0 kubenswrapper[31456]: > pod="openshift-console/downloads-84f57b9877-j2x97" Mar 12 21:13:47.941206 master-0 kubenswrapper[31456]: E0312 21:13:47.940564 31456 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 12 21:13:47.941206 master-0 kubenswrapper[31456]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downloads-84f57b9877-j2x97_openshift-console_47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46_0(5b21f0f16dde0b34c88176f550c0519ab4c91eade69e569d347ce6c1d247fdb1): error adding pod openshift-console_downloads-84f57b9877-j2x97 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5b21f0f16dde0b34c88176f550c0519ab4c91eade69e569d347ce6c1d247fdb1" Netns:"/var/run/netns/83667f56-9305-4ba5-a06b-b9f9737c35dc" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=downloads-84f57b9877-j2x97;K8S_POD_INFRA_CONTAINER_ID=5b21f0f16dde0b34c88176f550c0519ab4c91eade69e569d347ce6c1d247fdb1;K8S_POD_UID=47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46" Path:"" ERRORED: error configuring pod [openshift-console/downloads-84f57b9877-j2x97] networking: Multus: [openshift-console/downloads-84f57b9877-j2x97/47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod downloads-84f57b9877-j2x97 in out of cluster comm: SetNetworkStatus: failed to update the pod downloads-84f57b9877-j2x97 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-84f57b9877-j2x97?timeout=1m0s": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 21:13:47.941206 master-0 kubenswrapper[31456]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 12 21:13:47.941206 master-0 kubenswrapper[31456]: > pod="openshift-console/downloads-84f57b9877-j2x97" Mar 12 21:13:47.941206 master-0 kubenswrapper[31456]: E0312 21:13:47.940633 31456 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"downloads-84f57b9877-j2x97_openshift-console(47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"downloads-84f57b9877-j2x97_openshift-console(47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_downloads-84f57b9877-j2x97_openshift-console_47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46_0(5b21f0f16dde0b34c88176f550c0519ab4c91eade69e569d347ce6c1d247fdb1): error adding pod openshift-console_downloads-84f57b9877-j2x97 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"5b21f0f16dde0b34c88176f550c0519ab4c91eade69e569d347ce6c1d247fdb1\\\" Netns:\\\"/var/run/netns/83667f56-9305-4ba5-a06b-b9f9737c35dc\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=downloads-84f57b9877-j2x97;K8S_POD_INFRA_CONTAINER_ID=5b21f0f16dde0b34c88176f550c0519ab4c91eade69e569d347ce6c1d247fdb1;K8S_POD_UID=47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-console/downloads-84f57b9877-j2x97] networking: Multus: [openshift-console/downloads-84f57b9877-j2x97/47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod downloads-84f57b9877-j2x97 in out of cluster comm: SetNetworkStatus: failed to update the pod downloads-84f57b9877-j2x97 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-84f57b9877-j2x97?timeout=1m0s\\\": dial tcp 192.168.32.10:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" 
pod="openshift-console/downloads-84f57b9877-j2x97" podUID="47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46"
Mar 12 21:13:48.316644 master-0 kubenswrapper[31456]: I0312 21:13:48.316520 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"a814bd60de133d95cf99630a978c017e","Type":"ContainerStarted","Data":"f3267c01a27c8f33d70e730907d70fecb449ec2951ac639e8c26e54233f1839b"}
Mar 12 21:13:48.316644 master-0 kubenswrapper[31456]: I0312 21:13:48.316616 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"a814bd60de133d95cf99630a978c017e","Type":"ContainerStarted","Data":"b790de8dd3c4ece1342287f8b78dc16e561ddf3c90cdc8f60d386a587037640c"}
Mar 12 21:13:48.318402 master-0 kubenswrapper[31456]: E0312 21:13:48.318295 31456 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 12 21:13:48.322027 master-0 kubenswrapper[31456]: I0312 21:13:48.321892 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_48512e02022680c9d90092634f0fc146/kube-apiserver-cert-syncer/0.log"
Mar 12 21:13:48.323506 master-0 kubenswrapper[31456]: I0312 21:13:48.323464 31456 generic.go:334] "Generic (PLEG): container finished" podID="48512e02022680c9d90092634f0fc146" containerID="4f144453d44ce86a0d7bd7fe15a62aadd5592eaf9c0618e7028c5d055870b33b" exitCode=0
Mar 12 21:13:48.323506 master-0 kubenswrapper[31456]: I0312 21:13:48.323502 31456 generic.go:334] "Generic (PLEG): container finished" podID="48512e02022680c9d90092634f0fc146" containerID="30bc9b247c27238c3eb4ad1976ad2cf0929403a4441faf1cefc74e18c8f37e98" exitCode=0
Mar 12 21:13:48.323915 master-0 kubenswrapper[31456]: I0312 21:13:48.323519 31456 generic.go:334] "Generic (PLEG): container finished" podID="48512e02022680c9d90092634f0fc146" containerID="fa08db51a6d0fb71252af3791bc9bb2d78f468b9196d06ba9f3e5e5c3d6b5f8f" exitCode=0
Mar 12 21:13:48.323915 master-0 kubenswrapper[31456]: I0312 21:13:48.323538 31456 generic.go:334] "Generic (PLEG): container finished" podID="48512e02022680c9d90092634f0fc146" containerID="e9eeff91b485b3f4abe88559591484bd0ad23b44d8b5e79acbd75e6b1fa6f5ae" exitCode=2
Mar 12 21:13:48.327123 master-0 kubenswrapper[31456]: I0312 21:13:48.327072 31456 generic.go:334] "Generic (PLEG): container finished" podID="c58a6a80-48e7-428e-be7a-d81dfc726450" containerID="93af0f8bf81872fda73aa5c8b6d081c27e1b632575de6e7f9a4fa29a0ae3365f" exitCode=0
Mar 12 21:13:48.328129 master-0 kubenswrapper[31456]: I0312 21:13:48.328030 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-84f57b9877-j2x97"
Mar 12 21:13:48.329254 master-0 kubenswrapper[31456]: I0312 21:13:48.328301 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"c58a6a80-48e7-428e-be7a-d81dfc726450","Type":"ContainerDied","Data":"93af0f8bf81872fda73aa5c8b6d081c27e1b632575de6e7f9a4fa29a0ae3365f"}
Mar 12 21:13:48.329254 master-0 kubenswrapper[31456]: I0312 21:13:48.328766 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-84f57b9877-j2x97"
Mar 12 21:13:48.330105 master-0 kubenswrapper[31456]: I0312 21:13:48.329866 31456 status_manager.go:851] "Failed to get status for pod" podUID="c58a6a80-48e7-428e-be7a-d81dfc726450" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 21:13:48.369871 master-0 kubenswrapper[31456]: I0312 21:13:48.369760 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0"
Mar 12 21:13:48.371160 master-0 kubenswrapper[31456]: I0312 21:13:48.371064 31456 status_manager.go:851] "Failed to get status for pod" podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 21:13:48.372154 master-0 kubenswrapper[31456]: I0312 21:13:48.372085 31456 status_manager.go:851] "Failed to get status for pod" podUID="c58a6a80-48e7-428e-be7a-d81dfc726450" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 21:13:48.503712 master-0 kubenswrapper[31456]: E0312 21:13:48.497971 31456 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 21:13:48.503712 master-0 kubenswrapper[31456]: E0312 21:13:48.499003 31456 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 21:13:48.503712 master-0 kubenswrapper[31456]: E0312 21:13:48.499947 31456 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 21:13:48.503712 master-0 kubenswrapper[31456]: E0312 21:13:48.500734 31456 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 21:13:48.503712 master-0 kubenswrapper[31456]: E0312 21:13:48.501797 31456 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 21:13:48.503712 master-0 kubenswrapper[31456]: I0312 21:13:48.501879 31456 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Mar 12 21:13:48.503712 master-0 kubenswrapper[31456]: E0312 21:13:48.502561 31456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms"
Mar 12 21:13:48.705102 master-0 kubenswrapper[31456]: E0312 21:13:48.704876 31456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial
tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Mar 12 21:13:49.000758 master-0 kubenswrapper[31456]: E0312 21:13:49.000651 31456 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 12 21:13:49.000758 master-0 kubenswrapper[31456]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downloads-84f57b9877-j2x97_openshift-console_47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46_0(ff164f5067e3444a43ace2caad193f6b1a936e9307682a5c81face7c9323510c): error adding pod openshift-console_downloads-84f57b9877-j2x97 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ff164f5067e3444a43ace2caad193f6b1a936e9307682a5c81face7c9323510c" Netns:"/var/run/netns/c356c4b6-f58c-4390-b27a-713d2182f158" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=downloads-84f57b9877-j2x97;K8S_POD_INFRA_CONTAINER_ID=ff164f5067e3444a43ace2caad193f6b1a936e9307682a5c81face7c9323510c;K8S_POD_UID=47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46" Path:"" ERRORED: error configuring pod [openshift-console/downloads-84f57b9877-j2x97] networking: Multus: [openshift-console/downloads-84f57b9877-j2x97/47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod downloads-84f57b9877-j2x97 in out of cluster comm: SetNetworkStatus: failed to update the pod downloads-84f57b9877-j2x97 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-84f57b9877-j2x97?timeout=1m0s": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 21:13:49.000758 master-0 kubenswrapper[31456]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 12 21:13:49.000758 master-0 kubenswrapper[31456]: > Mar 12 21:13:49.001001 master-0 kubenswrapper[31456]: E0312 21:13:49.000869 31456 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 12 21:13:49.001001 master-0 kubenswrapper[31456]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downloads-84f57b9877-j2x97_openshift-console_47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46_0(ff164f5067e3444a43ace2caad193f6b1a936e9307682a5c81face7c9323510c): error adding pod openshift-console_downloads-84f57b9877-j2x97 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ff164f5067e3444a43ace2caad193f6b1a936e9307682a5c81face7c9323510c" Netns:"/var/run/netns/c356c4b6-f58c-4390-b27a-713d2182f158" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=downloads-84f57b9877-j2x97;K8S_POD_INFRA_CONTAINER_ID=ff164f5067e3444a43ace2caad193f6b1a936e9307682a5c81face7c9323510c;K8S_POD_UID=47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46" Path:"" ERRORED: error configuring pod [openshift-console/downloads-84f57b9877-j2x97] networking: Multus: [openshift-console/downloads-84f57b9877-j2x97/47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod downloads-84f57b9877-j2x97 in out of cluster comm: SetNetworkStatus: failed to update the pod downloads-84f57b9877-j2x97 in out of cluster comm: status update failed for pod /: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-84f57b9877-j2x97?timeout=1m0s": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 21:13:49.001001 master-0 kubenswrapper[31456]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 12 21:13:49.001001 master-0 kubenswrapper[31456]: > pod="openshift-console/downloads-84f57b9877-j2x97" Mar 12 21:13:49.001001 master-0 kubenswrapper[31456]: E0312 21:13:49.000908 31456 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 12 21:13:49.001001 master-0 kubenswrapper[31456]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downloads-84f57b9877-j2x97_openshift-console_47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46_0(ff164f5067e3444a43ace2caad193f6b1a936e9307682a5c81face7c9323510c): error adding pod openshift-console_downloads-84f57b9877-j2x97 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ff164f5067e3444a43ace2caad193f6b1a936e9307682a5c81face7c9323510c" Netns:"/var/run/netns/c356c4b6-f58c-4390-b27a-713d2182f158" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=downloads-84f57b9877-j2x97;K8S_POD_INFRA_CONTAINER_ID=ff164f5067e3444a43ace2caad193f6b1a936e9307682a5c81face7c9323510c;K8S_POD_UID=47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46" Path:"" ERRORED: error configuring pod [openshift-console/downloads-84f57b9877-j2x97] networking: Multus: [openshift-console/downloads-84f57b9877-j2x97/47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46]: error setting the networks status: 
SetPodNetworkStatusAnnotation: failed to update the pod downloads-84f57b9877-j2x97 in out of cluster comm: SetNetworkStatus: failed to update the pod downloads-84f57b9877-j2x97 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-84f57b9877-j2x97?timeout=1m0s": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 21:13:49.001001 master-0 kubenswrapper[31456]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 12 21:13:49.001001 master-0 kubenswrapper[31456]: > pod="openshift-console/downloads-84f57b9877-j2x97" Mar 12 21:13:49.001308 master-0 kubenswrapper[31456]: E0312 21:13:49.001149 31456 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"downloads-84f57b9877-j2x97_openshift-console(47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"downloads-84f57b9877-j2x97_openshift-console(47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downloads-84f57b9877-j2x97_openshift-console_47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46_0(ff164f5067e3444a43ace2caad193f6b1a936e9307682a5c81face7c9323510c): error adding pod openshift-console_downloads-84f57b9877-j2x97 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"ff164f5067e3444a43ace2caad193f6b1a936e9307682a5c81face7c9323510c\\\" Netns:\\\"/var/run/netns/c356c4b6-f58c-4390-b27a-713d2182f158\\\" 
IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=downloads-84f57b9877-j2x97;K8S_POD_INFRA_CONTAINER_ID=ff164f5067e3444a43ace2caad193f6b1a936e9307682a5c81face7c9323510c;K8S_POD_UID=47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-console/downloads-84f57b9877-j2x97] networking: Multus: [openshift-console/downloads-84f57b9877-j2x97/47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod downloads-84f57b9877-j2x97 in out of cluster comm: SetNetworkStatus: failed to update the pod downloads-84f57b9877-j2x97 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-84f57b9877-j2x97?timeout=1m0s\\\": dial tcp 192.168.32.10:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-console/downloads-84f57b9877-j2x97" podUID="47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46" Mar 12 21:13:49.107213 master-0 kubenswrapper[31456]: E0312 21:13:49.107151 31456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Mar 12 21:13:49.178460 master-0 kubenswrapper[31456]: I0312 21:13:49.178347 31456 status_manager.go:851] "Failed to get status for 
pod" podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 21:13:49.179782 master-0 kubenswrapper[31456]: I0312 21:13:49.179727 31456 status_manager.go:851] "Failed to get status for pod" podUID="c58a6a80-48e7-428e-be7a-d81dfc726450" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 21:13:49.778291 master-0 kubenswrapper[31456]: I0312 21:13:49.777581 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0"
Mar 12 21:13:49.778770 master-0 kubenswrapper[31456]: I0312 21:13:49.778487 31456 status_manager.go:851] "Failed to get status for pod" podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 21:13:49.779313 master-0 kubenswrapper[31456]: I0312 21:13:49.779267 31456 status_manager.go:851] "Failed to get status for pod" podUID="c58a6a80-48e7-428e-be7a-d81dfc726450" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 21:13:49.908832 master-0 kubenswrapper[31456]: E0312 21:13:49.908750 31456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s"
Mar 12 21:13:49.927872 master-0 kubenswrapper[31456]: I0312 21:13:49.927778 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c58a6a80-48e7-428e-be7a-d81dfc726450-var-lock\") pod \"c58a6a80-48e7-428e-be7a-d81dfc726450\" (UID: \"c58a6a80-48e7-428e-be7a-d81dfc726450\") "
Mar 12 21:13:49.927999 master-0 kubenswrapper[31456]: I0312 21:13:49.927953 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c58a6a80-48e7-428e-be7a-d81dfc726450-kube-api-access\") pod \"c58a6a80-48e7-428e-be7a-d81dfc726450\" (UID: \"c58a6a80-48e7-428e-be7a-d81dfc726450\") "
Mar 12 21:13:49.928178 master-0 kubenswrapper[31456]: I0312 21:13:49.928134 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c58a6a80-48e7-428e-be7a-d81dfc726450-var-lock" (OuterVolumeSpecName: "var-lock") pod "c58a6a80-48e7-428e-be7a-d81dfc726450" (UID: "c58a6a80-48e7-428e-be7a-d81dfc726450"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 21:13:49.928285 master-0 kubenswrapper[31456]: I0312 21:13:49.928225 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c58a6a80-48e7-428e-be7a-d81dfc726450-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c58a6a80-48e7-428e-be7a-d81dfc726450" (UID: "c58a6a80-48e7-428e-be7a-d81dfc726450"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 21:13:49.928347 master-0 kubenswrapper[31456]: I0312 21:13:49.928163 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c58a6a80-48e7-428e-be7a-d81dfc726450-kubelet-dir\") pod \"c58a6a80-48e7-428e-be7a-d81dfc726450\" (UID: \"c58a6a80-48e7-428e-be7a-d81dfc726450\") "
Mar 12 21:13:49.929004 master-0 kubenswrapper[31456]: I0312 21:13:49.928957 31456 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c58a6a80-48e7-428e-be7a-d81dfc726450-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 12 21:13:49.929004 master-0 kubenswrapper[31456]: I0312 21:13:49.929001 31456 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c58a6a80-48e7-428e-be7a-d81dfc726450-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 12 21:13:49.930760 master-0 kubenswrapper[31456]: I0312 21:13:49.930696 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c58a6a80-48e7-428e-be7a-d81dfc726450-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c58a6a80-48e7-428e-be7a-d81dfc726450" (UID: "c58a6a80-48e7-428e-be7a-d81dfc726450"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 21:13:49.944379 master-0 kubenswrapper[31456]: I0312 21:13:49.944309 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_48512e02022680c9d90092634f0fc146/kube-apiserver-cert-syncer/0.log"
Mar 12 21:13:49.945708 master-0 kubenswrapper[31456]: I0312 21:13:49.945656 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 12 21:13:49.947142 master-0 kubenswrapper[31456]: I0312 21:13:49.947068 31456 status_manager.go:851] "Failed to get status for pod" podUID="c58a6a80-48e7-428e-be7a-d81dfc726450" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 21:13:49.948059 master-0 kubenswrapper[31456]: I0312 21:13:49.948004 31456 status_manager.go:851] "Failed to get status for pod" podUID="48512e02022680c9d90092634f0fc146" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 21:13:49.948883 master-0 kubenswrapper[31456]: I0312 21:13:49.948771 31456 status_manager.go:851] "Failed to get status for pod" podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 12 21:13:50.031062 master-0 kubenswrapper[31456]: I0312 21:13:50.030929 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir\") pod \"48512e02022680c9d90092634f0fc146\" (UID: \"48512e02022680c9d90092634f0fc146\") "
Mar 12 21:13:50.031480 master-0 kubenswrapper[31456]: I0312 21:13:50.031452 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir\") pod \"48512e02022680c9d90092634f0fc146\" (UID: 
\"48512e02022680c9d90092634f0fc146\") " Mar 12 21:13:50.031778 master-0 kubenswrapper[31456]: I0312 21:13:50.031013 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "48512e02022680c9d90092634f0fc146" (UID: "48512e02022680c9d90092634f0fc146"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:13:50.031778 master-0 kubenswrapper[31456]: I0312 21:13:50.031521 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "48512e02022680c9d90092634f0fc146" (UID: "48512e02022680c9d90092634f0fc146"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:13:50.032006 master-0 kubenswrapper[31456]: I0312 21:13:50.031910 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir\") pod \"48512e02022680c9d90092634f0fc146\" (UID: \"48512e02022680c9d90092634f0fc146\") " Mar 12 21:13:50.032256 master-0 kubenswrapper[31456]: I0312 21:13:50.031895 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "48512e02022680c9d90092634f0fc146" (UID: "48512e02022680c9d90092634f0fc146"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:13:50.032939 master-0 kubenswrapper[31456]: I0312 21:13:50.032900 31456 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 21:13:50.032939 master-0 kubenswrapper[31456]: I0312 21:13:50.032935 31456 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 21:13:50.033188 master-0 kubenswrapper[31456]: I0312 21:13:50.032954 31456 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 21:13:50.033188 master-0 kubenswrapper[31456]: I0312 21:13:50.032974 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c58a6a80-48e7-428e-be7a-d81dfc726450-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 21:13:50.353628 master-0 kubenswrapper[31456]: I0312 21:13:50.353531 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_48512e02022680c9d90092634f0fc146/kube-apiserver-cert-syncer/0.log" Mar 12 21:13:50.355154 master-0 kubenswrapper[31456]: I0312 21:13:50.355085 31456 generic.go:334] "Generic (PLEG): container finished" podID="48512e02022680c9d90092634f0fc146" containerID="2af82b5203922bffd1b52e551e34bf559e247f6df99d1b27190c8c1ceb99cc21" exitCode=0 Mar 12 21:13:50.355405 master-0 kubenswrapper[31456]: I0312 21:13:50.355251 31456 scope.go:117] "RemoveContainer" containerID="4f144453d44ce86a0d7bd7fe15a62aadd5592eaf9c0618e7028c5d055870b33b" Mar 12 21:13:50.355510 master-0 kubenswrapper[31456]: I0312 21:13:50.355280 31456 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:13:50.358958 master-0 kubenswrapper[31456]: I0312 21:13:50.358178 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"c58a6a80-48e7-428e-be7a-d81dfc726450","Type":"ContainerDied","Data":"10fe00b148d399daa19476f19f50961c2dbe6bb7c9880ab7ed25de75f1d968f5"} Mar 12 21:13:50.358958 master-0 kubenswrapper[31456]: I0312 21:13:50.358233 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10fe00b148d399daa19476f19f50961c2dbe6bb7c9880ab7ed25de75f1d968f5" Mar 12 21:13:50.358958 master-0 kubenswrapper[31456]: I0312 21:13:50.358320 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 12 21:13:50.383388 master-0 kubenswrapper[31456]: I0312 21:13:50.383303 31456 scope.go:117] "RemoveContainer" containerID="30bc9b247c27238c3eb4ad1976ad2cf0929403a4441faf1cefc74e18c8f37e98" Mar 12 21:13:50.396261 master-0 kubenswrapper[31456]: I0312 21:13:50.396205 31456 status_manager.go:851] "Failed to get status for pod" podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:13:50.397298 master-0 kubenswrapper[31456]: I0312 21:13:50.397231 31456 status_manager.go:851] "Failed to get status for pod" podUID="c58a6a80-48e7-428e-be7a-d81dfc726450" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:13:50.398658 master-0 kubenswrapper[31456]: I0312 21:13:50.398570 31456 status_manager.go:851] "Failed to get 
status for pod" podUID="48512e02022680c9d90092634f0fc146" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:13:50.409068 master-0 kubenswrapper[31456]: I0312 21:13:50.409016 31456 scope.go:117] "RemoveContainer" containerID="fa08db51a6d0fb71252af3791bc9bb2d78f468b9196d06ba9f3e5e5c3d6b5f8f" Mar 12 21:13:50.409634 master-0 kubenswrapper[31456]: I0312 21:13:50.409522 31456 status_manager.go:851] "Failed to get status for pod" podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:13:50.410716 master-0 kubenswrapper[31456]: I0312 21:13:50.410642 31456 status_manager.go:851] "Failed to get status for pod" podUID="c58a6a80-48e7-428e-be7a-d81dfc726450" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:13:50.411571 master-0 kubenswrapper[31456]: I0312 21:13:50.411489 31456 status_manager.go:851] "Failed to get status for pod" podUID="48512e02022680c9d90092634f0fc146" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:13:50.441921 master-0 kubenswrapper[31456]: I0312 21:13:50.441865 31456 scope.go:117] "RemoveContainer" containerID="e9eeff91b485b3f4abe88559591484bd0ad23b44d8b5e79acbd75e6b1fa6f5ae" Mar 12 21:13:50.468325 master-0 kubenswrapper[31456]: I0312 21:13:50.467945 31456 
scope.go:117] "RemoveContainer" containerID="2af82b5203922bffd1b52e551e34bf559e247f6df99d1b27190c8c1ceb99cc21" Mar 12 21:13:50.510747 master-0 kubenswrapper[31456]: I0312 21:13:50.510648 31456 scope.go:117] "RemoveContainer" containerID="ae2d426e85e9ca74fba20ed4929c9868f9bf891aa6e3acbc48f77b8fd37d7f60" Mar 12 21:13:50.545340 master-0 kubenswrapper[31456]: I0312 21:13:50.544085 31456 scope.go:117] "RemoveContainer" containerID="4f144453d44ce86a0d7bd7fe15a62aadd5592eaf9c0618e7028c5d055870b33b" Mar 12 21:13:50.545340 master-0 kubenswrapper[31456]: E0312 21:13:50.545118 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f144453d44ce86a0d7bd7fe15a62aadd5592eaf9c0618e7028c5d055870b33b\": container with ID starting with 4f144453d44ce86a0d7bd7fe15a62aadd5592eaf9c0618e7028c5d055870b33b not found: ID does not exist" containerID="4f144453d44ce86a0d7bd7fe15a62aadd5592eaf9c0618e7028c5d055870b33b" Mar 12 21:13:50.545340 master-0 kubenswrapper[31456]: I0312 21:13:50.545207 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f144453d44ce86a0d7bd7fe15a62aadd5592eaf9c0618e7028c5d055870b33b"} err="failed to get container status \"4f144453d44ce86a0d7bd7fe15a62aadd5592eaf9c0618e7028c5d055870b33b\": rpc error: code = NotFound desc = could not find container \"4f144453d44ce86a0d7bd7fe15a62aadd5592eaf9c0618e7028c5d055870b33b\": container with ID starting with 4f144453d44ce86a0d7bd7fe15a62aadd5592eaf9c0618e7028c5d055870b33b not found: ID does not exist" Mar 12 21:13:50.545340 master-0 kubenswrapper[31456]: I0312 21:13:50.545277 31456 scope.go:117] "RemoveContainer" containerID="30bc9b247c27238c3eb4ad1976ad2cf0929403a4441faf1cefc74e18c8f37e98" Mar 12 21:13:50.546283 master-0 kubenswrapper[31456]: E0312 21:13:50.546119 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"30bc9b247c27238c3eb4ad1976ad2cf0929403a4441faf1cefc74e18c8f37e98\": container with ID starting with 30bc9b247c27238c3eb4ad1976ad2cf0929403a4441faf1cefc74e18c8f37e98 not found: ID does not exist" containerID="30bc9b247c27238c3eb4ad1976ad2cf0929403a4441faf1cefc74e18c8f37e98" Mar 12 21:13:50.546283 master-0 kubenswrapper[31456]: I0312 21:13:50.546251 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30bc9b247c27238c3eb4ad1976ad2cf0929403a4441faf1cefc74e18c8f37e98"} err="failed to get container status \"30bc9b247c27238c3eb4ad1976ad2cf0929403a4441faf1cefc74e18c8f37e98\": rpc error: code = NotFound desc = could not find container \"30bc9b247c27238c3eb4ad1976ad2cf0929403a4441faf1cefc74e18c8f37e98\": container with ID starting with 30bc9b247c27238c3eb4ad1976ad2cf0929403a4441faf1cefc74e18c8f37e98 not found: ID does not exist" Mar 12 21:13:50.546520 master-0 kubenswrapper[31456]: I0312 21:13:50.546312 31456 scope.go:117] "RemoveContainer" containerID="fa08db51a6d0fb71252af3791bc9bb2d78f468b9196d06ba9f3e5e5c3d6b5f8f" Mar 12 21:13:50.546960 master-0 kubenswrapper[31456]: E0312 21:13:50.546890 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa08db51a6d0fb71252af3791bc9bb2d78f468b9196d06ba9f3e5e5c3d6b5f8f\": container with ID starting with fa08db51a6d0fb71252af3791bc9bb2d78f468b9196d06ba9f3e5e5c3d6b5f8f not found: ID does not exist" containerID="fa08db51a6d0fb71252af3791bc9bb2d78f468b9196d06ba9f3e5e5c3d6b5f8f" Mar 12 21:13:50.547300 master-0 kubenswrapper[31456]: I0312 21:13:50.546969 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa08db51a6d0fb71252af3791bc9bb2d78f468b9196d06ba9f3e5e5c3d6b5f8f"} err="failed to get container status \"fa08db51a6d0fb71252af3791bc9bb2d78f468b9196d06ba9f3e5e5c3d6b5f8f\": rpc error: code = NotFound desc = could not find container 
\"fa08db51a6d0fb71252af3791bc9bb2d78f468b9196d06ba9f3e5e5c3d6b5f8f\": container with ID starting with fa08db51a6d0fb71252af3791bc9bb2d78f468b9196d06ba9f3e5e5c3d6b5f8f not found: ID does not exist" Mar 12 21:13:50.547300 master-0 kubenswrapper[31456]: I0312 21:13:50.547053 31456 scope.go:117] "RemoveContainer" containerID="e9eeff91b485b3f4abe88559591484bd0ad23b44d8b5e79acbd75e6b1fa6f5ae" Mar 12 21:13:50.547667 master-0 kubenswrapper[31456]: E0312 21:13:50.547608 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9eeff91b485b3f4abe88559591484bd0ad23b44d8b5e79acbd75e6b1fa6f5ae\": container with ID starting with e9eeff91b485b3f4abe88559591484bd0ad23b44d8b5e79acbd75e6b1fa6f5ae not found: ID does not exist" containerID="e9eeff91b485b3f4abe88559591484bd0ad23b44d8b5e79acbd75e6b1fa6f5ae" Mar 12 21:13:50.547667 master-0 kubenswrapper[31456]: I0312 21:13:50.547665 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9eeff91b485b3f4abe88559591484bd0ad23b44d8b5e79acbd75e6b1fa6f5ae"} err="failed to get container status \"e9eeff91b485b3f4abe88559591484bd0ad23b44d8b5e79acbd75e6b1fa6f5ae\": rpc error: code = NotFound desc = could not find container \"e9eeff91b485b3f4abe88559591484bd0ad23b44d8b5e79acbd75e6b1fa6f5ae\": container with ID starting with e9eeff91b485b3f4abe88559591484bd0ad23b44d8b5e79acbd75e6b1fa6f5ae not found: ID does not exist" Mar 12 21:13:50.548081 master-0 kubenswrapper[31456]: I0312 21:13:50.547699 31456 scope.go:117] "RemoveContainer" containerID="2af82b5203922bffd1b52e551e34bf559e247f6df99d1b27190c8c1ceb99cc21" Mar 12 21:13:50.548433 master-0 kubenswrapper[31456]: E0312 21:13:50.548376 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2af82b5203922bffd1b52e551e34bf559e247f6df99d1b27190c8c1ceb99cc21\": container with ID starting with 
2af82b5203922bffd1b52e551e34bf559e247f6df99d1b27190c8c1ceb99cc21 not found: ID does not exist" containerID="2af82b5203922bffd1b52e551e34bf559e247f6df99d1b27190c8c1ceb99cc21" Mar 12 21:13:50.548524 master-0 kubenswrapper[31456]: I0312 21:13:50.548436 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2af82b5203922bffd1b52e551e34bf559e247f6df99d1b27190c8c1ceb99cc21"} err="failed to get container status \"2af82b5203922bffd1b52e551e34bf559e247f6df99d1b27190c8c1ceb99cc21\": rpc error: code = NotFound desc = could not find container \"2af82b5203922bffd1b52e551e34bf559e247f6df99d1b27190c8c1ceb99cc21\": container with ID starting with 2af82b5203922bffd1b52e551e34bf559e247f6df99d1b27190c8c1ceb99cc21 not found: ID does not exist" Mar 12 21:13:50.548524 master-0 kubenswrapper[31456]: I0312 21:13:50.548467 31456 scope.go:117] "RemoveContainer" containerID="ae2d426e85e9ca74fba20ed4929c9868f9bf891aa6e3acbc48f77b8fd37d7f60" Mar 12 21:13:50.549671 master-0 kubenswrapper[31456]: E0312 21:13:50.549540 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae2d426e85e9ca74fba20ed4929c9868f9bf891aa6e3acbc48f77b8fd37d7f60\": container with ID starting with ae2d426e85e9ca74fba20ed4929c9868f9bf891aa6e3acbc48f77b8fd37d7f60 not found: ID does not exist" containerID="ae2d426e85e9ca74fba20ed4929c9868f9bf891aa6e3acbc48f77b8fd37d7f60" Mar 12 21:13:50.549671 master-0 kubenswrapper[31456]: I0312 21:13:50.549620 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae2d426e85e9ca74fba20ed4929c9868f9bf891aa6e3acbc48f77b8fd37d7f60"} err="failed to get container status \"ae2d426e85e9ca74fba20ed4929c9868f9bf891aa6e3acbc48f77b8fd37d7f60\": rpc error: code = NotFound desc = could not find container \"ae2d426e85e9ca74fba20ed4929c9868f9bf891aa6e3acbc48f77b8fd37d7f60\": container with ID starting with 
ae2d426e85e9ca74fba20ed4929c9868f9bf891aa6e3acbc48f77b8fd37d7f60 not found: ID does not exist" Mar 12 21:13:51.188512 master-0 kubenswrapper[31456]: I0312 21:13:51.188438 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48512e02022680c9d90092634f0fc146" path="/var/lib/kubelet/pods/48512e02022680c9d90092634f0fc146/volumes" Mar 12 21:13:51.511007 master-0 kubenswrapper[31456]: E0312 21:13:51.510740 31456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Mar 12 21:13:52.504289 master-0 kubenswrapper[31456]: E0312 21:13:52.503974 31456 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-master-0.189c347451739d91 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-master-0,UID:48512e02022680c9d90092634f0fc146,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Killing,Message:Stopping container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 21:13:47.464514961 +0000 UTC m=+288.539120299,LastTimestamp:2026-03-12 21:13:47.464514961 +0000 UTC m=+288.539120299,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 21:13:54.713150 master-0 kubenswrapper[31456]: E0312 21:13:54.713048 31456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Mar 12 21:13:56.618591 master-0 kubenswrapper[31456]: E0312 21:13:56.618386 31456 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-master-0.189c347451739d91 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-master-0,UID:48512e02022680c9d90092634f0fc146,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Killing,Message:Stopping container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-12 21:13:47.464514961 +0000 UTC m=+288.539120299,LastTimestamp:2026-03-12 21:13:47.464514961 +0000 UTC m=+288.539120299,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 12 21:13:59.168485 master-0 kubenswrapper[31456]: I0312 21:13:59.168402 31456 kubelet.go:1505] "Image garbage collection succeeded" Mar 12 21:13:59.176220 master-0 kubenswrapper[31456]: I0312 21:13:59.176143 31456 status_manager.go:851] "Failed to get status for pod" podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:13:59.177109 master-0 kubenswrapper[31456]: I0312 21:13:59.177039 31456 status_manager.go:851] "Failed to get status for pod" 
podUID="c58a6a80-48e7-428e-be7a-d81dfc726450" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:14:01.114480 master-0 kubenswrapper[31456]: E0312 21:14:01.114402 31456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="7s" Mar 12 21:14:01.484093 master-0 kubenswrapper[31456]: I0312 21:14:01.484028 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/kube-controller-manager/1.log" Mar 12 21:14:01.485309 master-0 kubenswrapper[31456]: I0312 21:14:01.485259 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/cluster-policy-controller/5.log" Mar 12 21:14:01.487135 master-0 kubenswrapper[31456]: I0312 21:14:01.487092 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/kube-controller-manager/0.log" Mar 12 21:14:01.487284 master-0 kubenswrapper[31456]: I0312 21:14:01.487161 31456 generic.go:334] "Generic (PLEG): container finished" podID="7678a2e61b792fe3be55b1c6f67b2aa2" containerID="d60d46e4b651aaa6fc0f310f1cd525f43bd8602c132272870fb17e4bead2dcb6" exitCode=1 Mar 12 21:14:01.487284 master-0 kubenswrapper[31456]: I0312 21:14:01.487202 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"7678a2e61b792fe3be55b1c6f67b2aa2","Type":"ContainerDied","Data":"d60d46e4b651aaa6fc0f310f1cd525f43bd8602c132272870fb17e4bead2dcb6"} Mar 12 21:14:01.487284 master-0 kubenswrapper[31456]: I0312 21:14:01.487252 31456 scope.go:117] "RemoveContainer" containerID="d3c7faffe68717f40a0072b4ab6a64ec7cccad22e04a4674b15d395e19ec5ebe" Mar 12 21:14:01.488738 master-0 kubenswrapper[31456]: I0312 21:14:01.488674 31456 scope.go:117] "RemoveContainer" containerID="d60d46e4b651aaa6fc0f310f1cd525f43bd8602c132272870fb17e4bead2dcb6" Mar 12 21:14:01.489087 master-0 kubenswrapper[31456]: I0312 21:14:01.488988 31456 status_manager.go:851] "Failed to get status for pod" podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:14:01.490448 master-0 kubenswrapper[31456]: E0312 21:14:01.489862 31456 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7678a2e61b792fe3be55b1c6f67b2aa2)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" Mar 12 21:14:01.490853 master-0 kubenswrapper[31456]: I0312 21:14:01.490759 31456 status_manager.go:851] "Failed to get status for pod" podUID="c58a6a80-48e7-428e-be7a-d81dfc726450" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:14:01.492427 master-0 kubenswrapper[31456]: I0312 21:14:01.492347 31456 status_manager.go:851] "Failed to get status for pod" 
podUID="7678a2e61b792fe3be55b1c6f67b2aa2" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:14:02.500313 master-0 kubenswrapper[31456]: I0312 21:14:02.500189 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/kube-controller-manager/1.log" Mar 12 21:14:02.502144 master-0 kubenswrapper[31456]: I0312 21:14:02.502082 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/cluster-policy-controller/5.log" Mar 12 21:14:03.169287 master-0 kubenswrapper[31456]: I0312 21:14:03.169220 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:14:03.172541 master-0 kubenswrapper[31456]: I0312 21:14:03.171189 31456 status_manager.go:851] "Failed to get status for pod" podUID="c58a6a80-48e7-428e-be7a-d81dfc726450" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:14:03.172541 master-0 kubenswrapper[31456]: I0312 21:14:03.172176 31456 status_manager.go:851] "Failed to get status for pod" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:14:03.173956 master-0 kubenswrapper[31456]: I0312 
21:14:03.173023 31456 status_manager.go:851] "Failed to get status for pod" podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:14:03.194150 master-0 kubenswrapper[31456]: I0312 21:14:03.194092 31456 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="f00bf6ed-8795-4b8c-b36b-ec42642f70bf" Mar 12 21:14:03.194150 master-0 kubenswrapper[31456]: I0312 21:14:03.194143 31456 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="f00bf6ed-8795-4b8c-b36b-ec42642f70bf" Mar 12 21:14:03.195193 master-0 kubenswrapper[31456]: E0312 21:14:03.195134 31456 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:14:03.195774 master-0 kubenswrapper[31456]: I0312 21:14:03.195735 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:14:03.226760 master-0 kubenswrapper[31456]: W0312 21:14:03.226672 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36d4251d3504cdc0ec85144c1379056c.slice/crio-fd6c9d56a9e14bf939bbc7449c0ca1fd8877e90e78764d549669a161120b4a39 WatchSource:0}: Error finding container fd6c9d56a9e14bf939bbc7449c0ca1fd8877e90e78764d549669a161120b4a39: Status 404 returned error can't find the container with id fd6c9d56a9e14bf939bbc7449c0ca1fd8877e90e78764d549669a161120b4a39 Mar 12 21:14:03.403706 master-0 kubenswrapper[31456]: I0312 21:14:03.403638 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:14:03.404639 master-0 kubenswrapper[31456]: I0312 21:14:03.404513 31456 scope.go:117] "RemoveContainer" containerID="d60d46e4b651aaa6fc0f310f1cd525f43bd8602c132272870fb17e4bead2dcb6" Mar 12 21:14:03.405042 master-0 kubenswrapper[31456]: E0312 21:14:03.405002 31456 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7678a2e61b792fe3be55b1c6f67b2aa2)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" Mar 12 21:14:03.405428 master-0 kubenswrapper[31456]: I0312 21:14:03.405386 31456 status_manager.go:851] "Failed to get status for pod" podUID="c58a6a80-48e7-428e-be7a-d81dfc726450" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:14:03.406471 master-0 
kubenswrapper[31456]: I0312 21:14:03.406423 31456 status_manager.go:851] "Failed to get status for pod" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:14:03.407455 master-0 kubenswrapper[31456]: I0312 21:14:03.407403 31456 status_manager.go:851] "Failed to get status for pod" podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:14:03.522077 master-0 kubenswrapper[31456]: I0312 21:14:03.520545 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"36d4251d3504cdc0ec85144c1379056c","Type":"ContainerStarted","Data":"fd6c9d56a9e14bf939bbc7449c0ca1fd8877e90e78764d549669a161120b4a39"} Mar 12 21:14:04.169518 master-0 kubenswrapper[31456]: I0312 21:14:04.169433 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-84f57b9877-j2x97" Mar 12 21:14:04.170531 master-0 kubenswrapper[31456]: I0312 21:14:04.170481 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-84f57b9877-j2x97" Mar 12 21:14:04.535519 master-0 kubenswrapper[31456]: I0312 21:14:04.535343 31456 generic.go:334] "Generic (PLEG): container finished" podID="36d4251d3504cdc0ec85144c1379056c" containerID="14bd84dc9657c6f48e56e9598e92a2496dc53d499fd5007468ffa6c7069f2bc7" exitCode=0 Mar 12 21:14:04.535519 master-0 kubenswrapper[31456]: I0312 21:14:04.535412 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"36d4251d3504cdc0ec85144c1379056c","Type":"ContainerDied","Data":"14bd84dc9657c6f48e56e9598e92a2496dc53d499fd5007468ffa6c7069f2bc7"} Mar 12 21:14:04.536434 master-0 kubenswrapper[31456]: I0312 21:14:04.535841 31456 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="f00bf6ed-8795-4b8c-b36b-ec42642f70bf" Mar 12 21:14:04.536434 master-0 kubenswrapper[31456]: I0312 21:14:04.535886 31456 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="f00bf6ed-8795-4b8c-b36b-ec42642f70bf" Mar 12 21:14:04.536771 master-0 kubenswrapper[31456]: I0312 21:14:04.536694 31456 status_manager.go:851] "Failed to get status for pod" podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:14:04.536913 master-0 kubenswrapper[31456]: E0312 21:14:04.536835 31456 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:14:04.538333 master-0 kubenswrapper[31456]: I0312 21:14:04.537677 31456 
status_manager.go:851] "Failed to get status for pod" podUID="c58a6a80-48e7-428e-be7a-d81dfc726450" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:14:04.538760 master-0 kubenswrapper[31456]: I0312 21:14:04.538697 31456 status_manager.go:851] "Failed to get status for pod" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 12 21:14:05.003064 master-0 kubenswrapper[31456]: E0312 21:14:05.002954 31456 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 12 21:14:05.003064 master-0 kubenswrapper[31456]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downloads-84f57b9877-j2x97_openshift-console_47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46_0(b9c5a2228f64d3e5eda8bcf711d79b9dac79e30cd77d6c9533192cbda5015d28): error adding pod openshift-console_downloads-84f57b9877-j2x97 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b9c5a2228f64d3e5eda8bcf711d79b9dac79e30cd77d6c9533192cbda5015d28" Netns:"/var/run/netns/25bcad4e-0f0f-4e45-bf4b-bc2813203394" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=downloads-84f57b9877-j2x97;K8S_POD_INFRA_CONTAINER_ID=b9c5a2228f64d3e5eda8bcf711d79b9dac79e30cd77d6c9533192cbda5015d28;K8S_POD_UID=47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46" Path:"" ERRORED: error configuring pod [openshift-console/downloads-84f57b9877-j2x97] networking: Multus: 
[openshift-console/downloads-84f57b9877-j2x97/47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod downloads-84f57b9877-j2x97 in out of cluster comm: SetNetworkStatus: failed to update the pod downloads-84f57b9877-j2x97 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-84f57b9877-j2x97?timeout=1m0s": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 21:14:05.003064 master-0 kubenswrapper[31456]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 12 21:14:05.003064 master-0 kubenswrapper[31456]: > Mar 12 21:14:05.003575 master-0 kubenswrapper[31456]: E0312 21:14:05.003084 31456 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 12 21:14:05.003575 master-0 kubenswrapper[31456]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downloads-84f57b9877-j2x97_openshift-console_47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46_0(b9c5a2228f64d3e5eda8bcf711d79b9dac79e30cd77d6c9533192cbda5015d28): error adding pod openshift-console_downloads-84f57b9877-j2x97 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b9c5a2228f64d3e5eda8bcf711d79b9dac79e30cd77d6c9533192cbda5015d28" Netns:"/var/run/netns/25bcad4e-0f0f-4e45-bf4b-bc2813203394" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=downloads-84f57b9877-j2x97;K8S_POD_INFRA_CONTAINER_ID=b9c5a2228f64d3e5eda8bcf711d79b9dac79e30cd77d6c9533192cbda5015d28;K8S_POD_UID=47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46" Path:"" ERRORED: error configuring pod [openshift-console/downloads-84f57b9877-j2x97] networking: Multus: [openshift-console/downloads-84f57b9877-j2x97/47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod downloads-84f57b9877-j2x97 in out of cluster comm: SetNetworkStatus: failed to update the pod downloads-84f57b9877-j2x97 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-84f57b9877-j2x97?timeout=1m0s": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 21:14:05.003575 master-0 kubenswrapper[31456]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 12 21:14:05.003575 master-0 kubenswrapper[31456]: > pod="openshift-console/downloads-84f57b9877-j2x97" Mar 12 21:14:05.003575 master-0 kubenswrapper[31456]: E0312 21:14:05.003136 31456 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 12 21:14:05.003575 master-0 kubenswrapper[31456]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downloads-84f57b9877-j2x97_openshift-console_47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46_0(b9c5a2228f64d3e5eda8bcf711d79b9dac79e30cd77d6c9533192cbda5015d28): error adding pod openshift-console_downloads-84f57b9877-j2x97 to CNI network "multus-cni-network": plugin type="multus-shim" 
name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b9c5a2228f64d3e5eda8bcf711d79b9dac79e30cd77d6c9533192cbda5015d28" Netns:"/var/run/netns/25bcad4e-0f0f-4e45-bf4b-bc2813203394" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=downloads-84f57b9877-j2x97;K8S_POD_INFRA_CONTAINER_ID=b9c5a2228f64d3e5eda8bcf711d79b9dac79e30cd77d6c9533192cbda5015d28;K8S_POD_UID=47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46" Path:"" ERRORED: error configuring pod [openshift-console/downloads-84f57b9877-j2x97] networking: Multus: [openshift-console/downloads-84f57b9877-j2x97/47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod downloads-84f57b9877-j2x97 in out of cluster comm: SetNetworkStatus: failed to update the pod downloads-84f57b9877-j2x97 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-84f57b9877-j2x97?timeout=1m0s": dial tcp 192.168.32.10:6443: connect: connection refused Mar 12 21:14:05.003575 master-0 kubenswrapper[31456]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 12 21:14:05.003575 master-0 kubenswrapper[31456]: > pod="openshift-console/downloads-84f57b9877-j2x97" Mar 12 21:14:05.003575 master-0 kubenswrapper[31456]: E0312 21:14:05.003282 31456 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"downloads-84f57b9877-j2x97_openshift-console(47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"downloads-84f57b9877-j2x97_openshift-console(47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_downloads-84f57b9877-j2x97_openshift-console_47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46_0(b9c5a2228f64d3e5eda8bcf711d79b9dac79e30cd77d6c9533192cbda5015d28): error adding pod openshift-console_downloads-84f57b9877-j2x97 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"b9c5a2228f64d3e5eda8bcf711d79b9dac79e30cd77d6c9533192cbda5015d28\\\" Netns:\\\"/var/run/netns/25bcad4e-0f0f-4e45-bf4b-bc2813203394\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=downloads-84f57b9877-j2x97;K8S_POD_INFRA_CONTAINER_ID=b9c5a2228f64d3e5eda8bcf711d79b9dac79e30cd77d6c9533192cbda5015d28;K8S_POD_UID=47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-console/downloads-84f57b9877-j2x97] networking: Multus: [openshift-console/downloads-84f57b9877-j2x97/47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod downloads-84f57b9877-j2x97 in out of cluster comm: SetNetworkStatus: failed to update the pod downloads-84f57b9877-j2x97 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-84f57b9877-j2x97?timeout=1m0s\\\": dial tcp 192.168.32.10:6443: connect: connection refused\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-console/downloads-84f57b9877-j2x97" podUID="47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46" Mar 12 21:14:05.565162 master-0 kubenswrapper[31456]: I0312 21:14:05.565105 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"36d4251d3504cdc0ec85144c1379056c","Type":"ContainerStarted","Data":"ee7ba234ea7c290679b97aa71f3b947aadc87e09f925b21a03325242a9479bd3"} Mar 12 21:14:05.565162 master-0 kubenswrapper[31456]: I0312 21:14:05.565152 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"36d4251d3504cdc0ec85144c1379056c","Type":"ContainerStarted","Data":"3be5f373912ff4aa2b0ba3b3a328bd2f0831b8a662cfa00806c1761e2a60d9b1"} Mar 12 21:14:06.576371 master-0 kubenswrapper[31456]: I0312 21:14:06.576325 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"36d4251d3504cdc0ec85144c1379056c","Type":"ContainerStarted","Data":"a0828469a0c22817c5ca1faa37751ed43c46dd539a2b2842c5b8173e514009fe"} Mar 12 21:14:06.576371 master-0 kubenswrapper[31456]: I0312 21:14:06.576371 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"36d4251d3504cdc0ec85144c1379056c","Type":"ContainerStarted","Data":"d0f95026a1f4725ca484307f82c3b03abdb678c9d0eac96b3da14f2f41595444"} Mar 12 21:14:06.576371 master-0 kubenswrapper[31456]: I0312 21:14:06.576381 31456 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"36d4251d3504cdc0ec85144c1379056c","Type":"ContainerStarted","Data":"3b66d45bb12f22b270e74d10ff5a80a9b98aacaec43689e1dd88e237721c6648"} Mar 12 21:14:06.576939 master-0 kubenswrapper[31456]: I0312 21:14:06.576586 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:14:06.576939 master-0 kubenswrapper[31456]: I0312 21:14:06.576613 31456 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="f00bf6ed-8795-4b8c-b36b-ec42642f70bf" Mar 12 21:14:06.576939 master-0 kubenswrapper[31456]: I0312 21:14:06.576627 31456 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="f00bf6ed-8795-4b8c-b36b-ec42642f70bf" Mar 12 21:14:07.467934 master-0 kubenswrapper[31456]: I0312 21:14:07.467851 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:14:07.468611 master-0 kubenswrapper[31456]: I0312 21:14:07.468535 31456 scope.go:117] "RemoveContainer" containerID="d60d46e4b651aaa6fc0f310f1cd525f43bd8602c132272870fb17e4bead2dcb6" Mar 12 21:14:07.468913 master-0 kubenswrapper[31456]: E0312 21:14:07.468857 31456 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7678a2e61b792fe3be55b1c6f67b2aa2)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" Mar 12 21:14:08.196222 master-0 kubenswrapper[31456]: I0312 21:14:08.196139 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:14:08.197094 master-0 kubenswrapper[31456]: I0312 21:14:08.196360 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:14:08.208865 master-0 kubenswrapper[31456]: I0312 21:14:08.208760 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:14:09.388753 master-0 kubenswrapper[31456]: I0312 21:14:09.388674 31456 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:14:09.389859 master-0 kubenswrapper[31456]: I0312 21:14:09.389772 31456 scope.go:117] "RemoveContainer" containerID="d60d46e4b651aaa6fc0f310f1cd525f43bd8602c132272870fb17e4bead2dcb6" Mar 12 21:14:09.393987 master-0 kubenswrapper[31456]: E0312 21:14:09.393361 31456 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-master-0_openshift-kube-controller-manager(7678a2e61b792fe3be55b1c6f67b2aa2)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" Mar 12 21:14:10.472488 master-0 kubenswrapper[31456]: I0312 21:14:10.472424 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 21:14:10.609699 master-0 kubenswrapper[31456]: I0312 21:14:10.609636 31456 generic.go:334] "Generic (PLEG): container finished" podID="33beea0b-f77b-4388-a9c8-5710f084f961" containerID="41a3e30c6d901d9b64d6fa8e2b3f70dcb07dc618b579112d28d71b51408b9a9a" exitCode=0 Mar 12 21:14:10.609699 master-0 kubenswrapper[31456]: I0312 21:14:10.609700 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" event={"ID":"33beea0b-f77b-4388-a9c8-5710f084f961","Type":"ContainerDied","Data":"41a3e30c6d901d9b64d6fa8e2b3f70dcb07dc618b579112d28d71b51408b9a9a"} Mar 12 21:14:10.610012 master-0 kubenswrapper[31456]: I0312 21:14:10.609725 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" Mar 12 21:14:10.610012 master-0 kubenswrapper[31456]: I0312 21:14:10.609757 31456 scope.go:117] "RemoveContainer" containerID="41a3e30c6d901d9b64d6fa8e2b3f70dcb07dc618b579112d28d71b51408b9a9a" Mar 12 21:14:10.610012 master-0 kubenswrapper[31456]: I0312 21:14:10.609738 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-5bbfd655db-2tsb8" event={"ID":"33beea0b-f77b-4388-a9c8-5710f084f961","Type":"ContainerDied","Data":"c3b62ea86d8f9e58d8904eae05a729e79a10c095aa97e46111824c4941e548aa"} Mar 12 21:14:10.639063 master-0 kubenswrapper[31456]: I0312 21:14:10.639009 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clmjl\" (UniqueName: \"kubernetes.io/projected/33beea0b-f77b-4388-a9c8-5710f084f961-kube-api-access-clmjl\") pod \"33beea0b-f77b-4388-a9c8-5710f084f961\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " Mar 12 21:14:10.639229 master-0 kubenswrapper[31456]: I0312 21:14:10.639128 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/33beea0b-f77b-4388-a9c8-5710f084f961-audit-log\") pod \"33beea0b-f77b-4388-a9c8-5710f084f961\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " Mar 12 21:14:10.639229 master-0 kubenswrapper[31456]: I0312 21:14:10.639150 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-client-ca-bundle\") pod \"33beea0b-f77b-4388-a9c8-5710f084f961\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " Mar 12 21:14:10.639229 master-0 kubenswrapper[31456]: I0312 21:14:10.639180 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-secret-metrics-server-tls\") pod \"33beea0b-f77b-4388-a9c8-5710f084f961\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " Mar 12 21:14:10.639229 master-0 kubenswrapper[31456]: I0312 21:14:10.639228 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33beea0b-f77b-4388-a9c8-5710f084f961-configmap-kubelet-serving-ca-bundle\") pod \"33beea0b-f77b-4388-a9c8-5710f084f961\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " Mar 12 21:14:10.639490 master-0 kubenswrapper[31456]: I0312 21:14:10.639277 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-secret-metrics-client-certs\") pod \"33beea0b-f77b-4388-a9c8-5710f084f961\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " Mar 12 21:14:10.639490 master-0 kubenswrapper[31456]: I0312 21:14:10.639307 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: 
\"kubernetes.io/configmap/33beea0b-f77b-4388-a9c8-5710f084f961-metrics-server-audit-profiles\") pod \"33beea0b-f77b-4388-a9c8-5710f084f961\" (UID: \"33beea0b-f77b-4388-a9c8-5710f084f961\") " Mar 12 21:14:10.640231 master-0 kubenswrapper[31456]: I0312 21:14:10.640186 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33beea0b-f77b-4388-a9c8-5710f084f961-metrics-server-audit-profiles" (OuterVolumeSpecName: "metrics-server-audit-profiles") pod "33beea0b-f77b-4388-a9c8-5710f084f961" (UID: "33beea0b-f77b-4388-a9c8-5710f084f961"). InnerVolumeSpecName "metrics-server-audit-profiles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:14:10.640965 master-0 kubenswrapper[31456]: I0312 21:14:10.640880 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33beea0b-f77b-4388-a9c8-5710f084f961-audit-log" (OuterVolumeSpecName: "audit-log") pod "33beea0b-f77b-4388-a9c8-5710f084f961" (UID: "33beea0b-f77b-4388-a9c8-5710f084f961"). InnerVolumeSpecName "audit-log". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 21:14:10.641258 master-0 kubenswrapper[31456]: I0312 21:14:10.641187 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33beea0b-f77b-4388-a9c8-5710f084f961-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "33beea0b-f77b-4388-a9c8-5710f084f961" (UID: "33beea0b-f77b-4388-a9c8-5710f084f961"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:14:10.645243 master-0 kubenswrapper[31456]: I0312 21:14:10.645164 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-secret-metrics-server-tls" (OuterVolumeSpecName: "secret-metrics-server-tls") pod "33beea0b-f77b-4388-a9c8-5710f084f961" (UID: "33beea0b-f77b-4388-a9c8-5710f084f961"). InnerVolumeSpecName "secret-metrics-server-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:14:10.646485 master-0 kubenswrapper[31456]: I0312 21:14:10.646438 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33beea0b-f77b-4388-a9c8-5710f084f961-kube-api-access-clmjl" (OuterVolumeSpecName: "kube-api-access-clmjl") pod "33beea0b-f77b-4388-a9c8-5710f084f961" (UID: "33beea0b-f77b-4388-a9c8-5710f084f961"). InnerVolumeSpecName "kube-api-access-clmjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:14:10.647415 master-0 kubenswrapper[31456]: I0312 21:14:10.647376 31456 scope.go:117] "RemoveContainer" containerID="41a3e30c6d901d9b64d6fa8e2b3f70dcb07dc618b579112d28d71b51408b9a9a" Mar 12 21:14:10.647614 master-0 kubenswrapper[31456]: I0312 21:14:10.647517 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-client-ca-bundle" (OuterVolumeSpecName: "client-ca-bundle") pod "33beea0b-f77b-4388-a9c8-5710f084f961" (UID: "33beea0b-f77b-4388-a9c8-5710f084f961"). InnerVolumeSpecName "client-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:14:10.648376 master-0 kubenswrapper[31456]: E0312 21:14:10.648307 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41a3e30c6d901d9b64d6fa8e2b3f70dcb07dc618b579112d28d71b51408b9a9a\": container with ID starting with 41a3e30c6d901d9b64d6fa8e2b3f70dcb07dc618b579112d28d71b51408b9a9a not found: ID does not exist" containerID="41a3e30c6d901d9b64d6fa8e2b3f70dcb07dc618b579112d28d71b51408b9a9a" Mar 12 21:14:10.648469 master-0 kubenswrapper[31456]: I0312 21:14:10.648380 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41a3e30c6d901d9b64d6fa8e2b3f70dcb07dc618b579112d28d71b51408b9a9a"} err="failed to get container status \"41a3e30c6d901d9b64d6fa8e2b3f70dcb07dc618b579112d28d71b51408b9a9a\": rpc error: code = NotFound desc = could not find container \"41a3e30c6d901d9b64d6fa8e2b3f70dcb07dc618b579112d28d71b51408b9a9a\": container with ID starting with 41a3e30c6d901d9b64d6fa8e2b3f70dcb07dc618b579112d28d71b51408b9a9a not found: ID does not exist" Mar 12 21:14:10.648845 master-0 kubenswrapper[31456]: I0312 21:14:10.648755 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "33beea0b-f77b-4388-a9c8-5710f084f961" (UID: "33beea0b-f77b-4388-a9c8-5710f084f961"). InnerVolumeSpecName "secret-metrics-client-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:14:10.742095 master-0 kubenswrapper[31456]: I0312 21:14:10.741970 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clmjl\" (UniqueName: \"kubernetes.io/projected/33beea0b-f77b-4388-a9c8-5710f084f961-kube-api-access-clmjl\") on node \"master-0\" DevicePath \"\"" Mar 12 21:14:10.742095 master-0 kubenswrapper[31456]: I0312 21:14:10.742063 31456 reconciler_common.go:293] "Volume detached for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/33beea0b-f77b-4388-a9c8-5710f084f961-audit-log\") on node \"master-0\" DevicePath \"\"" Mar 12 21:14:10.742095 master-0 kubenswrapper[31456]: I0312 21:14:10.742090 31456 reconciler_common.go:293] "Volume detached for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-client-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:14:10.742380 master-0 kubenswrapper[31456]: I0312 21:14:10.742110 31456 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-secret-metrics-server-tls\") on node \"master-0\" DevicePath \"\"" Mar 12 21:14:10.742380 master-0 kubenswrapper[31456]: I0312 21:14:10.742134 31456 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33beea0b-f77b-4388-a9c8-5710f084f961-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:14:10.742380 master-0 kubenswrapper[31456]: I0312 21:14:10.742154 31456 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/33beea0b-f77b-4388-a9c8-5710f084f961-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\"" Mar 12 21:14:10.742380 master-0 kubenswrapper[31456]: I0312 21:14:10.742174 31456 reconciler_common.go:293] "Volume detached for volume 
\"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/33beea0b-f77b-4388-a9c8-5710f084f961-metrics-server-audit-profiles\") on node \"master-0\" DevicePath \"\"" Mar 12 21:14:11.590485 master-0 kubenswrapper[31456]: I0312 21:14:11.590426 31456 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:14:11.617199 master-0 kubenswrapper[31456]: I0312 21:14:11.617154 31456 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="f00bf6ed-8795-4b8c-b36b-ec42642f70bf" Mar 12 21:14:11.617199 master-0 kubenswrapper[31456]: I0312 21:14:11.617185 31456 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="f00bf6ed-8795-4b8c-b36b-ec42642f70bf" Mar 12 21:14:11.620847 master-0 kubenswrapper[31456]: I0312 21:14:11.620791 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:14:11.662858 master-0 kubenswrapper[31456]: I0312 21:14:11.662763 31456 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="36d4251d3504cdc0ec85144c1379056c" podUID="a51e149e-ef14-49ee-a47d-fec16c63a725" Mar 12 21:14:12.626911 master-0 kubenswrapper[31456]: I0312 21:14:12.626790 31456 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="f00bf6ed-8795-4b8c-b36b-ec42642f70bf" Mar 12 21:14:12.626911 master-0 kubenswrapper[31456]: I0312 21:14:12.626891 31456 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="f00bf6ed-8795-4b8c-b36b-ec42642f70bf" Mar 12 21:14:17.169531 master-0 kubenswrapper[31456]: I0312 21:14:17.169446 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-84f57b9877-j2x97" Mar 12 21:14:17.170830 master-0 kubenswrapper[31456]: I0312 21:14:17.170749 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-84f57b9877-j2x97" Mar 12 21:14:17.647071 master-0 kubenswrapper[31456]: W0312 21:14:17.647003 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47fc5bc0_b234_48bb_b8f8_5ef5e56f1a46.slice/crio-54ab158d137dbdced035183518362937da1d0b691e6ca220bed72c0ecfdcb728 WatchSource:0}: Error finding container 54ab158d137dbdced035183518362937da1d0b691e6ca220bed72c0ecfdcb728: Status 404 returned error can't find the container with id 54ab158d137dbdced035183518362937da1d0b691e6ca220bed72c0ecfdcb728 Mar 12 21:14:17.711783 master-0 kubenswrapper[31456]: I0312 21:14:17.689274 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-84f57b9877-j2x97" event={"ID":"47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46","Type":"ContainerStarted","Data":"54ab158d137dbdced035183518362937da1d0b691e6ca220bed72c0ecfdcb728"} Mar 12 21:14:19.196763 master-0 kubenswrapper[31456]: I0312 21:14:19.196665 31456 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="36d4251d3504cdc0ec85144c1379056c" podUID="a51e149e-ef14-49ee-a47d-fec16c63a725" Mar 12 21:14:21.193847 master-0 kubenswrapper[31456]: I0312 21:14:21.188711 31456 scope.go:117] "RemoveContainer" containerID="d60d46e4b651aaa6fc0f310f1cd525f43bd8602c132272870fb17e4bead2dcb6" Mar 12 21:14:21.729020 master-0 kubenswrapper[31456]: I0312 21:14:21.728875 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/kube-controller-manager/1.log" Mar 12 21:14:21.729955 master-0 kubenswrapper[31456]: 
I0312 21:14:21.729800 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/cluster-policy-controller/5.log" Mar 12 21:14:21.731481 master-0 kubenswrapper[31456]: I0312 21:14:21.730425 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"7678a2e61b792fe3be55b1c6f67b2aa2","Type":"ContainerStarted","Data":"0b060c904cf7244304798fca1e2e5fa54709b958c12481b7403d731a220633b8"} Mar 12 21:14:21.983844 master-0 kubenswrapper[31456]: I0312 21:14:21.983624 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 12 21:14:22.048902 master-0 kubenswrapper[31456]: I0312 21:14:22.047767 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-h7jv4" Mar 12 21:14:22.087585 master-0 kubenswrapper[31456]: I0312 21:14:22.087271 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 12 21:14:22.102617 master-0 kubenswrapper[31456]: I0312 21:14:22.102545 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 12 21:14:22.169257 master-0 kubenswrapper[31456]: I0312 21:14:22.168883 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 12 21:14:22.332428 master-0 kubenswrapper[31456]: I0312 21:14:22.332386 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 12 21:14:22.708537 master-0 kubenswrapper[31456]: I0312 21:14:22.708420 31456 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 12 21:14:22.715060 master-0 kubenswrapper[31456]: I0312 21:14:22.714976 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 12 21:14:23.062259 master-0 kubenswrapper[31456]: I0312 21:14:23.062088 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 12 21:14:23.170750 master-0 kubenswrapper[31456]: I0312 21:14:23.170696 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 12 21:14:23.244217 master-0 kubenswrapper[31456]: I0312 21:14:23.244153 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 12 21:14:23.260346 master-0 kubenswrapper[31456]: I0312 21:14:23.260293 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 12 21:14:23.405363 master-0 kubenswrapper[31456]: I0312 21:14:23.405281 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:14:23.703117 master-0 kubenswrapper[31456]: I0312 21:14:23.702945 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Mar 12 21:14:23.967386 master-0 kubenswrapper[31456]: I0312 21:14:23.967270 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Mar 12 21:14:24.062256 master-0 kubenswrapper[31456]: I0312 21:14:24.062219 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 12 21:14:24.183386 master-0 kubenswrapper[31456]: I0312 21:14:24.183326 31456 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 12 21:14:24.378931 master-0 kubenswrapper[31456]: I0312 21:14:24.378792 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 12 21:14:24.511598 master-0 kubenswrapper[31456]: I0312 21:14:24.511538 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 12 21:14:24.527770 master-0 kubenswrapper[31456]: I0312 21:14:24.527722 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Mar 12 21:14:24.647360 master-0 kubenswrapper[31456]: I0312 21:14:24.647182 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 12 21:14:24.658311 master-0 kubenswrapper[31456]: I0312 21:14:24.658228 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 12 21:14:24.703180 master-0 kubenswrapper[31456]: I0312 21:14:24.702436 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 12 21:14:24.745988 master-0 kubenswrapper[31456]: I0312 21:14:24.745863 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 12 21:14:24.768915 master-0 kubenswrapper[31456]: I0312 21:14:24.768804 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 12 21:14:24.946212 master-0 kubenswrapper[31456]: I0312 21:14:24.946078 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 12 21:14:24.946427 master-0 
kubenswrapper[31456]: I0312 21:14:24.946211 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Mar 12 21:14:25.015351 master-0 kubenswrapper[31456]: I0312 21:14:25.015288 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 12 21:14:25.151722 master-0 kubenswrapper[31456]: I0312 21:14:25.151681 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Mar 12 21:14:25.174455 master-0 kubenswrapper[31456]: I0312 21:14:25.174404 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 12 21:14:25.211212 master-0 kubenswrapper[31456]: I0312 21:14:25.211109 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 12 21:14:25.279501 master-0 kubenswrapper[31456]: I0312 21:14:25.279458 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 12 21:14:25.296413 master-0 kubenswrapper[31456]: I0312 21:14:25.296362 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 12 21:14:25.309550 master-0 kubenswrapper[31456]: I0312 21:14:25.309515 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 12 21:14:25.340763 master-0 kubenswrapper[31456]: I0312 21:14:25.340708 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 12 21:14:25.391375 master-0 kubenswrapper[31456]: I0312 21:14:25.391331 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 12 21:14:25.409479 
master-0 kubenswrapper[31456]: I0312 21:14:25.409372 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Mar 12 21:14:25.505024 master-0 kubenswrapper[31456]: I0312 21:14:25.504931 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 12 21:14:25.559678 master-0 kubenswrapper[31456]: I0312 21:14:25.559585 31456 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 12 21:14:25.613959 master-0 kubenswrapper[31456]: I0312 21:14:25.613685 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 12 21:14:25.685212 master-0 kubenswrapper[31456]: I0312 21:14:25.685131 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 12 21:14:25.722610 master-0 kubenswrapper[31456]: I0312 21:14:25.722519 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-rgtlp" Mar 12 21:14:25.750126 master-0 kubenswrapper[31456]: I0312 21:14:25.750047 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-6gf9b" Mar 12 21:14:26.072354 master-0 kubenswrapper[31456]: I0312 21:14:26.072290 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 12 21:14:26.102423 master-0 kubenswrapper[31456]: I0312 21:14:26.102353 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 12 21:14:26.229462 master-0 kubenswrapper[31456]: I0312 21:14:26.229376 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 12 21:14:26.271433 master-0 
kubenswrapper[31456]: I0312 21:14:26.271364 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 12 21:14:26.381964 master-0 kubenswrapper[31456]: I0312 21:14:26.381788 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-f29rj" Mar 12 21:14:26.428526 master-0 kubenswrapper[31456]: I0312 21:14:26.428448 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 12 21:14:26.462630 master-0 kubenswrapper[31456]: I0312 21:14:26.462530 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 12 21:14:26.485520 master-0 kubenswrapper[31456]: I0312 21:14:26.485396 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 12 21:14:26.496297 master-0 kubenswrapper[31456]: I0312 21:14:26.496258 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 12 21:14:26.513612 master-0 kubenswrapper[31456]: I0312 21:14:26.513543 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-9n54f" Mar 12 21:14:26.543351 master-0 kubenswrapper[31456]: I0312 21:14:26.543299 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Mar 12 21:14:26.581544 master-0 kubenswrapper[31456]: I0312 21:14:26.581473 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 12 21:14:26.582185 master-0 kubenswrapper[31456]: I0312 21:14:26.581589 31456 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 12 21:14:26.602695 master-0 kubenswrapper[31456]: I0312 21:14:26.602617 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 12 21:14:26.606948 master-0 kubenswrapper[31456]: I0312 21:14:26.606906 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 12 21:14:26.639794 master-0 kubenswrapper[31456]: I0312 21:14:26.639622 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 12 21:14:26.656111 master-0 kubenswrapper[31456]: I0312 21:14:26.656018 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 21:14:26.692922 master-0 kubenswrapper[31456]: I0312 21:14:26.692801 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Mar 12 21:14:26.741212 master-0 kubenswrapper[31456]: I0312 21:14:26.741118 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 12 21:14:26.762770 master-0 kubenswrapper[31456]: I0312 21:14:26.762703 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 12 21:14:26.772169 master-0 kubenswrapper[31456]: I0312 21:14:26.772114 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 12 21:14:26.788521 master-0 kubenswrapper[31456]: I0312 21:14:26.788418 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Mar 12 21:14:26.793220 master-0 kubenswrapper[31456]: I0312 21:14:26.793175 31456 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 12 21:14:26.841504 master-0 kubenswrapper[31456]: I0312 21:14:26.841446 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 12 21:14:26.956714 master-0 kubenswrapper[31456]: I0312 21:14:26.956564 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 12 21:14:27.000531 master-0 kubenswrapper[31456]: I0312 21:14:27.000460 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-f2k7z" Mar 12 21:14:27.027405 master-0 kubenswrapper[31456]: I0312 21:14:27.027193 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-7875j" Mar 12 21:14:27.096280 master-0 kubenswrapper[31456]: I0312 21:14:27.096071 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-6n7kf9fsvodvc" Mar 12 21:14:27.125896 master-0 kubenswrapper[31456]: I0312 21:14:27.125038 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 12 21:14:27.178308 master-0 kubenswrapper[31456]: I0312 21:14:27.178233 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 12 21:14:27.195747 master-0 kubenswrapper[31456]: I0312 21:14:27.195616 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 12 21:14:27.232987 master-0 kubenswrapper[31456]: I0312 21:14:27.232786 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 12 21:14:27.238789 master-0 kubenswrapper[31456]: I0312 21:14:27.238742 31456 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 12 21:14:27.273929 master-0 kubenswrapper[31456]: I0312 21:14:27.273835 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 12 21:14:27.300752 master-0 kubenswrapper[31456]: I0312 21:14:27.300698 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Mar 12 21:14:27.325445 master-0 kubenswrapper[31456]: I0312 21:14:27.325376 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 12 21:14:27.347584 master-0 kubenswrapper[31456]: I0312 21:14:27.347498 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 12 21:14:27.400684 master-0 kubenswrapper[31456]: I0312 21:14:27.400579 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 12 21:14:27.443138 master-0 kubenswrapper[31456]: I0312 21:14:27.443035 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-7t6bk" Mar 12 21:14:27.467937 master-0 kubenswrapper[31456]: I0312 21:14:27.467871 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:14:27.475986 master-0 kubenswrapper[31456]: I0312 21:14:27.475929 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:14:27.529530 master-0 kubenswrapper[31456]: I0312 21:14:27.529251 31456 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-operator-tls" Mar 12 21:14:27.543849 master-0 kubenswrapper[31456]: I0312 21:14:27.543768 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 12 21:14:27.616770 master-0 kubenswrapper[31456]: I0312 21:14:27.616714 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Mar 12 21:14:27.710222 master-0 kubenswrapper[31456]: I0312 21:14:27.710069 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 12 21:14:27.791888 master-0 kubenswrapper[31456]: I0312 21:14:27.791629 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-r4pnh" Mar 12 21:14:27.825772 master-0 kubenswrapper[31456]: I0312 21:14:27.825701 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 12 21:14:27.846065 master-0 kubenswrapper[31456]: I0312 21:14:27.845924 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 12 21:14:28.128263 master-0 kubenswrapper[31456]: I0312 21:14:28.128169 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 12 21:14:28.137121 master-0 kubenswrapper[31456]: I0312 21:14:28.137053 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 12 21:14:28.157658 master-0 kubenswrapper[31456]: I0312 21:14:28.157408 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 12 21:14:28.164200 master-0 kubenswrapper[31456]: I0312 21:14:28.164123 31456 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 12 21:14:28.235098 master-0 kubenswrapper[31456]: I0312 21:14:28.234992 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-qthpm" Mar 12 21:14:28.277712 master-0 kubenswrapper[31456]: I0312 21:14:28.277106 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 21:14:28.398801 master-0 kubenswrapper[31456]: I0312 21:14:28.398587 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-bxh97" Mar 12 21:14:28.432764 master-0 kubenswrapper[31456]: I0312 21:14:28.432686 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 12 21:14:28.446434 master-0 kubenswrapper[31456]: I0312 21:14:28.446406 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Mar 12 21:14:28.459159 master-0 kubenswrapper[31456]: I0312 21:14:28.459110 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 12 21:14:28.508384 master-0 kubenswrapper[31456]: I0312 21:14:28.508309 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 12 21:14:28.611783 master-0 kubenswrapper[31456]: I0312 21:14:28.611700 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 12 21:14:28.612062 master-0 kubenswrapper[31456]: I0312 21:14:28.611959 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 12 21:14:28.656570 master-0 kubenswrapper[31456]: I0312 21:14:28.656429 31456 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 21:14:28.664123 master-0 kubenswrapper[31456]: I0312 21:14:28.664067 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 12 21:14:28.689774 master-0 kubenswrapper[31456]: I0312 21:14:28.689702 31456 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 12 21:14:28.703161 master-0 kubenswrapper[31456]: I0312 21:14:28.703071 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0","openshift-monitoring/metrics-server-5bbfd655db-2tsb8"] Mar 12 21:14:28.703345 master-0 kubenswrapper[31456]: I0312 21:14:28.703206 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 12 21:14:28.703345 master-0 kubenswrapper[31456]: I0312 21:14:28.703235 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-84f57b9877-j2x97"] Mar 12 21:14:28.712798 master-0 kubenswrapper[31456]: I0312 21:14:28.712716 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 12 21:14:28.734923 master-0 kubenswrapper[31456]: I0312 21:14:28.734798 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=17.734779248 podStartE2EDuration="17.734779248s" podCreationTimestamp="2026-03-12 21:14:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:14:28.729612042 +0000 UTC m=+329.804217400" watchObservedRunningTime="2026-03-12 21:14:28.734779248 +0000 UTC m=+329.809384586" Mar 12 21:14:28.738775 master-0 kubenswrapper[31456]: I0312 
21:14:28.738675 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-kj7kz" Mar 12 21:14:28.793234 master-0 kubenswrapper[31456]: I0312 21:14:28.793106 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 12 21:14:28.800999 master-0 kubenswrapper[31456]: I0312 21:14:28.800948 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 12 21:14:28.890482 master-0 kubenswrapper[31456]: I0312 21:14:28.890392 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 12 21:14:28.901660 master-0 kubenswrapper[31456]: I0312 21:14:28.901604 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 12 21:14:28.924533 master-0 kubenswrapper[31456]: I0312 21:14:28.924392 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 21:14:28.966920 master-0 kubenswrapper[31456]: I0312 21:14:28.966857 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 12 21:14:28.993251 master-0 kubenswrapper[31456]: I0312 21:14:28.993172 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 12 21:14:28.993385 master-0 kubenswrapper[31456]: I0312 21:14:28.993317 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 12 21:14:29.019446 master-0 kubenswrapper[31456]: I0312 21:14:29.019410 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 12 21:14:29.078801 master-0 kubenswrapper[31456]: I0312 
21:14:29.078749 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-v7qw9" Mar 12 21:14:29.116419 master-0 kubenswrapper[31456]: I0312 21:14:29.116271 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Mar 12 21:14:29.119234 master-0 kubenswrapper[31456]: I0312 21:14:29.119185 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 12 21:14:29.140207 master-0 kubenswrapper[31456]: I0312 21:14:29.140145 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 12 21:14:29.145847 master-0 kubenswrapper[31456]: I0312 21:14:29.145757 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Mar 12 21:14:29.161902 master-0 kubenswrapper[31456]: I0312 21:14:29.161862 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 12 21:14:29.179399 master-0 kubenswrapper[31456]: I0312 21:14:29.179286 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33beea0b-f77b-4388-a9c8-5710f084f961" path="/var/lib/kubelet/pods/33beea0b-f77b-4388-a9c8-5710f084f961/volumes" Mar 12 21:14:29.221548 master-0 kubenswrapper[31456]: I0312 21:14:29.221481 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 12 21:14:29.259261 master-0 kubenswrapper[31456]: I0312 21:14:29.259207 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 12 21:14:29.261172 master-0 kubenswrapper[31456]: I0312 21:14:29.261122 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-n68ff" Mar 12 
21:14:29.334027 master-0 kubenswrapper[31456]: I0312 21:14:29.333936 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 12 21:14:29.340873 master-0 kubenswrapper[31456]: I0312 21:14:29.340836 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 12 21:14:29.352782 master-0 kubenswrapper[31456]: I0312 21:14:29.352725 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Mar 12 21:14:29.415614 master-0 kubenswrapper[31456]: I0312 21:14:29.415531 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 12 21:14:29.467030 master-0 kubenswrapper[31456]: I0312 21:14:29.466895 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 12 21:14:29.540657 master-0 kubenswrapper[31456]: I0312 21:14:29.540586 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 12 21:14:29.564655 master-0 kubenswrapper[31456]: I0312 21:14:29.564588 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Mar 12 21:14:29.590886 master-0 kubenswrapper[31456]: I0312 21:14:29.584856 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 12 21:14:29.656433 master-0 kubenswrapper[31456]: I0312 21:14:29.656371 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 12 21:14:29.716167 master-0 kubenswrapper[31456]: I0312 21:14:29.716107 31456 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 12 21:14:29.727303 master-0 kubenswrapper[31456]: I0312 21:14:29.727179 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Mar 12 21:14:29.767375 master-0 kubenswrapper[31456]: I0312 21:14:29.767318 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 12 21:14:29.830012 master-0 kubenswrapper[31456]: I0312 21:14:29.829940 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 12 21:14:29.904000 master-0 kubenswrapper[31456]: I0312 21:14:29.903905 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 12 21:14:29.927834 master-0 kubenswrapper[31456]: I0312 21:14:29.927746 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 12 21:14:29.928982 master-0 kubenswrapper[31456]: I0312 21:14:29.928930 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 21:14:29.985930 master-0 kubenswrapper[31456]: I0312 21:14:29.983896 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 12 21:14:29.991016 master-0 kubenswrapper[31456]: I0312 21:14:29.990548 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 12 21:14:29.995965 master-0 kubenswrapper[31456]: I0312 21:14:29.995934 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Mar 12 21:14:30.006599 master-0 kubenswrapper[31456]: I0312 21:14:30.006566 31456 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-config-operator"/"kube-root-ca.crt" Mar 12 21:14:30.034506 master-0 kubenswrapper[31456]: I0312 21:14:30.034439 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Mar 12 21:14:30.087739 master-0 kubenswrapper[31456]: I0312 21:14:30.087667 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 12 21:14:30.097114 master-0 kubenswrapper[31456]: I0312 21:14:30.097057 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 12 21:14:30.122215 master-0 kubenswrapper[31456]: I0312 21:14:30.122161 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 12 21:14:30.138050 master-0 kubenswrapper[31456]: I0312 21:14:30.137995 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 12 21:14:30.147285 master-0 kubenswrapper[31456]: I0312 21:14:30.147256 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 12 21:14:30.186941 master-0 kubenswrapper[31456]: I0312 21:14:30.186892 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-fvjb30sfen171" Mar 12 21:14:30.230933 master-0 kubenswrapper[31456]: I0312 21:14:30.230874 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 21:14:30.399495 master-0 kubenswrapper[31456]: I0312 21:14:30.399438 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 12 21:14:30.452529 master-0 kubenswrapper[31456]: I0312 21:14:30.452476 31456 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 12 21:14:30.462923 master-0 kubenswrapper[31456]: I0312 21:14:30.462864 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 12 21:14:30.614989 master-0 kubenswrapper[31456]: I0312 21:14:30.614945 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 12 21:14:30.616943 master-0 kubenswrapper[31456]: I0312 21:14:30.616924 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 12 21:14:30.677775 master-0 kubenswrapper[31456]: I0312 21:14:30.677699 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Mar 12 21:14:30.710368 master-0 kubenswrapper[31456]: I0312 21:14:30.710312 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 12 21:14:30.717233 master-0 kubenswrapper[31456]: I0312 21:14:30.717180 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 12 21:14:30.748941 master-0 kubenswrapper[31456]: I0312 21:14:30.748883 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-5j2qf" Mar 12 21:14:30.842011 master-0 kubenswrapper[31456]: I0312 21:14:30.841938 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 12 21:14:30.858425 master-0 kubenswrapper[31456]: I0312 21:14:30.858363 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 12 21:14:30.880438 master-0 kubenswrapper[31456]: I0312 21:14:30.880382 31456 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 12 21:14:30.961736 master-0 kubenswrapper[31456]: I0312 21:14:30.961627 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 12 21:14:30.988418 master-0 kubenswrapper[31456]: I0312 21:14:30.988342 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 12 21:14:31.008975 master-0 kubenswrapper[31456]: I0312 21:14:31.008930 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Mar 12 21:14:31.038746 master-0 kubenswrapper[31456]: I0312 21:14:31.038677 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 12 21:14:31.098220 master-0 kubenswrapper[31456]: I0312 21:14:31.098153 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 12 21:14:31.129470 master-0 kubenswrapper[31456]: I0312 21:14:31.129409 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 12 21:14:31.154512 master-0 kubenswrapper[31456]: I0312 21:14:31.154449 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-cdrqx" Mar 12 21:14:31.297729 master-0 kubenswrapper[31456]: I0312 21:14:31.297609 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 12 21:14:31.305466 master-0 kubenswrapper[31456]: I0312 21:14:31.305417 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 12 21:14:31.380380 master-0 kubenswrapper[31456]: I0312 21:14:31.380302 31456 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 12 21:14:31.393861 master-0 kubenswrapper[31456]: I0312 21:14:31.393822 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-vmm2r" Mar 12 21:14:31.409332 master-0 kubenswrapper[31456]: I0312 21:14:31.409297 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 12 21:14:31.410049 master-0 kubenswrapper[31456]: I0312 21:14:31.410011 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 12 21:14:31.430113 master-0 kubenswrapper[31456]: I0312 21:14:31.430036 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 12 21:14:31.445800 master-0 kubenswrapper[31456]: I0312 21:14:31.445762 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 12 21:14:31.464030 master-0 kubenswrapper[31456]: I0312 21:14:31.463985 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 12 21:14:31.530421 master-0 kubenswrapper[31456]: I0312 21:14:31.527862 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 12 21:14:31.577438 master-0 kubenswrapper[31456]: I0312 21:14:31.577254 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Mar 12 21:14:31.581939 master-0 kubenswrapper[31456]: I0312 21:14:31.581892 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 12 21:14:31.639786 master-0 kubenswrapper[31456]: 
I0312 21:14:31.639730 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 12 21:14:31.650503 master-0 kubenswrapper[31456]: I0312 21:14:31.650456 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 12 21:14:31.685508 master-0 kubenswrapper[31456]: I0312 21:14:31.685403 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 12 21:14:31.709902 master-0 kubenswrapper[31456]: I0312 21:14:31.709851 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 12 21:14:31.884909 master-0 kubenswrapper[31456]: I0312 21:14:31.884718 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 12 21:14:31.939461 master-0 kubenswrapper[31456]: I0312 21:14:31.939430 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 12 21:14:32.013163 master-0 kubenswrapper[31456]: I0312 21:14:32.013084 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 12 21:14:32.029465 master-0 kubenswrapper[31456]: I0312 21:14:32.029428 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 12 21:14:32.055240 master-0 kubenswrapper[31456]: I0312 21:14:32.055217 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-zfxcx" Mar 12 21:14:32.056238 master-0 kubenswrapper[31456]: I0312 21:14:32.056182 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 12 21:14:32.056563 master-0 kubenswrapper[31456]: I0312 21:14:32.056519 
31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 12 21:14:32.151796 master-0 kubenswrapper[31456]: I0312 21:14:32.151673 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 12 21:14:32.199054 master-0 kubenswrapper[31456]: I0312 21:14:32.198682 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Mar 12 21:14:32.243544 master-0 kubenswrapper[31456]: I0312 21:14:32.243488 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 12 21:14:32.321240 master-0 kubenswrapper[31456]: I0312 21:14:32.321167 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 12 21:14:32.333277 master-0 kubenswrapper[31456]: I0312 21:14:32.333220 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 12 21:14:32.338612 master-0 kubenswrapper[31456]: I0312 21:14:32.338553 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 12 21:14:32.389351 master-0 kubenswrapper[31456]: I0312 21:14:32.389280 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 12 21:14:32.460914 master-0 kubenswrapper[31456]: I0312 21:14:32.460724 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 12 21:14:32.461694 master-0 kubenswrapper[31456]: I0312 21:14:32.461643 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 12 21:14:32.462994 master-0 kubenswrapper[31456]: I0312 21:14:32.462945 
31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 12 21:14:32.524087 master-0 kubenswrapper[31456]: I0312 21:14:32.524028 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 12 21:14:32.556836 master-0 kubenswrapper[31456]: I0312 21:14:32.556768 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 12 21:14:32.580701 master-0 kubenswrapper[31456]: I0312 21:14:32.580653 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-p5qt4" Mar 12 21:14:32.580701 master-0 kubenswrapper[31456]: I0312 21:14:32.580663 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 12 21:14:32.636738 master-0 kubenswrapper[31456]: I0312 21:14:32.636679 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 12 21:14:32.719392 master-0 kubenswrapper[31456]: I0312 21:14:32.719261 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 12 21:14:32.753155 master-0 kubenswrapper[31456]: I0312 21:14:32.753094 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 12 21:14:32.775630 master-0 kubenswrapper[31456]: I0312 21:14:32.775594 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 12 21:14:32.805385 master-0 kubenswrapper[31456]: I0312 21:14:32.805331 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 12 21:14:32.832353 master-0 kubenswrapper[31456]: I0312 21:14:32.832303 31456 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 12 21:14:32.837423 master-0 kubenswrapper[31456]: I0312 21:14:32.837400 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 12 21:14:32.925877 master-0 kubenswrapper[31456]: I0312 21:14:32.925800 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Mar 12 21:14:32.965593 master-0 kubenswrapper[31456]: I0312 21:14:32.965547 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 12 21:14:32.971629 master-0 kubenswrapper[31456]: I0312 21:14:32.971559 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 12 21:14:33.041834 master-0 kubenswrapper[31456]: I0312 21:14:33.041748 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 12 21:14:33.049462 master-0 kubenswrapper[31456]: I0312 21:14:33.049421 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 12 21:14:33.086889 master-0 kubenswrapper[31456]: I0312 21:14:33.086838 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 21:14:33.152540 master-0 kubenswrapper[31456]: I0312 21:14:33.152476 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 12 21:14:33.184390 master-0 kubenswrapper[31456]: I0312 21:14:33.184317 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-5m6kx" Mar 12 21:14:33.279162 master-0 kubenswrapper[31456]: I0312 21:14:33.279028 31456 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 12 21:14:33.351322 master-0 kubenswrapper[31456]: I0312 21:14:33.351254 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 12 21:14:33.378986 master-0 kubenswrapper[31456]: I0312 21:14:33.378925 31456 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 12 21:14:33.406950 master-0 kubenswrapper[31456]: I0312 21:14:33.406896 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:14:33.433697 master-0 kubenswrapper[31456]: I0312 21:14:33.433645 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 12 21:14:33.577122 master-0 kubenswrapper[31456]: I0312 21:14:33.576895 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 12 21:14:33.678942 master-0 kubenswrapper[31456]: I0312 21:14:33.678709 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 12 21:14:33.790675 master-0 kubenswrapper[31456]: I0312 21:14:33.790637 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 12 21:14:33.897987 master-0 kubenswrapper[31456]: I0312 21:14:33.896385 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 12 21:14:33.907612 master-0 kubenswrapper[31456]: I0312 21:14:33.907588 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 12 21:14:33.962650 master-0 kubenswrapper[31456]: 
I0312 21:14:33.962561 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 12 21:14:33.965650 master-0 kubenswrapper[31456]: I0312 21:14:33.965571 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 12 21:14:33.990696 master-0 kubenswrapper[31456]: I0312 21:14:33.990554 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 12 21:14:34.104853 master-0 kubenswrapper[31456]: I0312 21:14:34.104760 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 12 21:14:34.143429 master-0 kubenswrapper[31456]: I0312 21:14:34.143335 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Mar 12 21:14:34.228275 master-0 kubenswrapper[31456]: I0312 21:14:34.228042 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 12 21:14:34.245637 master-0 kubenswrapper[31456]: I0312 21:14:34.245521 31456 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 12 21:14:34.246299 master-0 kubenswrapper[31456]: I0312 21:14:34.245918 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="a814bd60de133d95cf99630a978c017e" containerName="startup-monitor" containerID="cri-o://f3267c01a27c8f33d70e730907d70fecb449ec2951ac639e8c26e54233f1839b" gracePeriod=5 Mar 12 21:14:34.346048 master-0 kubenswrapper[31456]: I0312 21:14:34.342413 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-vr86d" Mar 12 21:14:34.346048 master-0 
kubenswrapper[31456]: I0312 21:14:34.343679 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 12 21:14:34.504660 master-0 kubenswrapper[31456]: I0312 21:14:34.504505 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Mar 12 21:14:34.525551 master-0 kubenswrapper[31456]: I0312 21:14:34.525413 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 12 21:14:34.536432 master-0 kubenswrapper[31456]: I0312 21:14:34.536397 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Mar 12 21:14:34.538388 master-0 kubenswrapper[31456]: I0312 21:14:34.537371 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 12 21:14:34.575880 master-0 kubenswrapper[31456]: I0312 21:14:34.575802 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 12 21:14:34.627512 master-0 kubenswrapper[31456]: I0312 21:14:34.627433 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 12 21:14:34.675863 master-0 kubenswrapper[31456]: I0312 21:14:34.675796 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 12 21:14:34.923541 master-0 kubenswrapper[31456]: I0312 21:14:34.921380 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 12 21:14:34.939070 master-0 kubenswrapper[31456]: I0312 21:14:34.939025 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 12 21:14:35.068754 master-0 kubenswrapper[31456]: I0312 
21:14:35.068516 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 12 21:14:35.219177 master-0 kubenswrapper[31456]: I0312 21:14:35.218861 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 12 21:14:35.236033 master-0 kubenswrapper[31456]: I0312 21:14:35.235320 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 12 21:14:35.276706 master-0 kubenswrapper[31456]: I0312 21:14:35.276591 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 12 21:14:35.482318 master-0 kubenswrapper[31456]: I0312 21:14:35.482033 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 12 21:14:35.537831 master-0 kubenswrapper[31456]: I0312 21:14:35.532627 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 12 21:14:35.751186 master-0 kubenswrapper[31456]: I0312 21:14:35.751032 31456 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 12 21:14:35.768613 master-0 kubenswrapper[31456]: I0312 21:14:35.768566 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-qh6sj" Mar 12 21:14:35.793539 master-0 kubenswrapper[31456]: I0312 21:14:35.793438 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 12 21:14:35.831502 master-0 kubenswrapper[31456]: I0312 21:14:35.831154 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 12 21:14:35.839167 master-0 kubenswrapper[31456]: I0312 21:14:35.839131 31456 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 12 21:14:35.904373 master-0 kubenswrapper[31456]: I0312 21:14:35.904306 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-t5dxh" Mar 12 21:14:35.912919 master-0 kubenswrapper[31456]: I0312 21:14:35.912882 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 12 21:14:35.936131 master-0 kubenswrapper[31456]: I0312 21:14:35.936032 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 12 21:14:35.938703 master-0 kubenswrapper[31456]: I0312 21:14:35.938680 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 12 21:14:35.977282 master-0 kubenswrapper[31456]: I0312 21:14:35.977238 31456 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 12 21:14:36.085704 master-0 kubenswrapper[31456]: I0312 21:14:36.085652 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 12 21:14:36.125350 master-0 kubenswrapper[31456]: I0312 21:14:36.125300 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 12 21:14:36.166321 master-0 kubenswrapper[31456]: I0312 21:14:36.166246 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 12 21:14:36.189660 master-0 kubenswrapper[31456]: I0312 21:14:36.189605 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-lrwqt" Mar 12 21:14:36.302134 master-0 kubenswrapper[31456]: I0312 21:14:36.302042 31456 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 21:14:36.325492 master-0 kubenswrapper[31456]: I0312 21:14:36.325453 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 12 21:14:36.738889 master-0 kubenswrapper[31456]: I0312 21:14:36.738824 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 12 21:14:36.754264 master-0 kubenswrapper[31456]: I0312 21:14:36.754188 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-7gthf" Mar 12 21:14:36.976615 master-0 kubenswrapper[31456]: I0312 21:14:36.975139 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 12 21:14:37.058041 master-0 kubenswrapper[31456]: I0312 21:14:37.053730 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 12 21:14:37.087315 master-0 kubenswrapper[31456]: I0312 21:14:37.087250 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Mar 12 21:14:37.129297 master-0 kubenswrapper[31456]: I0312 21:14:37.129235 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 12 21:14:37.297739 master-0 kubenswrapper[31456]: I0312 21:14:37.297650 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Mar 12 21:14:37.327035 master-0 kubenswrapper[31456]: I0312 21:14:37.326981 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 12 21:14:37.539903 master-0 kubenswrapper[31456]: 
I0312 21:14:37.539627 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Mar 12 21:14:37.729270 master-0 kubenswrapper[31456]: I0312 21:14:37.728400 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 12 21:14:37.731269 master-0 kubenswrapper[31456]: I0312 21:14:37.731189 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 12 21:14:37.731418 master-0 kubenswrapper[31456]: I0312 21:14:37.731354 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 12 21:14:37.759195 master-0 kubenswrapper[31456]: I0312 21:14:37.759131 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-74bpcql1t9em9" Mar 12 21:14:45.389646 master-0 kubenswrapper[31456]: I0312 21:14:45.389594 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 12 21:14:47.300859 master-0 kubenswrapper[31456]: I0312 21:14:47.300611 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 12 21:14:51.299900 master-0 kubenswrapper[31456]: I0312 21:14:51.299864 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_a814bd60de133d95cf99630a978c017e/startup-monitor/0.log" Mar 12 21:14:51.300422 master-0 kubenswrapper[31456]: I0312 21:14:51.299939 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:14:51.475713 master-0 kubenswrapper[31456]: I0312 21:14:51.475643 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-lock\") pod \"a814bd60de133d95cf99630a978c017e\" (UID: \"a814bd60de133d95cf99630a978c017e\") " Mar 12 21:14:51.475858 master-0 kubenswrapper[31456]: I0312 21:14:51.475791 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-resource-dir\") pod \"a814bd60de133d95cf99630a978c017e\" (UID: \"a814bd60de133d95cf99630a978c017e\") " Mar 12 21:14:51.475920 master-0 kubenswrapper[31456]: I0312 21:14:51.475862 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-pod-resource-dir\") pod \"a814bd60de133d95cf99630a978c017e\" (UID: \"a814bd60de133d95cf99630a978c017e\") " Mar 12 21:14:51.476439 master-0 kubenswrapper[31456]: I0312 21:14:51.475992 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-manifests\") pod \"a814bd60de133d95cf99630a978c017e\" (UID: \"a814bd60de133d95cf99630a978c017e\") " Mar 12 21:14:51.476439 master-0 kubenswrapper[31456]: I0312 21:14:51.476024 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "a814bd60de133d95cf99630a978c017e" (UID: "a814bd60de133d95cf99630a978c017e"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:14:51.476439 master-0 kubenswrapper[31456]: I0312 21:14:51.476033 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-log\") pod \"a814bd60de133d95cf99630a978c017e\" (UID: \"a814bd60de133d95cf99630a978c017e\") " Mar 12 21:14:51.476439 master-0 kubenswrapper[31456]: I0312 21:14:51.476084 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-log" (OuterVolumeSpecName: "var-log") pod "a814bd60de133d95cf99630a978c017e" (UID: "a814bd60de133d95cf99630a978c017e"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:14:51.476439 master-0 kubenswrapper[31456]: I0312 21:14:51.476102 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-lock" (OuterVolumeSpecName: "var-lock") pod "a814bd60de133d95cf99630a978c017e" (UID: "a814bd60de133d95cf99630a978c017e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:14:51.476439 master-0 kubenswrapper[31456]: I0312 21:14:51.476127 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-manifests" (OuterVolumeSpecName: "manifests") pod "a814bd60de133d95cf99630a978c017e" (UID: "a814bd60de133d95cf99630a978c017e"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:14:51.476439 master-0 kubenswrapper[31456]: I0312 21:14:51.476375 31456 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 21:14:51.476439 master-0 kubenswrapper[31456]: I0312 21:14:51.476393 31456 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 21:14:51.476439 master-0 kubenswrapper[31456]: I0312 21:14:51.476407 31456 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-manifests\") on node \"master-0\" DevicePath \"\"" Mar 12 21:14:51.476439 master-0 kubenswrapper[31456]: I0312 21:14:51.476419 31456 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-log\") on node \"master-0\" DevicePath \"\"" Mar 12 21:14:51.486892 master-0 kubenswrapper[31456]: I0312 21:14:51.486788 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "a814bd60de133d95cf99630a978c017e" (UID: "a814bd60de133d95cf99630a978c017e"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:14:51.578754 master-0 kubenswrapper[31456]: I0312 21:14:51.578676 31456 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 21:14:52.006981 master-0 kubenswrapper[31456]: I0312 21:14:52.006396 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-84f57b9877-j2x97" event={"ID":"47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46","Type":"ContainerStarted","Data":"fccb31b1c6ca54370a28674dc12be8f9865017160249d0c59c54b3869f527ca6"} Mar 12 21:14:52.007265 master-0 kubenswrapper[31456]: I0312 21:14:52.007068 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-84f57b9877-j2x97" Mar 12 21:14:52.009871 master-0 kubenswrapper[31456]: I0312 21:14:52.009783 31456 patch_prober.go:28] interesting pod/downloads-84f57b9877-j2x97 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.100:8080/\": dial tcp 10.128.0.100:8080: connect: connection refused" start-of-body= Mar 12 21:14:52.010010 master-0 kubenswrapper[31456]: I0312 21:14:52.009900 31456 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-84f57b9877-j2x97" podUID="47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.100:8080/\": dial tcp 10.128.0.100:8080: connect: connection refused" Mar 12 21:14:52.013580 master-0 kubenswrapper[31456]: I0312 21:14:52.013524 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_a814bd60de133d95cf99630a978c017e/startup-monitor/0.log" Mar 12 21:14:52.013707 master-0 kubenswrapper[31456]: I0312 21:14:52.013607 31456 generic.go:334] "Generic (PLEG): container 
finished" podID="a814bd60de133d95cf99630a978c017e" containerID="f3267c01a27c8f33d70e730907d70fecb449ec2951ac639e8c26e54233f1839b" exitCode=137 Mar 12 21:14:52.013707 master-0 kubenswrapper[31456]: I0312 21:14:52.013664 31456 scope.go:117] "RemoveContainer" containerID="f3267c01a27c8f33d70e730907d70fecb449ec2951ac639e8c26e54233f1839b" Mar 12 21:14:52.013867 master-0 kubenswrapper[31456]: I0312 21:14:52.013731 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 12 21:14:52.036399 master-0 kubenswrapper[31456]: I0312 21:14:52.036328 31456 scope.go:117] "RemoveContainer" containerID="f3267c01a27c8f33d70e730907d70fecb449ec2951ac639e8c26e54233f1839b" Mar 12 21:14:52.037025 master-0 kubenswrapper[31456]: E0312 21:14:52.036958 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3267c01a27c8f33d70e730907d70fecb449ec2951ac639e8c26e54233f1839b\": container with ID starting with f3267c01a27c8f33d70e730907d70fecb449ec2951ac639e8c26e54233f1839b not found: ID does not exist" containerID="f3267c01a27c8f33d70e730907d70fecb449ec2951ac639e8c26e54233f1839b" Mar 12 21:14:52.037162 master-0 kubenswrapper[31456]: I0312 21:14:52.037012 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3267c01a27c8f33d70e730907d70fecb449ec2951ac639e8c26e54233f1839b"} err="failed to get container status \"f3267c01a27c8f33d70e730907d70fecb449ec2951ac639e8c26e54233f1839b\": rpc error: code = NotFound desc = could not find container \"f3267c01a27c8f33d70e730907d70fecb449ec2951ac639e8c26e54233f1839b\": container with ID starting with f3267c01a27c8f33d70e730907d70fecb449ec2951ac639e8c26e54233f1839b not found: ID does not exist" Mar 12 21:14:53.040648 master-0 kubenswrapper[31456]: I0312 21:14:53.040568 31456 patch_prober.go:28] interesting pod/downloads-84f57b9877-j2x97 
container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.100:8080/\": dial tcp 10.128.0.100:8080: connect: connection refused" start-of-body= Mar 12 21:14:53.040648 master-0 kubenswrapper[31456]: I0312 21:14:53.040632 31456 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-84f57b9877-j2x97" podUID="47fc5bc0-b234-48bb-b8f8-5ef5e56f1a46" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.100:8080/\": dial tcp 10.128.0.100:8080: connect: connection refused" Mar 12 21:14:53.042310 master-0 kubenswrapper[31456]: I0312 21:14:53.042220 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-84f57b9877-j2x97" podStartSLOduration=33.228026838 podStartE2EDuration="1m7.042202694s" podCreationTimestamp="2026-03-12 21:13:46 +0000 UTC" firstStartedPulling="2026-03-12 21:14:17.650995153 +0000 UTC m=+318.725600521" lastFinishedPulling="2026-03-12 21:14:51.465171009 +0000 UTC m=+352.539776377" observedRunningTime="2026-03-12 21:14:53.037888939 +0000 UTC m=+354.112494277" watchObservedRunningTime="2026-03-12 21:14:53.042202694 +0000 UTC m=+354.116808032" Mar 12 21:14:53.180306 master-0 kubenswrapper[31456]: I0312 21:14:53.180236 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a814bd60de133d95cf99630a978c017e" path="/var/lib/kubelet/pods/a814bd60de133d95cf99630a978c017e/volumes" Mar 12 21:14:54.546629 master-0 kubenswrapper[31456]: I0312 21:14:54.546322 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 12 21:14:55.245543 master-0 kubenswrapper[31456]: I0312 21:14:55.245451 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Mar 12 21:14:56.412750 master-0 kubenswrapper[31456]: I0312 21:14:56.412662 31456 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 21:14:56.640677 master-0 kubenswrapper[31456]: I0312 21:14:56.640585 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-62zgv" Mar 12 21:14:57.205711 master-0 kubenswrapper[31456]: I0312 21:14:57.205611 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-84f57b9877-j2x97" Mar 12 21:14:57.556787 master-0 kubenswrapper[31456]: I0312 21:14:57.556578 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 12 21:14:58.532106 master-0 kubenswrapper[31456]: I0312 21:14:58.532004 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 12 21:14:58.953449 master-0 kubenswrapper[31456]: I0312 21:14:58.953335 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Mar 12 21:14:59.211901 master-0 kubenswrapper[31456]: I0312 21:14:59.211628 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 12 21:15:00.333006 master-0 kubenswrapper[31456]: I0312 21:15:00.332923 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 12 21:15:01.109447 master-0 kubenswrapper[31456]: I0312 21:15:01.109348 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 12 21:15:01.134296 master-0 kubenswrapper[31456]: I0312 21:15:01.134201 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-pvnjq" Mar 12 21:15:02.683346 master-0 kubenswrapper[31456]: I0312 21:15:02.683266 31456 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 12 21:15:02.831955 master-0 kubenswrapper[31456]: I0312 21:15:02.831856 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 21:15:04.902142 master-0 kubenswrapper[31456]: I0312 21:15:04.902033 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 21:15:05.324538 master-0 kubenswrapper[31456]: I0312 21:15:05.324460 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 12 21:15:07.044650 master-0 kubenswrapper[31456]: I0312 21:15:07.044566 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 12 21:15:07.610033 master-0 kubenswrapper[31456]: I0312 21:15:07.609905 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 12 21:15:08.112349 master-0 kubenswrapper[31456]: I0312 21:15:08.112272 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 12 21:15:08.722942 master-0 kubenswrapper[31456]: I0312 21:15:08.722836 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-ct6dn" Mar 12 21:15:09.704320 master-0 kubenswrapper[31456]: I0312 21:15:09.704241 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 12 21:15:09.719793 master-0 kubenswrapper[31456]: I0312 21:15:09.719712 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 12 21:15:09.761939 master-0 kubenswrapper[31456]: I0312 21:15:09.754530 
31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-bk87n" Mar 12 21:15:09.811968 master-0 kubenswrapper[31456]: I0312 21:15:09.810250 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-w9pdx" Mar 12 21:15:09.851124 master-0 kubenswrapper[31456]: I0312 21:15:09.851065 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 12 21:15:11.582243 master-0 kubenswrapper[31456]: I0312 21:15:11.582150 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 12 21:15:12.479716 master-0 kubenswrapper[31456]: I0312 21:15:12.479657 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-xjkth" Mar 12 21:15:16.064756 master-0 kubenswrapper[31456]: I0312 21:15:16.064704 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-mc5vw" Mar 12 21:15:17.712288 master-0 kubenswrapper[31456]: I0312 21:15:17.712209 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 12 21:15:21.539695 master-0 kubenswrapper[31456]: I0312 21:15:21.539601 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-xgssr" Mar 12 21:15:56.836188 master-0 kubenswrapper[31456]: I0312 21:15:56.836094 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6fff565898-x9jfv"] Mar 12 21:15:56.836908 master-0 kubenswrapper[31456]: E0312 21:15:56.836636 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a814bd60de133d95cf99630a978c017e" containerName="startup-monitor" Mar 12 
21:15:56.836908 master-0 kubenswrapper[31456]: I0312 21:15:56.836671 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="a814bd60de133d95cf99630a978c017e" containerName="startup-monitor" Mar 12 21:15:56.836908 master-0 kubenswrapper[31456]: E0312 21:15:56.836719 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33beea0b-f77b-4388-a9c8-5710f084f961" containerName="metrics-server" Mar 12 21:15:56.836908 master-0 kubenswrapper[31456]: I0312 21:15:56.836732 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="33beea0b-f77b-4388-a9c8-5710f084f961" containerName="metrics-server" Mar 12 21:15:56.836908 master-0 kubenswrapper[31456]: E0312 21:15:56.836777 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c58a6a80-48e7-428e-be7a-d81dfc726450" containerName="installer" Mar 12 21:15:56.836908 master-0 kubenswrapper[31456]: I0312 21:15:56.836790 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="c58a6a80-48e7-428e-be7a-d81dfc726450" containerName="installer" Mar 12 21:15:56.837164 master-0 kubenswrapper[31456]: I0312 21:15:56.837064 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="c58a6a80-48e7-428e-be7a-d81dfc726450" containerName="installer" Mar 12 21:15:56.837164 master-0 kubenswrapper[31456]: I0312 21:15:56.837106 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="a814bd60de133d95cf99630a978c017e" containerName="startup-monitor" Mar 12 21:15:56.837164 master-0 kubenswrapper[31456]: I0312 21:15:56.837136 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="33beea0b-f77b-4388-a9c8-5710f084f961" containerName="metrics-server" Mar 12 21:15:56.837923 master-0 kubenswrapper[31456]: I0312 21:15:56.837885 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6fff565898-x9jfv" Mar 12 21:15:56.841328 master-0 kubenswrapper[31456]: I0312 21:15:56.841276 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-8hkxz" Mar 12 21:15:56.841614 master-0 kubenswrapper[31456]: I0312 21:15:56.841384 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 12 21:15:56.842595 master-0 kubenswrapper[31456]: I0312 21:15:56.842565 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 12 21:15:56.842866 master-0 kubenswrapper[31456]: I0312 21:15:56.842830 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 12 21:15:56.844563 master-0 kubenswrapper[31456]: I0312 21:15:56.844531 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 12 21:15:56.848886 master-0 kubenswrapper[31456]: I0312 21:15:56.848845 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 12 21:15:56.866087 master-0 kubenswrapper[31456]: I0312 21:15:56.866016 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6fff565898-x9jfv"] Mar 12 21:15:56.869466 master-0 kubenswrapper[31456]: I0312 21:15:56.869410 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 12 21:15:56.891234 master-0 kubenswrapper[31456]: I0312 21:15:56.891022 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r4bw\" (UniqueName: \"kubernetes.io/projected/a3fe72db-905f-487a-a343-295bce31e19e-kube-api-access-5r4bw\") pod \"console-6fff565898-x9jfv\" (UID: \"a3fe72db-905f-487a-a343-295bce31e19e\") " pod="openshift-console/console-6fff565898-x9jfv" Mar 
12 21:15:56.891234 master-0 kubenswrapper[31456]: I0312 21:15:56.891073 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a3fe72db-905f-487a-a343-295bce31e19e-service-ca\") pod \"console-6fff565898-x9jfv\" (UID: \"a3fe72db-905f-487a-a343-295bce31e19e\") " pod="openshift-console/console-6fff565898-x9jfv" Mar 12 21:15:56.891234 master-0 kubenswrapper[31456]: I0312 21:15:56.891090 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3fe72db-905f-487a-a343-295bce31e19e-trusted-ca-bundle\") pod \"console-6fff565898-x9jfv\" (UID: \"a3fe72db-905f-487a-a343-295bce31e19e\") " pod="openshift-console/console-6fff565898-x9jfv" Mar 12 21:15:56.891234 master-0 kubenswrapper[31456]: I0312 21:15:56.891112 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a3fe72db-905f-487a-a343-295bce31e19e-oauth-serving-cert\") pod \"console-6fff565898-x9jfv\" (UID: \"a3fe72db-905f-487a-a343-295bce31e19e\") " pod="openshift-console/console-6fff565898-x9jfv" Mar 12 21:15:56.891234 master-0 kubenswrapper[31456]: I0312 21:15:56.891149 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a3fe72db-905f-487a-a343-295bce31e19e-console-oauth-config\") pod \"console-6fff565898-x9jfv\" (UID: \"a3fe72db-905f-487a-a343-295bce31e19e\") " pod="openshift-console/console-6fff565898-x9jfv" Mar 12 21:15:56.891234 master-0 kubenswrapper[31456]: I0312 21:15:56.891181 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a3fe72db-905f-487a-a343-295bce31e19e-console-serving-cert\") 
pod \"console-6fff565898-x9jfv\" (UID: \"a3fe72db-905f-487a-a343-295bce31e19e\") " pod="openshift-console/console-6fff565898-x9jfv" Mar 12 21:15:56.891234 master-0 kubenswrapper[31456]: I0312 21:15:56.891198 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a3fe72db-905f-487a-a343-295bce31e19e-console-config\") pod \"console-6fff565898-x9jfv\" (UID: \"a3fe72db-905f-487a-a343-295bce31e19e\") " pod="openshift-console/console-6fff565898-x9jfv" Mar 12 21:15:56.992426 master-0 kubenswrapper[31456]: I0312 21:15:56.992361 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r4bw\" (UniqueName: \"kubernetes.io/projected/a3fe72db-905f-487a-a343-295bce31e19e-kube-api-access-5r4bw\") pod \"console-6fff565898-x9jfv\" (UID: \"a3fe72db-905f-487a-a343-295bce31e19e\") " pod="openshift-console/console-6fff565898-x9jfv" Mar 12 21:15:56.992426 master-0 kubenswrapper[31456]: I0312 21:15:56.992411 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a3fe72db-905f-487a-a343-295bce31e19e-service-ca\") pod \"console-6fff565898-x9jfv\" (UID: \"a3fe72db-905f-487a-a343-295bce31e19e\") " pod="openshift-console/console-6fff565898-x9jfv" Mar 12 21:15:56.992426 master-0 kubenswrapper[31456]: I0312 21:15:56.992429 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3fe72db-905f-487a-a343-295bce31e19e-trusted-ca-bundle\") pod \"console-6fff565898-x9jfv\" (UID: \"a3fe72db-905f-487a-a343-295bce31e19e\") " pod="openshift-console/console-6fff565898-x9jfv" Mar 12 21:15:56.992772 master-0 kubenswrapper[31456]: I0312 21:15:56.992454 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/a3fe72db-905f-487a-a343-295bce31e19e-oauth-serving-cert\") pod \"console-6fff565898-x9jfv\" (UID: \"a3fe72db-905f-487a-a343-295bce31e19e\") " pod="openshift-console/console-6fff565898-x9jfv" Mar 12 21:15:56.992991 master-0 kubenswrapper[31456]: I0312 21:15:56.992950 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a3fe72db-905f-487a-a343-295bce31e19e-console-oauth-config\") pod \"console-6fff565898-x9jfv\" (UID: \"a3fe72db-905f-487a-a343-295bce31e19e\") " pod="openshift-console/console-6fff565898-x9jfv" Mar 12 21:15:56.993216 master-0 kubenswrapper[31456]: I0312 21:15:56.993195 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a3fe72db-905f-487a-a343-295bce31e19e-console-serving-cert\") pod \"console-6fff565898-x9jfv\" (UID: \"a3fe72db-905f-487a-a343-295bce31e19e\") " pod="openshift-console/console-6fff565898-x9jfv" Mar 12 21:15:56.993340 master-0 kubenswrapper[31456]: I0312 21:15:56.993322 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a3fe72db-905f-487a-a343-295bce31e19e-console-config\") pod \"console-6fff565898-x9jfv\" (UID: \"a3fe72db-905f-487a-a343-295bce31e19e\") " pod="openshift-console/console-6fff565898-x9jfv" Mar 12 21:15:56.993844 master-0 kubenswrapper[31456]: I0312 21:15:56.993784 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a3fe72db-905f-487a-a343-295bce31e19e-oauth-serving-cert\") pod \"console-6fff565898-x9jfv\" (UID: \"a3fe72db-905f-487a-a343-295bce31e19e\") " pod="openshift-console/console-6fff565898-x9jfv" Mar 12 21:15:56.994515 master-0 kubenswrapper[31456]: I0312 21:15:56.994477 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/a3fe72db-905f-487a-a343-295bce31e19e-console-config\") pod \"console-6fff565898-x9jfv\" (UID: \"a3fe72db-905f-487a-a343-295bce31e19e\") " pod="openshift-console/console-6fff565898-x9jfv" Mar 12 21:15:56.994650 master-0 kubenswrapper[31456]: I0312 21:15:56.994591 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a3fe72db-905f-487a-a343-295bce31e19e-service-ca\") pod \"console-6fff565898-x9jfv\" (UID: \"a3fe72db-905f-487a-a343-295bce31e19e\") " pod="openshift-console/console-6fff565898-x9jfv" Mar 12 21:15:56.995325 master-0 kubenswrapper[31456]: I0312 21:15:56.995275 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3fe72db-905f-487a-a343-295bce31e19e-trusted-ca-bundle\") pod \"console-6fff565898-x9jfv\" (UID: \"a3fe72db-905f-487a-a343-295bce31e19e\") " pod="openshift-console/console-6fff565898-x9jfv" Mar 12 21:15:56.996067 master-0 kubenswrapper[31456]: I0312 21:15:56.996031 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a3fe72db-905f-487a-a343-295bce31e19e-console-serving-cert\") pod \"console-6fff565898-x9jfv\" (UID: \"a3fe72db-905f-487a-a343-295bce31e19e\") " pod="openshift-console/console-6fff565898-x9jfv" Mar 12 21:15:56.997285 master-0 kubenswrapper[31456]: I0312 21:15:56.997245 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a3fe72db-905f-487a-a343-295bce31e19e-console-oauth-config\") pod \"console-6fff565898-x9jfv\" (UID: \"a3fe72db-905f-487a-a343-295bce31e19e\") " pod="openshift-console/console-6fff565898-x9jfv" Mar 12 21:15:57.008211 master-0 kubenswrapper[31456]: I0312 21:15:57.008167 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5r4bw\" (UniqueName: \"kubernetes.io/projected/a3fe72db-905f-487a-a343-295bce31e19e-kube-api-access-5r4bw\") pod \"console-6fff565898-x9jfv\" (UID: \"a3fe72db-905f-487a-a343-295bce31e19e\") " pod="openshift-console/console-6fff565898-x9jfv" Mar 12 21:15:57.160716 master-0 kubenswrapper[31456]: I0312 21:15:57.160613 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6fff565898-x9jfv" Mar 12 21:15:57.648480 master-0 kubenswrapper[31456]: W0312 21:15:57.648419 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3fe72db_905f_487a_a343_295bce31e19e.slice/crio-334c23c01184798bb989b60dd7b0e97509ae235a2ccfcebfe031c1912ca4d815 WatchSource:0}: Error finding container 334c23c01184798bb989b60dd7b0e97509ae235a2ccfcebfe031c1912ca4d815: Status 404 returned error can't find the container with id 334c23c01184798bb989b60dd7b0e97509ae235a2ccfcebfe031c1912ca4d815 Mar 12 21:15:57.651490 master-0 kubenswrapper[31456]: I0312 21:15:57.651465 31456 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 21:15:57.653066 master-0 kubenswrapper[31456]: I0312 21:15:57.653032 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6fff565898-x9jfv"] Mar 12 21:15:57.706088 master-0 kubenswrapper[31456]: I0312 21:15:57.705391 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6fff565898-x9jfv" event={"ID":"a3fe72db-905f-487a-a343-295bce31e19e","Type":"ContainerStarted","Data":"334c23c01184798bb989b60dd7b0e97509ae235a2ccfcebfe031c1912ca4d815"} Mar 12 21:16:02.758956 master-0 kubenswrapper[31456]: I0312 21:16:02.758728 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6fff565898-x9jfv" 
event={"ID":"a3fe72db-905f-487a-a343-295bce31e19e","Type":"ContainerStarted","Data":"15de1fe9d2f3d2569694e19652b0dd711833523e5f33c37447c506bfd9212bda"} Mar 12 21:16:02.799030 master-0 kubenswrapper[31456]: I0312 21:16:02.798880 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6fff565898-x9jfv" podStartSLOduration=2.194340404 podStartE2EDuration="6.79884703s" podCreationTimestamp="2026-03-12 21:15:56 +0000 UTC" firstStartedPulling="2026-03-12 21:15:57.651405608 +0000 UTC m=+418.726010946" lastFinishedPulling="2026-03-12 21:16:02.255912244 +0000 UTC m=+423.330517572" observedRunningTime="2026-03-12 21:16:02.78898764 +0000 UTC m=+423.863593008" watchObservedRunningTime="2026-03-12 21:16:02.79884703 +0000 UTC m=+423.873452398" Mar 12 21:16:04.520135 master-0 kubenswrapper[31456]: I0312 21:16:04.520053 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 12 21:16:04.520790 master-0 kubenswrapper[31456]: I0312 21:16:04.520687 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="c3679eeb-ec01-49e3-9049-faf3f0235ea0" containerName="kube-rbac-proxy-metric" containerID="cri-o://da24a5560c15bfee8ffdf7a4acad8f836842312957495c1f48a1070c34da3077" gracePeriod=120 Mar 12 21:16:04.521281 master-0 kubenswrapper[31456]: I0312 21:16:04.520888 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="c3679eeb-ec01-49e3-9049-faf3f0235ea0" containerName="prom-label-proxy" containerID="cri-o://c4ed0960cf9bc2557dc0e5df8af9003d82bfa6fb1a701198446a2c35d692525b" gracePeriod=120 Mar 12 21:16:04.521281 master-0 kubenswrapper[31456]: I0312 21:16:04.521070 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="c3679eeb-ec01-49e3-9049-faf3f0235ea0" 
containerName="config-reloader" containerID="cri-o://880d7627641637fe5690f2cb679214e1b7fa5c600afc231ae075e4f697a24048" gracePeriod=120 Mar 12 21:16:04.521281 master-0 kubenswrapper[31456]: I0312 21:16:04.521018 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="c3679eeb-ec01-49e3-9049-faf3f0235ea0" containerName="kube-rbac-proxy-web" containerID="cri-o://ad0441949003a38500f5ae34066530abfc6fc47dcf400d66fda34d620bf71c3c" gracePeriod=120 Mar 12 21:16:04.521281 master-0 kubenswrapper[31456]: I0312 21:16:04.521128 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="c3679eeb-ec01-49e3-9049-faf3f0235ea0" containerName="kube-rbac-proxy" containerID="cri-o://aba40a7cf66ca44db97861ee95162afacf7ae3a9ad8a925702f2cde614084862" gracePeriod=120 Mar 12 21:16:04.521281 master-0 kubenswrapper[31456]: I0312 21:16:04.521160 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="c3679eeb-ec01-49e3-9049-faf3f0235ea0" containerName="alertmanager" containerID="cri-o://847509df23dc5f0cd65487a561c834039e5719dbd9aadb73ca1712a834ccf8ce" gracePeriod=120 Mar 12 21:16:04.777084 master-0 kubenswrapper[31456]: I0312 21:16:04.776964 31456 generic.go:334] "Generic (PLEG): container finished" podID="c3679eeb-ec01-49e3-9049-faf3f0235ea0" containerID="c4ed0960cf9bc2557dc0e5df8af9003d82bfa6fb1a701198446a2c35d692525b" exitCode=0 Mar 12 21:16:04.777084 master-0 kubenswrapper[31456]: I0312 21:16:04.777000 31456 generic.go:334] "Generic (PLEG): container finished" podID="c3679eeb-ec01-49e3-9049-faf3f0235ea0" containerID="aba40a7cf66ca44db97861ee95162afacf7ae3a9ad8a925702f2cde614084862" exitCode=0 Mar 12 21:16:04.777084 master-0 kubenswrapper[31456]: I0312 21:16:04.777007 31456 generic.go:334] "Generic (PLEG): container finished" podID="c3679eeb-ec01-49e3-9049-faf3f0235ea0" 
containerID="880d7627641637fe5690f2cb679214e1b7fa5c600afc231ae075e4f697a24048" exitCode=0
Mar 12 21:16:04.777084 master-0 kubenswrapper[31456]: I0312 21:16:04.777014 31456 generic.go:334] "Generic (PLEG): container finished" podID="c3679eeb-ec01-49e3-9049-faf3f0235ea0" containerID="847509df23dc5f0cd65487a561c834039e5719dbd9aadb73ca1712a834ccf8ce" exitCode=0
Mar 12 21:16:04.777084 master-0 kubenswrapper[31456]: I0312 21:16:04.777035 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c3679eeb-ec01-49e3-9049-faf3f0235ea0","Type":"ContainerDied","Data":"c4ed0960cf9bc2557dc0e5df8af9003d82bfa6fb1a701198446a2c35d692525b"}
Mar 12 21:16:04.777084 master-0 kubenswrapper[31456]: I0312 21:16:04.777073 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c3679eeb-ec01-49e3-9049-faf3f0235ea0","Type":"ContainerDied","Data":"aba40a7cf66ca44db97861ee95162afacf7ae3a9ad8a925702f2cde614084862"}
Mar 12 21:16:04.777084 master-0 kubenswrapper[31456]: I0312 21:16:04.777087 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c3679eeb-ec01-49e3-9049-faf3f0235ea0","Type":"ContainerDied","Data":"880d7627641637fe5690f2cb679214e1b7fa5c600afc231ae075e4f697a24048"}
Mar 12 21:16:04.777434 master-0 kubenswrapper[31456]: I0312 21:16:04.777102 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c3679eeb-ec01-49e3-9049-faf3f0235ea0","Type":"ContainerDied","Data":"847509df23dc5f0cd65487a561c834039e5719dbd9aadb73ca1712a834ccf8ce"}
Mar 12 21:16:05.798889 master-0 kubenswrapper[31456]: I0312 21:16:05.797387 31456 generic.go:334] "Generic (PLEG): container finished" podID="c3679eeb-ec01-49e3-9049-faf3f0235ea0" containerID="da24a5560c15bfee8ffdf7a4acad8f836842312957495c1f48a1070c34da3077" exitCode=0
Mar 12 21:16:05.798889 master-0 kubenswrapper[31456]: I0312 21:16:05.797423 31456 generic.go:334] "Generic (PLEG): container finished" podID="c3679eeb-ec01-49e3-9049-faf3f0235ea0" containerID="ad0441949003a38500f5ae34066530abfc6fc47dcf400d66fda34d620bf71c3c" exitCode=0
Mar 12 21:16:05.798889 master-0 kubenswrapper[31456]: I0312 21:16:05.797444 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c3679eeb-ec01-49e3-9049-faf3f0235ea0","Type":"ContainerDied","Data":"da24a5560c15bfee8ffdf7a4acad8f836842312957495c1f48a1070c34da3077"}
Mar 12 21:16:05.798889 master-0 kubenswrapper[31456]: I0312 21:16:05.797468 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c3679eeb-ec01-49e3-9049-faf3f0235ea0","Type":"ContainerDied","Data":"ad0441949003a38500f5ae34066530abfc6fc47dcf400d66fda34d620bf71c3c"}
Mar 12 21:16:06.068052 master-0 kubenswrapper[31456]: I0312 21:16:06.068003 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:16:06.149161 master-0 kubenswrapper[31456]: I0312 21:16:06.149109 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-config-volume\") pod \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") "
Mar 12 21:16:06.149492 master-0 kubenswrapper[31456]: I0312 21:16:06.149465 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-web-config\") pod \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") "
Mar 12 21:16:06.149681 master-0 kubenswrapper[31456]: I0312 21:16:06.149655 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzk5p\" (UniqueName: \"kubernetes.io/projected/c3679eeb-ec01-49e3-9049-faf3f0235ea0-kube-api-access-gzk5p\") pod \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") "
Mar 12 21:16:06.149912 master-0 kubenswrapper[31456]: I0312 21:16:06.149878 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-secret-alertmanager-kube-rbac-proxy-metric\") pod \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") "
Mar 12 21:16:06.150097 master-0 kubenswrapper[31456]: I0312 21:16:06.150069 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-secret-alertmanager-kube-rbac-proxy\") pod \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") "
Mar 12 21:16:06.150281 master-0 kubenswrapper[31456]: I0312 21:16:06.150248 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c3679eeb-ec01-49e3-9049-faf3f0235ea0-metrics-client-ca\") pod \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") "
Mar 12 21:16:06.150454 master-0 kubenswrapper[31456]: I0312 21:16:06.150429 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/c3679eeb-ec01-49e3-9049-faf3f0235ea0-alertmanager-main-db\") pod \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") "
Mar 12 21:16:06.150644 master-0 kubenswrapper[31456]: I0312 21:16:06.150617 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-secret-alertmanager-main-tls\") pod \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") "
Mar 12 21:16:06.152085 master-0 kubenswrapper[31456]: I0312 21:16:06.151851 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3679eeb-ec01-49e3-9049-faf3f0235ea0-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "c3679eeb-ec01-49e3-9049-faf3f0235ea0" (UID: "c3679eeb-ec01-49e3-9049-faf3f0235ea0"). InnerVolumeSpecName "metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:16:06.152373 master-0 kubenswrapper[31456]: I0312 21:16:06.152335 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c3679eeb-ec01-49e3-9049-faf3f0235ea0-config-out\") pod \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") "
Mar 12 21:16:06.152717 master-0 kubenswrapper[31456]: I0312 21:16:06.152678 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-secret-alertmanager-kube-rbac-proxy-web\") pod \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") "
Mar 12 21:16:06.153162 master-0 kubenswrapper[31456]: I0312 21:16:06.153121 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3679eeb-ec01-49e3-9049-faf3f0235ea0-alertmanager-trusted-ca-bundle\") pod \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") "
Mar 12 21:16:06.153448 master-0 kubenswrapper[31456]: I0312 21:16:06.153412 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c3679eeb-ec01-49e3-9049-faf3f0235ea0-tls-assets\") pod \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\" (UID: \"c3679eeb-ec01-49e3-9049-faf3f0235ea0\") "
Mar 12 21:16:06.153895 master-0 kubenswrapper[31456]: I0312 21:16:06.153861 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3679eeb-ec01-49e3-9049-faf3f0235ea0-alertmanager-trusted-ca-bundle" (OuterVolumeSpecName: "alertmanager-trusted-ca-bundle") pod "c3679eeb-ec01-49e3-9049-faf3f0235ea0" (UID: "c3679eeb-ec01-49e3-9049-faf3f0235ea0"). InnerVolumeSpecName "alertmanager-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:16:06.154561 master-0 kubenswrapper[31456]: I0312 21:16:06.154513 31456 reconciler_common.go:293] "Volume detached for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3679eeb-ec01-49e3-9049-faf3f0235ea0-alertmanager-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 21:16:06.155799 master-0 kubenswrapper[31456]: I0312 21:16:06.155759 31456 reconciler_common.go:293] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c3679eeb-ec01-49e3-9049-faf3f0235ea0-metrics-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 12 21:16:06.159956 master-0 kubenswrapper[31456]: I0312 21:16:06.154980 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-secret-alertmanager-kube-rbac-proxy-metric" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-metric") pod "c3679eeb-ec01-49e3-9049-faf3f0235ea0" (UID: "c3679eeb-ec01-49e3-9049-faf3f0235ea0"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-metric". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:16:06.160260 master-0 kubenswrapper[31456]: I0312 21:16:06.156442 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3679eeb-ec01-49e3-9049-faf3f0235ea0-alertmanager-main-db" (OuterVolumeSpecName: "alertmanager-main-db") pod "c3679eeb-ec01-49e3-9049-faf3f0235ea0" (UID: "c3679eeb-ec01-49e3-9049-faf3f0235ea0"). InnerVolumeSpecName "alertmanager-main-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 21:16:06.160389 master-0 kubenswrapper[31456]: I0312 21:16:06.157909 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-secret-alertmanager-kube-rbac-proxy" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy") pod "c3679eeb-ec01-49e3-9049-faf3f0235ea0" (UID: "c3679eeb-ec01-49e3-9049-faf3f0235ea0"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:16:06.160510 master-0 kubenswrapper[31456]: I0312 21:16:06.159889 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3679eeb-ec01-49e3-9049-faf3f0235ea0-config-out" (OuterVolumeSpecName: "config-out") pod "c3679eeb-ec01-49e3-9049-faf3f0235ea0" (UID: "c3679eeb-ec01-49e3-9049-faf3f0235ea0"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 21:16:06.160634 master-0 kubenswrapper[31456]: I0312 21:16:06.160003 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3679eeb-ec01-49e3-9049-faf3f0235ea0-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "c3679eeb-ec01-49e3-9049-faf3f0235ea0" (UID: "c3679eeb-ec01-49e3-9049-faf3f0235ea0"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 21:16:06.160961 master-0 kubenswrapper[31456]: I0312 21:16:06.160122 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-secret-alertmanager-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-web") pod "c3679eeb-ec01-49e3-9049-faf3f0235ea0" (UID: "c3679eeb-ec01-49e3-9049-faf3f0235ea0"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-web". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:16:06.167180 master-0 kubenswrapper[31456]: I0312 21:16:06.167029 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-config-volume" (OuterVolumeSpecName: "config-volume") pod "c3679eeb-ec01-49e3-9049-faf3f0235ea0" (UID: "c3679eeb-ec01-49e3-9049-faf3f0235ea0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:16:06.167180 master-0 kubenswrapper[31456]: I0312 21:16:06.167084 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-secret-alertmanager-main-tls" (OuterVolumeSpecName: "secret-alertmanager-main-tls") pod "c3679eeb-ec01-49e3-9049-faf3f0235ea0" (UID: "c3679eeb-ec01-49e3-9049-faf3f0235ea0"). InnerVolumeSpecName "secret-alertmanager-main-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:16:06.167180 master-0 kubenswrapper[31456]: I0312 21:16:06.167073 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3679eeb-ec01-49e3-9049-faf3f0235ea0-kube-api-access-gzk5p" (OuterVolumeSpecName: "kube-api-access-gzk5p") pod "c3679eeb-ec01-49e3-9049-faf3f0235ea0" (UID: "c3679eeb-ec01-49e3-9049-faf3f0235ea0"). InnerVolumeSpecName "kube-api-access-gzk5p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 21:16:06.216264 master-0 kubenswrapper[31456]: I0312 21:16:06.216207 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-web-config" (OuterVolumeSpecName: "web-config") pod "c3679eeb-ec01-49e3-9049-faf3f0235ea0" (UID: "c3679eeb-ec01-49e3-9049-faf3f0235ea0"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:16:06.257127 master-0 kubenswrapper[31456]: I0312 21:16:06.257037 31456 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-config-volume\") on node \"master-0\" DevicePath \"\""
Mar 12 21:16:06.257127 master-0 kubenswrapper[31456]: I0312 21:16:06.257064 31456 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-web-config\") on node \"master-0\" DevicePath \"\""
Mar 12 21:16:06.257127 master-0 kubenswrapper[31456]: I0312 21:16:06.257075 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzk5p\" (UniqueName: \"kubernetes.io/projected/c3679eeb-ec01-49e3-9049-faf3f0235ea0-kube-api-access-gzk5p\") on node \"master-0\" DevicePath \"\""
Mar 12 21:16:06.257127 master-0 kubenswrapper[31456]: I0312 21:16:06.257086 31456 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-secret-alertmanager-kube-rbac-proxy-metric\") on node \"master-0\" DevicePath \"\""
Mar 12 21:16:06.257127 master-0 kubenswrapper[31456]: I0312 21:16:06.257095 31456 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-secret-alertmanager-kube-rbac-proxy\") on node \"master-0\" DevicePath \"\""
Mar 12 21:16:06.257127 master-0 kubenswrapper[31456]: I0312 21:16:06.257107 31456 reconciler_common.go:293] "Volume detached for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/c3679eeb-ec01-49e3-9049-faf3f0235ea0-alertmanager-main-db\") on node \"master-0\" DevicePath \"\""
Mar 12 21:16:06.257127 master-0 kubenswrapper[31456]: I0312 21:16:06.257116 31456 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-secret-alertmanager-main-tls\") on node \"master-0\" DevicePath \"\""
Mar 12 21:16:06.257547 master-0 kubenswrapper[31456]: I0312 21:16:06.257147 31456 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c3679eeb-ec01-49e3-9049-faf3f0235ea0-config-out\") on node \"master-0\" DevicePath \"\""
Mar 12 21:16:06.257547 master-0 kubenswrapper[31456]: I0312 21:16:06.257157 31456 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c3679eeb-ec01-49e3-9049-faf3f0235ea0-secret-alertmanager-kube-rbac-proxy-web\") on node \"master-0\" DevicePath \"\""
Mar 12 21:16:06.257547 master-0 kubenswrapper[31456]: I0312 21:16:06.257166 31456 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c3679eeb-ec01-49e3-9049-faf3f0235ea0-tls-assets\") on node \"master-0\" DevicePath \"\""
Mar 12 21:16:06.814623 master-0 kubenswrapper[31456]: I0312 21:16:06.814550 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c3679eeb-ec01-49e3-9049-faf3f0235ea0","Type":"ContainerDied","Data":"0721a9f0f4a2cf837622984b433d4b7055403c71a199e65fcd75b5a697481acb"}
Mar 12 21:16:06.815353 master-0 kubenswrapper[31456]: I0312 21:16:06.814644 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:16:06.815484 master-0 kubenswrapper[31456]: I0312 21:16:06.815310 31456 scope.go:117] "RemoveContainer" containerID="c4ed0960cf9bc2557dc0e5df8af9003d82bfa6fb1a701198446a2c35d692525b"
Mar 12 21:16:06.844520 master-0 kubenswrapper[31456]: I0312 21:16:06.844457 31456 scope.go:117] "RemoveContainer" containerID="da24a5560c15bfee8ffdf7a4acad8f836842312957495c1f48a1070c34da3077"
Mar 12 21:16:06.873933 master-0 kubenswrapper[31456]: I0312 21:16:06.872753 31456 scope.go:117] "RemoveContainer" containerID="aba40a7cf66ca44db97861ee95162afacf7ae3a9ad8a925702f2cde614084862"
Mar 12 21:16:06.896581 master-0 kubenswrapper[31456]: I0312 21:16:06.895708 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Mar 12 21:16:06.899065 master-0 kubenswrapper[31456]: I0312 21:16:06.899026 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Mar 12 21:16:06.907402 master-0 kubenswrapper[31456]: I0312 21:16:06.907319 31456 scope.go:117] "RemoveContainer" containerID="ad0441949003a38500f5ae34066530abfc6fc47dcf400d66fda34d620bf71c3c"
Mar 12 21:16:06.933275 master-0 kubenswrapper[31456]: I0312 21:16:06.933204 31456 scope.go:117] "RemoveContainer" containerID="880d7627641637fe5690f2cb679214e1b7fa5c600afc231ae075e4f697a24048"
Mar 12 21:16:06.947771 master-0 kubenswrapper[31456]: I0312 21:16:06.947695 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Mar 12 21:16:06.948104 master-0 kubenswrapper[31456]: E0312 21:16:06.948068 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3679eeb-ec01-49e3-9049-faf3f0235ea0" containerName="init-config-reloader"
Mar 12 21:16:06.948104 master-0 kubenswrapper[31456]: I0312 21:16:06.948088 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3679eeb-ec01-49e3-9049-faf3f0235ea0" containerName="init-config-reloader"
Mar 12 21:16:06.948104 master-0 kubenswrapper[31456]: E0312 21:16:06.948106 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3679eeb-ec01-49e3-9049-faf3f0235ea0" containerName="alertmanager"
Mar 12 21:16:06.948318 master-0 kubenswrapper[31456]: I0312 21:16:06.948115 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3679eeb-ec01-49e3-9049-faf3f0235ea0" containerName="alertmanager"
Mar 12 21:16:06.948318 master-0 kubenswrapper[31456]: E0312 21:16:06.948152 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3679eeb-ec01-49e3-9049-faf3f0235ea0" containerName="prom-label-proxy"
Mar 12 21:16:06.948318 master-0 kubenswrapper[31456]: I0312 21:16:06.948158 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3679eeb-ec01-49e3-9049-faf3f0235ea0" containerName="prom-label-proxy"
Mar 12 21:16:06.948318 master-0 kubenswrapper[31456]: E0312 21:16:06.948175 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3679eeb-ec01-49e3-9049-faf3f0235ea0" containerName="kube-rbac-proxy-web"
Mar 12 21:16:06.948318 master-0 kubenswrapper[31456]: I0312 21:16:06.948180 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3679eeb-ec01-49e3-9049-faf3f0235ea0" containerName="kube-rbac-proxy-web"
Mar 12 21:16:06.948318 master-0 kubenswrapper[31456]: E0312 21:16:06.948192 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3679eeb-ec01-49e3-9049-faf3f0235ea0" containerName="config-reloader"
Mar 12 21:16:06.948318 master-0 kubenswrapper[31456]: I0312 21:16:06.948218 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3679eeb-ec01-49e3-9049-faf3f0235ea0" containerName="config-reloader"
Mar 12 21:16:06.948318 master-0 kubenswrapper[31456]: E0312 21:16:06.948226 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3679eeb-ec01-49e3-9049-faf3f0235ea0" containerName="kube-rbac-proxy-metric"
Mar 12 21:16:06.948318 master-0 kubenswrapper[31456]: I0312 21:16:06.948231 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3679eeb-ec01-49e3-9049-faf3f0235ea0" containerName="kube-rbac-proxy-metric"
Mar 12 21:16:06.948318 master-0 kubenswrapper[31456]: E0312 21:16:06.948242 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3679eeb-ec01-49e3-9049-faf3f0235ea0" containerName="kube-rbac-proxy"
Mar 12 21:16:06.948318 master-0 kubenswrapper[31456]: I0312 21:16:06.948248 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3679eeb-ec01-49e3-9049-faf3f0235ea0" containerName="kube-rbac-proxy"
Mar 12 21:16:06.948976 master-0 kubenswrapper[31456]: I0312 21:16:06.948430 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3679eeb-ec01-49e3-9049-faf3f0235ea0" containerName="prom-label-proxy"
Mar 12 21:16:06.948976 master-0 kubenswrapper[31456]: I0312 21:16:06.948471 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3679eeb-ec01-49e3-9049-faf3f0235ea0" containerName="config-reloader"
Mar 12 21:16:06.948976 master-0 kubenswrapper[31456]: I0312 21:16:06.948488 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3679eeb-ec01-49e3-9049-faf3f0235ea0" containerName="kube-rbac-proxy-web"
Mar 12 21:16:06.948976 master-0 kubenswrapper[31456]: I0312 21:16:06.948503 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3679eeb-ec01-49e3-9049-faf3f0235ea0" containerName="kube-rbac-proxy"
Mar 12 21:16:06.948976 master-0 kubenswrapper[31456]: I0312 21:16:06.948537 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3679eeb-ec01-49e3-9049-faf3f0235ea0" containerName="alertmanager"
Mar 12 21:16:06.948976 master-0 kubenswrapper[31456]: I0312 21:16:06.948549 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3679eeb-ec01-49e3-9049-faf3f0235ea0" containerName="kube-rbac-proxy-metric"
Mar 12 21:16:06.950894 master-0 kubenswrapper[31456]: I0312 21:16:06.950854 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:16:06.954103 master-0 kubenswrapper[31456]: I0312 21:16:06.954042 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Mar 12 21:16:06.954318 master-0 kubenswrapper[31456]: I0312 21:16:06.954276 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric"
Mar 12 21:16:06.954409 master-0 kubenswrapper[31456]: I0312 21:16:06.954351 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Mar 12 21:16:06.954623 master-0 kubenswrapper[31456]: I0312 21:16:06.954580 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Mar 12 21:16:06.954781 master-0 kubenswrapper[31456]: I0312 21:16:06.954055 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Mar 12 21:16:06.955357 master-0 kubenswrapper[31456]: I0312 21:16:06.955231 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Mar 12 21:16:06.962610 master-0 kubenswrapper[31456]: I0312 21:16:06.962542 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Mar 12 21:16:06.966384 master-0 kubenswrapper[31456]: I0312 21:16:06.966321 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Mar 12 21:16:06.967676 master-0 kubenswrapper[31456]: I0312 21:16:06.967616 31456 scope.go:117] "RemoveContainer" containerID="847509df23dc5f0cd65487a561c834039e5719dbd9aadb73ca1712a834ccf8ce"
Mar 12 21:16:06.973380 master-0 kubenswrapper[31456]: I0312 21:16:06.973317 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/45bada68-53fe-4807-b923-e12f2a471870-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"45bada68-53fe-4807-b923-e12f2a471870\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:16:06.973380 master-0 kubenswrapper[31456]: I0312 21:16:06.973372 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/45bada68-53fe-4807-b923-e12f2a471870-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"45bada68-53fe-4807-b923-e12f2a471870\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:16:06.973598 master-0 kubenswrapper[31456]: I0312 21:16:06.973431 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/45bada68-53fe-4807-b923-e12f2a471870-config-out\") pod \"alertmanager-main-0\" (UID: \"45bada68-53fe-4807-b923-e12f2a471870\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:16:06.973598 master-0 kubenswrapper[31456]: I0312 21:16:06.973469 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/45bada68-53fe-4807-b923-e12f2a471870-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"45bada68-53fe-4807-b923-e12f2a471870\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:16:06.973598 master-0 kubenswrapper[31456]: I0312 21:16:06.973499 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/45bada68-53fe-4807-b923-e12f2a471870-config-volume\") pod \"alertmanager-main-0\" (UID: \"45bada68-53fe-4807-b923-e12f2a471870\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:16:06.973598 master-0 kubenswrapper[31456]: I0312 21:16:06.973536 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sllw7\" (UniqueName: \"kubernetes.io/projected/45bada68-53fe-4807-b923-e12f2a471870-kube-api-access-sllw7\") pod \"alertmanager-main-0\" (UID: \"45bada68-53fe-4807-b923-e12f2a471870\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:16:06.973598 master-0 kubenswrapper[31456]: I0312 21:16:06.973557 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/45bada68-53fe-4807-b923-e12f2a471870-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"45bada68-53fe-4807-b923-e12f2a471870\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:16:06.974143 master-0 kubenswrapper[31456]: I0312 21:16:06.973653 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/45bada68-53fe-4807-b923-e12f2a471870-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"45bada68-53fe-4807-b923-e12f2a471870\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:16:06.974143 master-0 kubenswrapper[31456]: I0312 21:16:06.973685 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/45bada68-53fe-4807-b923-e12f2a471870-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"45bada68-53fe-4807-b923-e12f2a471870\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:16:06.974143 master-0 kubenswrapper[31456]: I0312 21:16:06.973957 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/45bada68-53fe-4807-b923-e12f2a471870-tls-assets\") pod \"alertmanager-main-0\" (UID: \"45bada68-53fe-4807-b923-e12f2a471870\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:16:06.974143 master-0 kubenswrapper[31456]: I0312 21:16:06.974006 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45bada68-53fe-4807-b923-e12f2a471870-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"45bada68-53fe-4807-b923-e12f2a471870\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:16:06.974143 master-0 kubenswrapper[31456]: I0312 21:16:06.974047 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/45bada68-53fe-4807-b923-e12f2a471870-web-config\") pod \"alertmanager-main-0\" (UID: \"45bada68-53fe-4807-b923-e12f2a471870\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:16:06.991800 master-0 kubenswrapper[31456]: I0312 21:16:06.990442 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Mar 12 21:16:07.000956 master-0 kubenswrapper[31456]: I0312 21:16:07.000253 31456 scope.go:117] "RemoveContainer" containerID="a94f9e91adee74d6313ee6b5492bf9a1186acae682e549d2e594a4cf90cc1041"
Mar 12 21:16:07.075250 master-0 kubenswrapper[31456]: I0312 21:16:07.075177 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/45bada68-53fe-4807-b923-e12f2a471870-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"45bada68-53fe-4807-b923-e12f2a471870\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:16:07.075250 master-0 kubenswrapper[31456]: I0312 21:16:07.075251 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/45bada68-53fe-4807-b923-e12f2a471870-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"45bada68-53fe-4807-b923-e12f2a471870\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:16:07.075549 master-0 kubenswrapper[31456]: I0312 21:16:07.075300 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/45bada68-53fe-4807-b923-e12f2a471870-config-out\") pod \"alertmanager-main-0\" (UID: \"45bada68-53fe-4807-b923-e12f2a471870\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:16:07.075549 master-0 kubenswrapper[31456]: I0312 21:16:07.075335 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/45bada68-53fe-4807-b923-e12f2a471870-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"45bada68-53fe-4807-b923-e12f2a471870\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:16:07.075549 master-0 kubenswrapper[31456]: I0312 21:16:07.075361 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/45bada68-53fe-4807-b923-e12f2a471870-config-volume\") pod \"alertmanager-main-0\" (UID: \"45bada68-53fe-4807-b923-e12f2a471870\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:16:07.075549 master-0 kubenswrapper[31456]: I0312 21:16:07.075402 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sllw7\" (UniqueName: \"kubernetes.io/projected/45bada68-53fe-4807-b923-e12f2a471870-kube-api-access-sllw7\") pod \"alertmanager-main-0\" (UID: \"45bada68-53fe-4807-b923-e12f2a471870\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:16:07.075549 master-0 kubenswrapper[31456]: I0312 21:16:07.075422 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/45bada68-53fe-4807-b923-e12f2a471870-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"45bada68-53fe-4807-b923-e12f2a471870\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:16:07.075549 master-0 kubenswrapper[31456]: I0312 21:16:07.075451 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/45bada68-53fe-4807-b923-e12f2a471870-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"45bada68-53fe-4807-b923-e12f2a471870\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:16:07.075549 master-0 kubenswrapper[31456]: I0312 21:16:07.075473 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/45bada68-53fe-4807-b923-e12f2a471870-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"45bada68-53fe-4807-b923-e12f2a471870\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:16:07.075549 master-0 kubenswrapper[31456]: I0312 21:16:07.075501 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/45bada68-53fe-4807-b923-e12f2a471870-tls-assets\") pod \"alertmanager-main-0\" (UID: \"45bada68-53fe-4807-b923-e12f2a471870\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:16:07.075549 master-0 kubenswrapper[31456]: I0312 21:16:07.075523 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45bada68-53fe-4807-b923-e12f2a471870-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"45bada68-53fe-4807-b923-e12f2a471870\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:16:07.075549 master-0 kubenswrapper[31456]: I0312 21:16:07.075557 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/45bada68-53fe-4807-b923-e12f2a471870-web-config\") pod \"alertmanager-main-0\" (UID: \"45bada68-53fe-4807-b923-e12f2a471870\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:16:07.078052 master-0 kubenswrapper[31456]: I0312 21:16:07.076416 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/45bada68-53fe-4807-b923-e12f2a471870-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"45bada68-53fe-4807-b923-e12f2a471870\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:16:07.078052 master-0 kubenswrapper[31456]: I0312 21:16:07.077685 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/45bada68-53fe-4807-b923-e12f2a471870-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"45bada68-53fe-4807-b923-e12f2a471870\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:16:07.078641 master-0 kubenswrapper[31456]: I0312 21:16:07.078578 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45bada68-53fe-4807-b923-e12f2a471870-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"45bada68-53fe-4807-b923-e12f2a471870\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 12 21:16:07.081014 master-0 kubenswrapper[31456]: I0312 21:16:07.080432 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/45bada68-53fe-4807-b923-e12f2a471870-secret-alertmanager-kube-rbac-proxy-web\")
pod \"alertmanager-main-0\" (UID: \"45bada68-53fe-4807-b923-e12f2a471870\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 21:16:07.081014 master-0 kubenswrapper[31456]: I0312 21:16:07.080436 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/45bada68-53fe-4807-b923-e12f2a471870-config-out\") pod \"alertmanager-main-0\" (UID: \"45bada68-53fe-4807-b923-e12f2a471870\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 21:16:07.081014 master-0 kubenswrapper[31456]: I0312 21:16:07.080971 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/45bada68-53fe-4807-b923-e12f2a471870-tls-assets\") pod \"alertmanager-main-0\" (UID: \"45bada68-53fe-4807-b923-e12f2a471870\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 21:16:07.081370 master-0 kubenswrapper[31456]: I0312 21:16:07.081314 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/45bada68-53fe-4807-b923-e12f2a471870-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"45bada68-53fe-4807-b923-e12f2a471870\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 21:16:07.082655 master-0 kubenswrapper[31456]: I0312 21:16:07.082605 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/45bada68-53fe-4807-b923-e12f2a471870-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"45bada68-53fe-4807-b923-e12f2a471870\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 21:16:07.082655 master-0 kubenswrapper[31456]: I0312 21:16:07.082637 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/45bada68-53fe-4807-b923-e12f2a471870-web-config\") pod 
\"alertmanager-main-0\" (UID: \"45bada68-53fe-4807-b923-e12f2a471870\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 21:16:07.083501 master-0 kubenswrapper[31456]: I0312 21:16:07.083442 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/45bada68-53fe-4807-b923-e12f2a471870-config-volume\") pod \"alertmanager-main-0\" (UID: \"45bada68-53fe-4807-b923-e12f2a471870\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 21:16:07.088669 master-0 kubenswrapper[31456]: I0312 21:16:07.088607 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/45bada68-53fe-4807-b923-e12f2a471870-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"45bada68-53fe-4807-b923-e12f2a471870\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 21:16:07.096134 master-0 kubenswrapper[31456]: I0312 21:16:07.096083 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sllw7\" (UniqueName: \"kubernetes.io/projected/45bada68-53fe-4807-b923-e12f2a471870-kube-api-access-sllw7\") pod \"alertmanager-main-0\" (UID: \"45bada68-53fe-4807-b923-e12f2a471870\") " pod="openshift-monitoring/alertmanager-main-0" Mar 12 21:16:07.163127 master-0 kubenswrapper[31456]: I0312 21:16:07.161207 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6fff565898-x9jfv" Mar 12 21:16:07.163127 master-0 kubenswrapper[31456]: I0312 21:16:07.161297 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6fff565898-x9jfv" Mar 12 21:16:07.184912 master-0 kubenswrapper[31456]: I0312 21:16:07.184842 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3679eeb-ec01-49e3-9049-faf3f0235ea0" path="/var/lib/kubelet/pods/c3679eeb-ec01-49e3-9049-faf3f0235ea0/volumes" Mar 
12 21:16:07.186865 master-0 kubenswrapper[31456]: I0312 21:16:07.186793 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6fff565898-x9jfv" Mar 12 21:16:07.270502 master-0 kubenswrapper[31456]: I0312 21:16:07.270398 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 12 21:16:07.737377 master-0 kubenswrapper[31456]: I0312 21:16:07.737309 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 12 21:16:07.823381 master-0 kubenswrapper[31456]: I0312 21:16:07.823275 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"45bada68-53fe-4807-b923-e12f2a471870","Type":"ContainerStarted","Data":"be712557006c73e517d914414c1ad2425bce93a06102580bb7e9caf1a8cee32e"} Mar 12 21:16:07.830086 master-0 kubenswrapper[31456]: I0312 21:16:07.829544 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6fff565898-x9jfv" Mar 12 21:16:08.835944 master-0 kubenswrapper[31456]: I0312 21:16:08.835869 31456 generic.go:334] "Generic (PLEG): container finished" podID="45bada68-53fe-4807-b923-e12f2a471870" containerID="33f44899823105aafb4105dfe2f7ec6a46430aada9291628ca96fef37e488af9" exitCode=0 Mar 12 21:16:08.838224 master-0 kubenswrapper[31456]: I0312 21:16:08.838165 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"45bada68-53fe-4807-b923-e12f2a471870","Type":"ContainerDied","Data":"33f44899823105aafb4105dfe2f7ec6a46430aada9291628ca96fef37e488af9"} Mar 12 21:16:08.845843 master-0 kubenswrapper[31456]: I0312 21:16:08.842832 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-8c575f57b-cfn7b"] Mar 12 21:16:08.845843 master-0 kubenswrapper[31456]: I0312 21:16:08.843683 31456 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-console/console-8c575f57b-cfn7b" Mar 12 21:16:08.865178 master-0 kubenswrapper[31456]: I0312 21:16:08.865106 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-8c575f57b-cfn7b"] Mar 12 21:16:09.012247 master-0 kubenswrapper[31456]: I0312 21:16:09.012172 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e78ecfdd-d8f5-4164-8300-05df372d0c8c-service-ca\") pod \"console-8c575f57b-cfn7b\" (UID: \"e78ecfdd-d8f5-4164-8300-05df372d0c8c\") " pod="openshift-console/console-8c575f57b-cfn7b" Mar 12 21:16:09.012490 master-0 kubenswrapper[31456]: I0312 21:16:09.012284 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e78ecfdd-d8f5-4164-8300-05df372d0c8c-console-oauth-config\") pod \"console-8c575f57b-cfn7b\" (UID: \"e78ecfdd-d8f5-4164-8300-05df372d0c8c\") " pod="openshift-console/console-8c575f57b-cfn7b" Mar 12 21:16:09.012490 master-0 kubenswrapper[31456]: I0312 21:16:09.012445 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpfst\" (UniqueName: \"kubernetes.io/projected/e78ecfdd-d8f5-4164-8300-05df372d0c8c-kube-api-access-cpfst\") pod \"console-8c575f57b-cfn7b\" (UID: \"e78ecfdd-d8f5-4164-8300-05df372d0c8c\") " pod="openshift-console/console-8c575f57b-cfn7b" Mar 12 21:16:09.012591 master-0 kubenswrapper[31456]: I0312 21:16:09.012502 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e78ecfdd-d8f5-4164-8300-05df372d0c8c-console-config\") pod \"console-8c575f57b-cfn7b\" (UID: \"e78ecfdd-d8f5-4164-8300-05df372d0c8c\") " pod="openshift-console/console-8c575f57b-cfn7b" Mar 12 21:16:09.012591 master-0 
kubenswrapper[31456]: I0312 21:16:09.012554 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e78ecfdd-d8f5-4164-8300-05df372d0c8c-oauth-serving-cert\") pod \"console-8c575f57b-cfn7b\" (UID: \"e78ecfdd-d8f5-4164-8300-05df372d0c8c\") " pod="openshift-console/console-8c575f57b-cfn7b" Mar 12 21:16:09.012783 master-0 kubenswrapper[31456]: I0312 21:16:09.012680 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e78ecfdd-d8f5-4164-8300-05df372d0c8c-trusted-ca-bundle\") pod \"console-8c575f57b-cfn7b\" (UID: \"e78ecfdd-d8f5-4164-8300-05df372d0c8c\") " pod="openshift-console/console-8c575f57b-cfn7b" Mar 12 21:16:09.013959 master-0 kubenswrapper[31456]: I0312 21:16:09.013914 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e78ecfdd-d8f5-4164-8300-05df372d0c8c-console-serving-cert\") pod \"console-8c575f57b-cfn7b\" (UID: \"e78ecfdd-d8f5-4164-8300-05df372d0c8c\") " pod="openshift-console/console-8c575f57b-cfn7b" Mar 12 21:16:09.046393 master-0 kubenswrapper[31456]: I0312 21:16:09.046323 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 12 21:16:09.046837 master-0 kubenswrapper[31456]: I0312 21:16:09.046765 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" containerName="prometheus" containerID="cri-o://7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61" gracePeriod=600 Mar 12 21:16:09.046918 master-0 kubenswrapper[31456]: I0312 21:16:09.046857 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" 
podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" containerName="kube-rbac-proxy" containerID="cri-o://e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed" gracePeriod=600 Mar 12 21:16:09.046997 master-0 kubenswrapper[31456]: I0312 21:16:09.046895 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" containerName="thanos-sidecar" containerID="cri-o://bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d" gracePeriod=600 Mar 12 21:16:09.047053 master-0 kubenswrapper[31456]: I0312 21:16:09.047001 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" containerName="kube-rbac-proxy-web" containerID="cri-o://e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7" gracePeriod=600 Mar 12 21:16:09.047105 master-0 kubenswrapper[31456]: I0312 21:16:09.046936 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" containerName="config-reloader" containerID="cri-o://eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583" gracePeriod=600 Mar 12 21:16:09.047329 master-0 kubenswrapper[31456]: I0312 21:16:09.047204 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" containerName="kube-rbac-proxy-thanos" containerID="cri-o://4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210" gracePeriod=600 Mar 12 21:16:09.116095 master-0 kubenswrapper[31456]: I0312 21:16:09.115379 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e78ecfdd-d8f5-4164-8300-05df372d0c8c-oauth-serving-cert\") pod 
\"console-8c575f57b-cfn7b\" (UID: \"e78ecfdd-d8f5-4164-8300-05df372d0c8c\") " pod="openshift-console/console-8c575f57b-cfn7b" Mar 12 21:16:09.116095 master-0 kubenswrapper[31456]: I0312 21:16:09.115449 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e78ecfdd-d8f5-4164-8300-05df372d0c8c-trusted-ca-bundle\") pod \"console-8c575f57b-cfn7b\" (UID: \"e78ecfdd-d8f5-4164-8300-05df372d0c8c\") " pod="openshift-console/console-8c575f57b-cfn7b" Mar 12 21:16:09.116095 master-0 kubenswrapper[31456]: I0312 21:16:09.115484 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e78ecfdd-d8f5-4164-8300-05df372d0c8c-console-serving-cert\") pod \"console-8c575f57b-cfn7b\" (UID: \"e78ecfdd-d8f5-4164-8300-05df372d0c8c\") " pod="openshift-console/console-8c575f57b-cfn7b" Mar 12 21:16:09.116095 master-0 kubenswrapper[31456]: I0312 21:16:09.115515 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e78ecfdd-d8f5-4164-8300-05df372d0c8c-service-ca\") pod \"console-8c575f57b-cfn7b\" (UID: \"e78ecfdd-d8f5-4164-8300-05df372d0c8c\") " pod="openshift-console/console-8c575f57b-cfn7b" Mar 12 21:16:09.116095 master-0 kubenswrapper[31456]: I0312 21:16:09.115548 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e78ecfdd-d8f5-4164-8300-05df372d0c8c-console-oauth-config\") pod \"console-8c575f57b-cfn7b\" (UID: \"e78ecfdd-d8f5-4164-8300-05df372d0c8c\") " pod="openshift-console/console-8c575f57b-cfn7b" Mar 12 21:16:09.116095 master-0 kubenswrapper[31456]: I0312 21:16:09.115577 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpfst\" (UniqueName: 
\"kubernetes.io/projected/e78ecfdd-d8f5-4164-8300-05df372d0c8c-kube-api-access-cpfst\") pod \"console-8c575f57b-cfn7b\" (UID: \"e78ecfdd-d8f5-4164-8300-05df372d0c8c\") " pod="openshift-console/console-8c575f57b-cfn7b" Mar 12 21:16:09.116095 master-0 kubenswrapper[31456]: I0312 21:16:09.115600 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e78ecfdd-d8f5-4164-8300-05df372d0c8c-console-config\") pod \"console-8c575f57b-cfn7b\" (UID: \"e78ecfdd-d8f5-4164-8300-05df372d0c8c\") " pod="openshift-console/console-8c575f57b-cfn7b" Mar 12 21:16:09.116624 master-0 kubenswrapper[31456]: I0312 21:16:09.116506 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e78ecfdd-d8f5-4164-8300-05df372d0c8c-console-config\") pod \"console-8c575f57b-cfn7b\" (UID: \"e78ecfdd-d8f5-4164-8300-05df372d0c8c\") " pod="openshift-console/console-8c575f57b-cfn7b" Mar 12 21:16:09.119001 master-0 kubenswrapper[31456]: I0312 21:16:09.117299 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e78ecfdd-d8f5-4164-8300-05df372d0c8c-service-ca\") pod \"console-8c575f57b-cfn7b\" (UID: \"e78ecfdd-d8f5-4164-8300-05df372d0c8c\") " pod="openshift-console/console-8c575f57b-cfn7b" Mar 12 21:16:09.119001 master-0 kubenswrapper[31456]: I0312 21:16:09.117979 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e78ecfdd-d8f5-4164-8300-05df372d0c8c-oauth-serving-cert\") pod \"console-8c575f57b-cfn7b\" (UID: \"e78ecfdd-d8f5-4164-8300-05df372d0c8c\") " pod="openshift-console/console-8c575f57b-cfn7b" Mar 12 21:16:09.119001 master-0 kubenswrapper[31456]: I0312 21:16:09.118852 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/e78ecfdd-d8f5-4164-8300-05df372d0c8c-trusted-ca-bundle\") pod \"console-8c575f57b-cfn7b\" (UID: \"e78ecfdd-d8f5-4164-8300-05df372d0c8c\") " pod="openshift-console/console-8c575f57b-cfn7b" Mar 12 21:16:09.130690 master-0 kubenswrapper[31456]: I0312 21:16:09.130356 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e78ecfdd-d8f5-4164-8300-05df372d0c8c-console-oauth-config\") pod \"console-8c575f57b-cfn7b\" (UID: \"e78ecfdd-d8f5-4164-8300-05df372d0c8c\") " pod="openshift-console/console-8c575f57b-cfn7b" Mar 12 21:16:09.140834 master-0 kubenswrapper[31456]: I0312 21:16:09.137632 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e78ecfdd-d8f5-4164-8300-05df372d0c8c-console-serving-cert\") pod \"console-8c575f57b-cfn7b\" (UID: \"e78ecfdd-d8f5-4164-8300-05df372d0c8c\") " pod="openshift-console/console-8c575f57b-cfn7b" Mar 12 21:16:09.158834 master-0 kubenswrapper[31456]: I0312 21:16:09.154863 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpfst\" (UniqueName: \"kubernetes.io/projected/e78ecfdd-d8f5-4164-8300-05df372d0c8c-kube-api-access-cpfst\") pod \"console-8c575f57b-cfn7b\" (UID: \"e78ecfdd-d8f5-4164-8300-05df372d0c8c\") " pod="openshift-console/console-8c575f57b-cfn7b" Mar 12 21:16:09.290972 master-0 kubenswrapper[31456]: I0312 21:16:09.290852 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-8c575f57b-cfn7b" Mar 12 21:16:09.774855 master-0 kubenswrapper[31456]: I0312 21:16:09.774826 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:09.786029 master-0 kubenswrapper[31456]: I0312 21:16:09.785958 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-8c575f57b-cfn7b"] Mar 12 21:16:09.857363 master-0 kubenswrapper[31456]: I0312 21:16:09.857300 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"45bada68-53fe-4807-b923-e12f2a471870","Type":"ContainerStarted","Data":"5fb1b9dca38f3fd149e91092ebe4d2fdfd506d0d40eed339ab1136d46b4bb698"} Mar 12 21:16:09.857363 master-0 kubenswrapper[31456]: I0312 21:16:09.857344 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"45bada68-53fe-4807-b923-e12f2a471870","Type":"ContainerStarted","Data":"49294a76bdaad4dcd9554b51abb6853a355933dbd2595d2e9282efde864763ea"} Mar 12 21:16:09.857363 master-0 kubenswrapper[31456]: I0312 21:16:09.857353 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"45bada68-53fe-4807-b923-e12f2a471870","Type":"ContainerStarted","Data":"fb97a64cee50a5328bbaec6a84c9e959007ca412f92ef8eed2b1d77c0b970c33"} Mar 12 21:16:09.857363 master-0 kubenswrapper[31456]: I0312 21:16:09.857363 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"45bada68-53fe-4807-b923-e12f2a471870","Type":"ContainerStarted","Data":"7f27ccf7895c177692d9286e681e25e10b81984bd60dd355f663dc19d2d3709f"} Mar 12 21:16:09.857363 master-0 kubenswrapper[31456]: I0312 21:16:09.857371 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"45bada68-53fe-4807-b923-e12f2a471870","Type":"ContainerStarted","Data":"779cb841c61ee9fbd5d903765320cfbfc69a0e4ada38896452a94d6dde10c64c"} Mar 12 21:16:09.861862 master-0 kubenswrapper[31456]: I0312 21:16:09.861822 31456 
generic.go:334] "Generic (PLEG): container finished" podID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" containerID="4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210" exitCode=0 Mar 12 21:16:09.861862 master-0 kubenswrapper[31456]: I0312 21:16:09.861858 31456 generic.go:334] "Generic (PLEG): container finished" podID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" containerID="e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed" exitCode=0 Mar 12 21:16:09.862100 master-0 kubenswrapper[31456]: I0312 21:16:09.861868 31456 generic.go:334] "Generic (PLEG): container finished" podID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" containerID="e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7" exitCode=0 Mar 12 21:16:09.862100 master-0 kubenswrapper[31456]: I0312 21:16:09.861873 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:09.862100 master-0 kubenswrapper[31456]: I0312 21:16:09.861889 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4d4f359e-9494-4501-9a3d-9be8ef5b46a3","Type":"ContainerDied","Data":"4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210"} Mar 12 21:16:09.862100 master-0 kubenswrapper[31456]: I0312 21:16:09.861918 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4d4f359e-9494-4501-9a3d-9be8ef5b46a3","Type":"ContainerDied","Data":"e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed"} Mar 12 21:16:09.862100 master-0 kubenswrapper[31456]: I0312 21:16:09.861929 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4d4f359e-9494-4501-9a3d-9be8ef5b46a3","Type":"ContainerDied","Data":"e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7"} Mar 12 21:16:09.862100 master-0 kubenswrapper[31456]: I0312 21:16:09.861967 31456 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4d4f359e-9494-4501-9a3d-9be8ef5b46a3","Type":"ContainerDied","Data":"bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d"} Mar 12 21:16:09.862100 master-0 kubenswrapper[31456]: I0312 21:16:09.861966 31456 scope.go:117] "RemoveContainer" containerID="4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210" Mar 12 21:16:09.862100 master-0 kubenswrapper[31456]: I0312 21:16:09.861876 31456 generic.go:334] "Generic (PLEG): container finished" podID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" containerID="bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d" exitCode=0 Mar 12 21:16:09.862100 master-0 kubenswrapper[31456]: I0312 21:16:09.862070 31456 generic.go:334] "Generic (PLEG): container finished" podID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" containerID="eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583" exitCode=0 Mar 12 21:16:09.862100 master-0 kubenswrapper[31456]: I0312 21:16:09.862080 31456 generic.go:334] "Generic (PLEG): container finished" podID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" containerID="7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61" exitCode=0 Mar 12 21:16:09.862100 master-0 kubenswrapper[31456]: I0312 21:16:09.862108 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4d4f359e-9494-4501-9a3d-9be8ef5b46a3","Type":"ContainerDied","Data":"eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583"} Mar 12 21:16:09.863136 master-0 kubenswrapper[31456]: I0312 21:16:09.862121 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4d4f359e-9494-4501-9a3d-9be8ef5b46a3","Type":"ContainerDied","Data":"7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61"} Mar 12 21:16:09.863136 master-0 kubenswrapper[31456]: I0312 21:16:09.862131 31456 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"4d4f359e-9494-4501-9a3d-9be8ef5b46a3","Type":"ContainerDied","Data":"805c5ce472b8ebbbff3055f2cefbf409beee3cad096e80242ec45b3f935c5084"} Mar 12 21:16:09.863261 master-0 kubenswrapper[31456]: I0312 21:16:09.863152 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8c575f57b-cfn7b" event={"ID":"e78ecfdd-d8f5-4164-8300-05df372d0c8c","Type":"ContainerStarted","Data":"52dff62e330ed7c65cfc4102f1d2afdcf62202c011b570428629a6c3e938b8f5"} Mar 12 21:16:09.910563 master-0 kubenswrapper[31456]: I0312 21:16:09.910523 31456 scope.go:117] "RemoveContainer" containerID="e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed" Mar 12 21:16:09.937712 master-0 kubenswrapper[31456]: I0312 21:16:09.937658 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " Mar 12 21:16:09.937712 master-0 kubenswrapper[31456]: I0312 21:16:09.937702 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " Mar 12 21:16:09.938063 master-0 kubenswrapper[31456]: I0312 21:16:09.937753 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-configmap-serving-certs-ca-bundle\") pod \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " Mar 12 
21:16:09.938063 master-0 kubenswrapper[31456]: I0312 21:16:09.937921 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-web-config\") pod \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " Mar 12 21:16:09.938063 master-0 kubenswrapper[31456]: I0312 21:16:09.938021 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rt78m\" (UniqueName: \"kubernetes.io/projected/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-kube-api-access-rt78m\") pod \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " Mar 12 21:16:09.938063 master-0 kubenswrapper[31456]: I0312 21:16:09.938058 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-prometheus-trusted-ca-bundle\") pod \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " Mar 12 21:16:09.938401 master-0 kubenswrapper[31456]: I0312 21:16:09.938087 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-secret-grpc-tls\") pod \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") " Mar 12 21:16:09.938401 master-0 kubenswrapper[31456]: I0312 21:16:09.938267 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-configmap-serving-certs-ca-bundle" (OuterVolumeSpecName: "configmap-serving-certs-ca-bundle") pod "4d4f359e-9494-4501-9a3d-9be8ef5b46a3" (UID: "4d4f359e-9494-4501-9a3d-9be8ef5b46a3"). InnerVolumeSpecName "configmap-serving-certs-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:16:09.938532 master-0 kubenswrapper[31456]: I0312 21:16:09.938457 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-prometheus-k8s-db\") pod \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") "
Mar 12 21:16:09.938600 master-0 kubenswrapper[31456]: I0312 21:16:09.938526 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-prometheus-trusted-ca-bundle" (OuterVolumeSpecName: "prometheus-trusted-ca-bundle") pod "4d4f359e-9494-4501-9a3d-9be8ef5b46a3" (UID: "4d4f359e-9494-4501-9a3d-9be8ef5b46a3"). InnerVolumeSpecName "prometheus-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:16:09.938600 master-0 kubenswrapper[31456]: I0312 21:16:09.938553 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-config-out\") pod \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") "
Mar 12 21:16:09.938600 master-0 kubenswrapper[31456]: I0312 21:16:09.938578 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-tls-assets\") pod \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") "
Mar 12 21:16:09.938781 master-0 kubenswrapper[31456]: I0312 21:16:09.938617 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-secret-kube-rbac-proxy\") pod \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") "
Mar 12 21:16:09.938781 master-0 kubenswrapper[31456]: I0312 21:16:09.938650 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-secret-metrics-client-certs\") pod \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") "
Mar 12 21:16:09.938781 master-0 kubenswrapper[31456]: I0312 21:16:09.938672 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-configmap-metrics-client-ca\") pod \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") "
Mar 12 21:16:09.938781 master-0 kubenswrapper[31456]: I0312 21:16:09.938702 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-prometheus-k8s-rulefiles-0\") pod \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") "
Mar 12 21:16:09.938781 master-0 kubenswrapper[31456]: I0312 21:16:09.938726 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-secret-prometheus-k8s-tls\") pod \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") "
Mar 12 21:16:09.938781 master-0 kubenswrapper[31456]: I0312 21:16:09.938772 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-configmap-kubelet-serving-ca-bundle\") pod \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") "
Mar 12 21:16:09.939171 master-0 kubenswrapper[31456]: I0312 21:16:09.938795 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-thanos-prometheus-http-client-file\") pod \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") "
Mar 12 21:16:09.939171 master-0 kubenswrapper[31456]: I0312 21:16:09.938844 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-config\") pod \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\" (UID: \"4d4f359e-9494-4501-9a3d-9be8ef5b46a3\") "
Mar 12 21:16:09.939318 master-0 kubenswrapper[31456]: I0312 21:16:09.939288 31456 reconciler_common.go:293] "Volume detached for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-configmap-serving-certs-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 21:16:09.939401 master-0 kubenswrapper[31456]: I0312 21:16:09.939340 31456 reconciler_common.go:293] "Volume detached for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-prometheus-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 21:16:09.942949 master-0 kubenswrapper[31456]: I0312 21:16:09.942870 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-secret-prometheus-k8s-thanos-sidecar-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-thanos-sidecar-tls") pod "4d4f359e-9494-4501-9a3d-9be8ef5b46a3" (UID: "4d4f359e-9494-4501-9a3d-9be8ef5b46a3"). InnerVolumeSpecName "secret-prometheus-k8s-thanos-sidecar-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:16:09.943076 master-0 kubenswrapper[31456]: I0312 21:16:09.942982 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-kube-api-access-rt78m" (OuterVolumeSpecName: "kube-api-access-rt78m") pod "4d4f359e-9494-4501-9a3d-9be8ef5b46a3" (UID: "4d4f359e-9494-4501-9a3d-9be8ef5b46a3"). InnerVolumeSpecName "kube-api-access-rt78m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 21:16:09.943318 master-0 kubenswrapper[31456]: I0312 21:16:09.943285 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-configmap-metrics-client-ca" (OuterVolumeSpecName: "configmap-metrics-client-ca") pod "4d4f359e-9494-4501-9a3d-9be8ef5b46a3" (UID: "4d4f359e-9494-4501-9a3d-9be8ef5b46a3"). InnerVolumeSpecName "configmap-metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:16:09.943318 master-0 kubenswrapper[31456]: I0312 21:16:09.943283 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "4d4f359e-9494-4501-9a3d-9be8ef5b46a3" (UID: "4d4f359e-9494-4501-9a3d-9be8ef5b46a3"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:16:09.943561 master-0 kubenswrapper[31456]: I0312 21:16:09.943509 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-prometheus-k8s-rulefiles-0" (OuterVolumeSpecName: "prometheus-k8s-rulefiles-0") pod "4d4f359e-9494-4501-9a3d-9be8ef5b46a3" (UID: "4d4f359e-9494-4501-9a3d-9be8ef5b46a3"). InnerVolumeSpecName "prometheus-k8s-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:16:09.943872 master-0 kubenswrapper[31456]: I0312 21:16:09.943784 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-prometheus-k8s-db" (OuterVolumeSpecName: "prometheus-k8s-db") pod "4d4f359e-9494-4501-9a3d-9be8ef5b46a3" (UID: "4d4f359e-9494-4501-9a3d-9be8ef5b46a3"). InnerVolumeSpecName "prometheus-k8s-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 21:16:09.943961 master-0 kubenswrapper[31456]: I0312 21:16:09.943906 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "4d4f359e-9494-4501-9a3d-9be8ef5b46a3" (UID: "4d4f359e-9494-4501-9a3d-9be8ef5b46a3"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:16:09.944364 master-0 kubenswrapper[31456]: I0312 21:16:09.944330 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-secret-grpc-tls" (OuterVolumeSpecName: "secret-grpc-tls") pod "4d4f359e-9494-4501-9a3d-9be8ef5b46a3" (UID: "4d4f359e-9494-4501-9a3d-9be8ef5b46a3"). InnerVolumeSpecName "secret-grpc-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:16:09.944708 master-0 kubenswrapper[31456]: I0312 21:16:09.944654 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-secret-prometheus-k8s-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-tls") pod "4d4f359e-9494-4501-9a3d-9be8ef5b46a3" (UID: "4d4f359e-9494-4501-9a3d-9be8ef5b46a3"). InnerVolumeSpecName "secret-prometheus-k8s-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:16:09.944860 master-0 kubenswrapper[31456]: I0312 21:16:09.944792 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-config" (OuterVolumeSpecName: "config") pod "4d4f359e-9494-4501-9a3d-9be8ef5b46a3" (UID: "4d4f359e-9494-4501-9a3d-9be8ef5b46a3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:16:09.945464 master-0 kubenswrapper[31456]: I0312 21:16:09.945417 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-secret-kube-rbac-proxy" (OuterVolumeSpecName: "secret-kube-rbac-proxy") pod "4d4f359e-9494-4501-9a3d-9be8ef5b46a3" (UID: "4d4f359e-9494-4501-9a3d-9be8ef5b46a3"). InnerVolumeSpecName "secret-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:16:09.945574 master-0 kubenswrapper[31456]: I0312 21:16:09.945543 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-config-out" (OuterVolumeSpecName: "config-out") pod "4d4f359e-9494-4501-9a3d-9be8ef5b46a3" (UID: "4d4f359e-9494-4501-9a3d-9be8ef5b46a3"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 21:16:09.946039 master-0 kubenswrapper[31456]: I0312 21:16:09.945993 31456 scope.go:117] "RemoveContainer" containerID="e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7"
Mar 12 21:16:09.947485 master-0 kubenswrapper[31456]: I0312 21:16:09.947381 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-secret-prometheus-k8s-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-prometheus-k8s-kube-rbac-proxy-web") pod "4d4f359e-9494-4501-9a3d-9be8ef5b46a3" (UID: "4d4f359e-9494-4501-9a3d-9be8ef5b46a3"). InnerVolumeSpecName "secret-prometheus-k8s-kube-rbac-proxy-web". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:16:09.956410 master-0 kubenswrapper[31456]: I0312 21:16:09.956359 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "4d4f359e-9494-4501-9a3d-9be8ef5b46a3" (UID: "4d4f359e-9494-4501-9a3d-9be8ef5b46a3"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:16:09.956553 master-0 kubenswrapper[31456]: I0312 21:16:09.956494 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "4d4f359e-9494-4501-9a3d-9be8ef5b46a3" (UID: "4d4f359e-9494-4501-9a3d-9be8ef5b46a3"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 21:16:09.992926 master-0 kubenswrapper[31456]: I0312 21:16:09.992853 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-web-config" (OuterVolumeSpecName: "web-config") pod "4d4f359e-9494-4501-9a3d-9be8ef5b46a3" (UID: "4d4f359e-9494-4501-9a3d-9be8ef5b46a3"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:16:10.027313 master-0 kubenswrapper[31456]: I0312 21:16:10.027245 31456 scope.go:117] "RemoveContainer" containerID="bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d"
Mar 12 21:16:10.041181 master-0 kubenswrapper[31456]: I0312 21:16:10.041096 31456 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-secret-prometheus-k8s-tls\") on node \"master-0\" DevicePath \"\""
Mar 12 21:16:10.041349 master-0 kubenswrapper[31456]: I0312 21:16:10.041182 31456 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 21:16:10.041349 master-0 kubenswrapper[31456]: I0312 21:16:10.041205 31456 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-thanos-prometheus-http-client-file\") on node \"master-0\" DevicePath \"\""
Mar 12 21:16:10.041349 master-0 kubenswrapper[31456]: I0312 21:16:10.041224 31456 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-config\") on node \"master-0\" DevicePath \"\""
Mar 12 21:16:10.041349 master-0 kubenswrapper[31456]: I0312 21:16:10.041242 31456 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-secret-prometheus-k8s-kube-rbac-proxy-web\") on node \"master-0\" DevicePath \"\""
Mar 12 21:16:10.041349 master-0 kubenswrapper[31456]: I0312 21:16:10.041265 31456 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-secret-prometheus-k8s-thanos-sidecar-tls\") on node \"master-0\" DevicePath \"\""
Mar 12 21:16:10.041349 master-0 kubenswrapper[31456]: I0312 21:16:10.041298 31456 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-web-config\") on node \"master-0\" DevicePath \"\""
Mar 12 21:16:10.041349 master-0 kubenswrapper[31456]: I0312 21:16:10.041317 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rt78m\" (UniqueName: \"kubernetes.io/projected/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-kube-api-access-rt78m\") on node \"master-0\" DevicePath \"\""
Mar 12 21:16:10.041349 master-0 kubenswrapper[31456]: I0312 21:16:10.041334 31456 reconciler_common.go:293] "Volume detached for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-secret-grpc-tls\") on node \"master-0\" DevicePath \"\""
Mar 12 21:16:10.041349 master-0 kubenswrapper[31456]: I0312 21:16:10.041351 31456 reconciler_common.go:293] "Volume detached for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-prometheus-k8s-db\") on node \"master-0\" DevicePath \"\""
Mar 12 21:16:10.042523 master-0 kubenswrapper[31456]: I0312 21:16:10.041367 31456 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-config-out\") on node \"master-0\" DevicePath \"\""
Mar 12 21:16:10.042523 master-0 kubenswrapper[31456]: I0312 21:16:10.041383 31456 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-tls-assets\") on node \"master-0\" DevicePath \"\""
Mar 12 21:16:10.042523 master-0 kubenswrapper[31456]: I0312 21:16:10.041400 31456 reconciler_common.go:293] "Volume detached for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-secret-kube-rbac-proxy\") on node \"master-0\" DevicePath \"\""
Mar 12 21:16:10.042523 master-0 kubenswrapper[31456]: I0312 21:16:10.041419 31456 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\""
Mar 12 21:16:10.042523 master-0 kubenswrapper[31456]: I0312 21:16:10.041437 31456 reconciler_common.go:293] "Volume detached for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-configmap-metrics-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 12 21:16:10.042523 master-0 kubenswrapper[31456]: I0312 21:16:10.041472 31456 reconciler_common.go:293] "Volume detached for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4d4f359e-9494-4501-9a3d-9be8ef5b46a3-prometheus-k8s-rulefiles-0\") on node \"master-0\" DevicePath \"\""
Mar 12 21:16:10.056703 master-0 kubenswrapper[31456]: I0312 21:16:10.056566 31456 scope.go:117] "RemoveContainer" containerID="eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583"
Mar 12 21:16:10.083041 master-0 kubenswrapper[31456]: I0312 21:16:10.082047 31456 scope.go:117] "RemoveContainer" containerID="7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61"
Mar 12 21:16:10.100477 master-0 kubenswrapper[31456]: I0312 21:16:10.100414 31456 scope.go:117] "RemoveContainer" containerID="b744004305210d6cf91b56c4695c6669956350074d5a599f4f9f046b761ae2d3"
Mar 12 21:16:10.121802 master-0 kubenswrapper[31456]: I0312 21:16:10.118206 31456 scope.go:117] "RemoveContainer" containerID="4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210"
Mar 12 21:16:10.121802 master-0 kubenswrapper[31456]: E0312 21:16:10.121675 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210\": container with ID starting with 4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210 not found: ID does not exist" containerID="4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210"
Mar 12 21:16:10.121802 master-0 kubenswrapper[31456]: I0312 21:16:10.121757 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210"} err="failed to get container status \"4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210\": rpc error: code = NotFound desc = could not find container \"4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210\": container with ID starting with 4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210 not found: ID does not exist"
Mar 12 21:16:10.121802 master-0 kubenswrapper[31456]: I0312 21:16:10.121796 31456 scope.go:117] "RemoveContainer" containerID="e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed"
Mar 12 21:16:10.122659 master-0 kubenswrapper[31456]: E0312 21:16:10.122599 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed\": container with ID starting with e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed not found: ID does not exist" containerID="e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed"
Mar 12 21:16:10.122659 master-0 kubenswrapper[31456]: I0312 21:16:10.122647 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed"} err="failed to get container status \"e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed\": rpc error: code = NotFound desc = could not find container \"e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed\": container with ID starting with e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed not found: ID does not exist"
Mar 12 21:16:10.122791 master-0 kubenswrapper[31456]: I0312 21:16:10.122673 31456 scope.go:117] "RemoveContainer" containerID="e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7"
Mar 12 21:16:10.123861 master-0 kubenswrapper[31456]: E0312 21:16:10.123126 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7\": container with ID starting with e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7 not found: ID does not exist" containerID="e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7"
Mar 12 21:16:10.123861 master-0 kubenswrapper[31456]: I0312 21:16:10.123156 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7"} err="failed to get container status \"e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7\": rpc error: code = NotFound desc = could not find container \"e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7\": container with ID starting with e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7 not found: ID does not exist"
Mar 12 21:16:10.123861 master-0 kubenswrapper[31456]: I0312 21:16:10.123172 31456 scope.go:117] "RemoveContainer" containerID="bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d"
Mar 12 21:16:10.123861 master-0 kubenswrapper[31456]: E0312 21:16:10.123426 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d\": container with ID starting with bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d not found: ID does not exist" containerID="bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d"
Mar 12 21:16:10.123861 master-0 kubenswrapper[31456]: I0312 21:16:10.123458 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d"} err="failed to get container status \"bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d\": rpc error: code = NotFound desc = could not find container \"bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d\": container with ID starting with bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d not found: ID does not exist"
Mar 12 21:16:10.123861 master-0 kubenswrapper[31456]: I0312 21:16:10.123474 31456 scope.go:117] "RemoveContainer" containerID="eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583"
Mar 12 21:16:10.124149 master-0 kubenswrapper[31456]: E0312 21:16:10.123941 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583\": container with ID starting with eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583 not found: ID does not exist" containerID="eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583"
Mar 12 21:16:10.124149 master-0 kubenswrapper[31456]: I0312 21:16:10.123968 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583"} err="failed to get container status \"eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583\": rpc error: code = NotFound desc = could not find container \"eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583\": container with ID starting with eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583 not found: ID does not exist"
Mar 12 21:16:10.124149 master-0 kubenswrapper[31456]: I0312 21:16:10.123983 31456 scope.go:117] "RemoveContainer" containerID="7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61"
Mar 12 21:16:10.124358 master-0 kubenswrapper[31456]: E0312 21:16:10.124334 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61\": container with ID starting with 7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61 not found: ID does not exist" containerID="7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61"
Mar 12 21:16:10.124358 master-0 kubenswrapper[31456]: I0312 21:16:10.124353 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61"} err="failed to get container status \"7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61\": rpc error: code = NotFound desc = could not find container \"7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61\": container with ID starting with 7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61 not found: ID does not exist"
Mar 12 21:16:10.124358 master-0 kubenswrapper[31456]: I0312 21:16:10.124365 31456 scope.go:117] "RemoveContainer" containerID="b744004305210d6cf91b56c4695c6669956350074d5a599f4f9f046b761ae2d3"
Mar 12 21:16:10.124947 master-0 kubenswrapper[31456]: E0312 21:16:10.124740 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b744004305210d6cf91b56c4695c6669956350074d5a599f4f9f046b761ae2d3\": container with ID starting with b744004305210d6cf91b56c4695c6669956350074d5a599f4f9f046b761ae2d3 not found: ID does not exist" containerID="b744004305210d6cf91b56c4695c6669956350074d5a599f4f9f046b761ae2d3"
Mar 12 21:16:10.124947 master-0 kubenswrapper[31456]: I0312 21:16:10.124791 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b744004305210d6cf91b56c4695c6669956350074d5a599f4f9f046b761ae2d3"} err="failed to get container status \"b744004305210d6cf91b56c4695c6669956350074d5a599f4f9f046b761ae2d3\": rpc error: code = NotFound desc = could not find container \"b744004305210d6cf91b56c4695c6669956350074d5a599f4f9f046b761ae2d3\": container with ID starting with b744004305210d6cf91b56c4695c6669956350074d5a599f4f9f046b761ae2d3 not found: ID does not exist"
Mar 12 21:16:10.124947 master-0 kubenswrapper[31456]: I0312 21:16:10.124852 31456 scope.go:117] "RemoveContainer" containerID="4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210"
Mar 12 21:16:10.125523 master-0 kubenswrapper[31456]: I0312 21:16:10.125463 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210"} err="failed to get container status \"4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210\": rpc error: code = NotFound desc = could not find container \"4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210\": container with ID starting with 4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210 not found: ID does not exist"
Mar 12 21:16:10.125523 master-0 kubenswrapper[31456]: I0312 21:16:10.125483 31456 scope.go:117] "RemoveContainer" containerID="e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed"
Mar 12 21:16:10.125886 master-0 kubenswrapper[31456]: I0312 21:16:10.125782 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed"} err="failed to get container status \"e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed\": rpc error: code = NotFound desc = could not find container \"e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed\": container with ID starting with e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed not found: ID does not exist"
Mar 12 21:16:10.125886 master-0 kubenswrapper[31456]: I0312 21:16:10.125825 31456 scope.go:117] "RemoveContainer" containerID="e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7"
Mar 12 21:16:10.126117 master-0 kubenswrapper[31456]: I0312 21:16:10.126035 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7"} err="failed to get container status \"e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7\": rpc error: code = NotFound desc = could not find container \"e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7\": container with ID starting with e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7 not found: ID does not exist"
Mar 12 21:16:10.126117 master-0 kubenswrapper[31456]: I0312 21:16:10.126051 31456 scope.go:117] "RemoveContainer" containerID="bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d"
Mar 12 21:16:10.126275 master-0 kubenswrapper[31456]: I0312 21:16:10.126232 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d"} err="failed to get container status \"bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d\": rpc error: code = NotFound desc = could not find container \"bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d\": container with ID starting with bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d not found: ID does not exist"
Mar 12 21:16:10.126275 master-0 kubenswrapper[31456]: I0312 21:16:10.126250 31456 scope.go:117] "RemoveContainer" containerID="eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583"
Mar 12 21:16:10.127053 master-0 kubenswrapper[31456]: I0312 21:16:10.126665 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583"} err="failed to get container status \"eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583\": rpc error: code = NotFound desc = could not find container \"eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583\": container with ID starting with eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583 not found: ID does not exist"
Mar 12 21:16:10.127053 master-0 kubenswrapper[31456]: I0312 21:16:10.126682 31456 scope.go:117] "RemoveContainer" containerID="7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61"
Mar 12 21:16:10.127053 master-0 kubenswrapper[31456]: I0312 21:16:10.126947 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61"} err="failed to get container status \"7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61\": rpc error: code = NotFound desc = could not find container \"7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61\": container with ID starting with 7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61 not found: ID does not exist"
Mar 12 21:16:10.127053 master-0 kubenswrapper[31456]: I0312 21:16:10.126973 31456 scope.go:117] "RemoveContainer" containerID="b744004305210d6cf91b56c4695c6669956350074d5a599f4f9f046b761ae2d3"
Mar 12 21:16:10.127536 master-0 kubenswrapper[31456]: I0312 21:16:10.127278 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b744004305210d6cf91b56c4695c6669956350074d5a599f4f9f046b761ae2d3"} err="failed to get container status \"b744004305210d6cf91b56c4695c6669956350074d5a599f4f9f046b761ae2d3\": rpc error: code = NotFound desc = could not find container \"b744004305210d6cf91b56c4695c6669956350074d5a599f4f9f046b761ae2d3\": container with ID starting with b744004305210d6cf91b56c4695c6669956350074d5a599f4f9f046b761ae2d3 not found: ID does not exist"
Mar 12 21:16:10.127536 master-0 kubenswrapper[31456]: I0312 21:16:10.127305 31456 scope.go:117] "RemoveContainer" containerID="4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210"
Mar 12 21:16:10.127725 master-0 kubenswrapper[31456]: I0312 21:16:10.127700 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210"} err="failed to get container status \"4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210\": rpc error: code = NotFound desc = could not find container \"4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210\": container with ID starting with 4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210 not found: ID does not exist"
Mar 12 21:16:10.127725 master-0 kubenswrapper[31456]: I0312 21:16:10.127720 31456 scope.go:117] "RemoveContainer" containerID="e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed"
Mar 12 21:16:10.128155 master-0 kubenswrapper[31456]: I0312 21:16:10.128109 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed"} err="failed to get container status \"e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed\": rpc error: code = NotFound desc = could not find container \"e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed\": container with ID starting with e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed not found: ID does not exist"
Mar 12 21:16:10.128155 master-0 kubenswrapper[31456]: I0312 21:16:10.128152 31456 scope.go:117] "RemoveContainer" containerID="e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7"
Mar 12 21:16:10.128501 master-0 kubenswrapper[31456]: I0312 21:16:10.128393 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7"} err="failed to get container status \"e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7\": rpc error: code = NotFound desc = could not find container \"e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7\": container with ID starting with e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7 not found: ID does not exist"
Mar 12 21:16:10.128501 master-0 kubenswrapper[31456]: I0312 21:16:10.128416 31456 scope.go:117] "RemoveContainer" containerID="bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d"
Mar 12 21:16:10.128863 master-0 kubenswrapper[31456]: I0312 21:16:10.128776 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d"} err="failed to get container status \"bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d\": rpc error: code = NotFound desc = could not find container \"bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d\": container with ID starting with bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d not found: ID does not exist"
Mar 12 21:16:10.128863 master-0 kubenswrapper[31456]: I0312 21:16:10.128858 31456 scope.go:117] "RemoveContainer" containerID="eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583"
Mar 12 21:16:10.129307 master-0 kubenswrapper[31456]: I0312 21:16:10.129270 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583"} err="failed to get container status \"eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583\": rpc error: code = NotFound desc = could not find container \"eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583\": container with ID starting with eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583 not found: ID does not exist"
Mar 12 21:16:10.129307 master-0 kubenswrapper[31456]: I0312 21:16:10.129299 31456 scope.go:117] "RemoveContainer" containerID="7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61"
Mar 12 21:16:10.129618 master-0 kubenswrapper[31456]: I0312 21:16:10.129577 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61"} err="failed to get container status \"7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61\": rpc error: code = NotFound desc = could not find container \"7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61\": container with ID starting with 7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61 not found: ID does not exist"
Mar 12 21:16:10.129618 master-0 kubenswrapper[31456]: I0312 21:16:10.129618 31456 scope.go:117] "RemoveContainer" containerID="b744004305210d6cf91b56c4695c6669956350074d5a599f4f9f046b761ae2d3"
Mar 12 21:16:10.130883 master-0 kubenswrapper[31456]: I0312 21:16:10.130848 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b744004305210d6cf91b56c4695c6669956350074d5a599f4f9f046b761ae2d3"} err="failed to get container status \"b744004305210d6cf91b56c4695c6669956350074d5a599f4f9f046b761ae2d3\": rpc error: code = NotFound desc = could not find container \"b744004305210d6cf91b56c4695c6669956350074d5a599f4f9f046b761ae2d3\": container with ID starting with b744004305210d6cf91b56c4695c6669956350074d5a599f4f9f046b761ae2d3 not found: ID does not exist"
Mar 12 21:16:10.130883 master-0 kubenswrapper[31456]: I0312 21:16:10.130880 31456 scope.go:117] "RemoveContainer" containerID="4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210"
Mar 12 21:16:10.131159 master-0 kubenswrapper[31456]: I0312 21:16:10.131131 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210"} err="failed to get container status \"4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210\": rpc error: code = NotFound desc = could not find container \"4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210\": container with ID starting with 4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210 not found: ID does not exist"
Mar 12 21:16:10.131232 master-0 kubenswrapper[31456]: I0312 21:16:10.131175 31456 scope.go:117] "RemoveContainer" containerID="e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed"
Mar 12 21:16:10.131494 master-0 kubenswrapper[31456]: I0312 21:16:10.131441 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed"} err="failed to get container status \"e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed\":
container with ID starting with e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed not found: ID does not exist" Mar 12 21:16:10.131494 master-0 kubenswrapper[31456]: I0312 21:16:10.131472 31456 scope.go:117] "RemoveContainer" containerID="e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7" Mar 12 21:16:10.132117 master-0 kubenswrapper[31456]: I0312 21:16:10.132082 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7"} err="failed to get container status \"e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7\": rpc error: code = NotFound desc = could not find container \"e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7\": container with ID starting with e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7 not found: ID does not exist" Mar 12 21:16:10.132117 master-0 kubenswrapper[31456]: I0312 21:16:10.132113 31456 scope.go:117] "RemoveContainer" containerID="bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d" Mar 12 21:16:10.133329 master-0 kubenswrapper[31456]: I0312 21:16:10.133260 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d"} err="failed to get container status \"bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d\": rpc error: code = NotFound desc = could not find container \"bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d\": container with ID starting with bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d not found: ID does not exist" Mar 12 21:16:10.133329 master-0 kubenswrapper[31456]: I0312 21:16:10.133312 31456 scope.go:117] "RemoveContainer" containerID="eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583" Mar 12 21:16:10.134095 master-0 kubenswrapper[31456]: I0312 21:16:10.134049 
31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583"} err="failed to get container status \"eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583\": rpc error: code = NotFound desc = could not find container \"eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583\": container with ID starting with eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583 not found: ID does not exist" Mar 12 21:16:10.134095 master-0 kubenswrapper[31456]: I0312 21:16:10.134087 31456 scope.go:117] "RemoveContainer" containerID="7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61" Mar 12 21:16:10.134385 master-0 kubenswrapper[31456]: I0312 21:16:10.134346 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61"} err="failed to get container status \"7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61\": rpc error: code = NotFound desc = could not find container \"7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61\": container with ID starting with 7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61 not found: ID does not exist" Mar 12 21:16:10.134385 master-0 kubenswrapper[31456]: I0312 21:16:10.134376 31456 scope.go:117] "RemoveContainer" containerID="b744004305210d6cf91b56c4695c6669956350074d5a599f4f9f046b761ae2d3" Mar 12 21:16:10.134877 master-0 kubenswrapper[31456]: I0312 21:16:10.134840 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b744004305210d6cf91b56c4695c6669956350074d5a599f4f9f046b761ae2d3"} err="failed to get container status \"b744004305210d6cf91b56c4695c6669956350074d5a599f4f9f046b761ae2d3\": rpc error: code = NotFound desc = could not find container 
\"b744004305210d6cf91b56c4695c6669956350074d5a599f4f9f046b761ae2d3\": container with ID starting with b744004305210d6cf91b56c4695c6669956350074d5a599f4f9f046b761ae2d3 not found: ID does not exist" Mar 12 21:16:10.134877 master-0 kubenswrapper[31456]: I0312 21:16:10.134868 31456 scope.go:117] "RemoveContainer" containerID="4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210" Mar 12 21:16:10.135119 master-0 kubenswrapper[31456]: I0312 21:16:10.135090 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210"} err="failed to get container status \"4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210\": rpc error: code = NotFound desc = could not find container \"4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210\": container with ID starting with 4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210 not found: ID does not exist" Mar 12 21:16:10.135119 master-0 kubenswrapper[31456]: I0312 21:16:10.135115 31456 scope.go:117] "RemoveContainer" containerID="e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed" Mar 12 21:16:10.135661 master-0 kubenswrapper[31456]: I0312 21:16:10.135464 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed"} err="failed to get container status \"e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed\": rpc error: code = NotFound desc = could not find container \"e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed\": container with ID starting with e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed not found: ID does not exist" Mar 12 21:16:10.135661 master-0 kubenswrapper[31456]: I0312 21:16:10.135490 31456 scope.go:117] "RemoveContainer" containerID="e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7" Mar 12 
21:16:10.135771 master-0 kubenswrapper[31456]: I0312 21:16:10.135723 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7"} err="failed to get container status \"e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7\": rpc error: code = NotFound desc = could not find container \"e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7\": container with ID starting with e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7 not found: ID does not exist" Mar 12 21:16:10.135771 master-0 kubenswrapper[31456]: I0312 21:16:10.135741 31456 scope.go:117] "RemoveContainer" containerID="bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d" Mar 12 21:16:10.136156 master-0 kubenswrapper[31456]: I0312 21:16:10.136118 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d"} err="failed to get container status \"bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d\": rpc error: code = NotFound desc = could not find container \"bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d\": container with ID starting with bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d not found: ID does not exist" Mar 12 21:16:10.136156 master-0 kubenswrapper[31456]: I0312 21:16:10.136142 31456 scope.go:117] "RemoveContainer" containerID="eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583" Mar 12 21:16:10.136876 master-0 kubenswrapper[31456]: I0312 21:16:10.136347 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583"} err="failed to get container status \"eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583\": rpc error: code = NotFound desc = could not find 
container \"eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583\": container with ID starting with eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583 not found: ID does not exist" Mar 12 21:16:10.136876 master-0 kubenswrapper[31456]: I0312 21:16:10.136380 31456 scope.go:117] "RemoveContainer" containerID="7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61" Mar 12 21:16:10.136876 master-0 kubenswrapper[31456]: I0312 21:16:10.136615 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61"} err="failed to get container status \"7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61\": rpc error: code = NotFound desc = could not find container \"7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61\": container with ID starting with 7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61 not found: ID does not exist" Mar 12 21:16:10.136876 master-0 kubenswrapper[31456]: I0312 21:16:10.136633 31456 scope.go:117] "RemoveContainer" containerID="b744004305210d6cf91b56c4695c6669956350074d5a599f4f9f046b761ae2d3" Mar 12 21:16:10.137267 master-0 kubenswrapper[31456]: I0312 21:16:10.136990 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b744004305210d6cf91b56c4695c6669956350074d5a599f4f9f046b761ae2d3"} err="failed to get container status \"b744004305210d6cf91b56c4695c6669956350074d5a599f4f9f046b761ae2d3\": rpc error: code = NotFound desc = could not find container \"b744004305210d6cf91b56c4695c6669956350074d5a599f4f9f046b761ae2d3\": container with ID starting with b744004305210d6cf91b56c4695c6669956350074d5a599f4f9f046b761ae2d3 not found: ID does not exist" Mar 12 21:16:10.137267 master-0 kubenswrapper[31456]: I0312 21:16:10.137008 31456 scope.go:117] "RemoveContainer" containerID="4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210" 
Mar 12 21:16:10.137426 master-0 kubenswrapper[31456]: I0312 21:16:10.137268 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210"} err="failed to get container status \"4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210\": rpc error: code = NotFound desc = could not find container \"4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210\": container with ID starting with 4a8f8ed0b87167c3f7ee45a0645c5840ce66fd68251e9e795bc7f5d8e74d0210 not found: ID does not exist" Mar 12 21:16:10.137426 master-0 kubenswrapper[31456]: I0312 21:16:10.137292 31456 scope.go:117] "RemoveContainer" containerID="e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed" Mar 12 21:16:10.137595 master-0 kubenswrapper[31456]: I0312 21:16:10.137565 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed"} err="failed to get container status \"e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed\": rpc error: code = NotFound desc = could not find container \"e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed\": container with ID starting with e28d96b40b710a5b295c7043b9acd9fba14e34b155a9e58890007ad52660e7ed not found: ID does not exist" Mar 12 21:16:10.137595 master-0 kubenswrapper[31456]: I0312 21:16:10.137592 31456 scope.go:117] "RemoveContainer" containerID="e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7" Mar 12 21:16:10.137882 master-0 kubenswrapper[31456]: I0312 21:16:10.137835 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7"} err="failed to get container status \"e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7\": rpc error: code = NotFound desc = could not find 
container \"e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7\": container with ID starting with e197fb1aba2b4adce182ba442b1adb7210e4e9529e38e893db4834c429a0f4f7 not found: ID does not exist" Mar 12 21:16:10.137882 master-0 kubenswrapper[31456]: I0312 21:16:10.137861 31456 scope.go:117] "RemoveContainer" containerID="bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d" Mar 12 21:16:10.138206 master-0 kubenswrapper[31456]: I0312 21:16:10.138173 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d"} err="failed to get container status \"bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d\": rpc error: code = NotFound desc = could not find container \"bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d\": container with ID starting with bfd596886f14648f1a19598576514f5cf9dd478fb8f9db8a1e9d68be91bac84d not found: ID does not exist" Mar 12 21:16:10.138206 master-0 kubenswrapper[31456]: I0312 21:16:10.138194 31456 scope.go:117] "RemoveContainer" containerID="eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583" Mar 12 21:16:10.138534 master-0 kubenswrapper[31456]: I0312 21:16:10.138505 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583"} err="failed to get container status \"eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583\": rpc error: code = NotFound desc = could not find container \"eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583\": container with ID starting with eecef6e857214c00dc7fb151b77968cf20c8c1af50e25d3cd7fef7caa618a583 not found: ID does not exist" Mar 12 21:16:10.138534 master-0 kubenswrapper[31456]: I0312 21:16:10.138524 31456 scope.go:117] "RemoveContainer" containerID="7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61" 
Mar 12 21:16:10.138837 master-0 kubenswrapper[31456]: I0312 21:16:10.138793 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61"} err="failed to get container status \"7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61\": rpc error: code = NotFound desc = could not find container \"7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61\": container with ID starting with 7e3f5d0d83d84c4666f16fe6b1b1e620d294b66849ea8509ba685d50117aaa61 not found: ID does not exist" Mar 12 21:16:10.138892 master-0 kubenswrapper[31456]: I0312 21:16:10.138834 31456 scope.go:117] "RemoveContainer" containerID="b744004305210d6cf91b56c4695c6669956350074d5a599f4f9f046b761ae2d3" Mar 12 21:16:10.139202 master-0 kubenswrapper[31456]: I0312 21:16:10.139171 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b744004305210d6cf91b56c4695c6669956350074d5a599f4f9f046b761ae2d3"} err="failed to get container status \"b744004305210d6cf91b56c4695c6669956350074d5a599f4f9f046b761ae2d3\": rpc error: code = NotFound desc = could not find container \"b744004305210d6cf91b56c4695c6669956350074d5a599f4f9f046b761ae2d3\": container with ID starting with b744004305210d6cf91b56c4695c6669956350074d5a599f4f9f046b761ae2d3 not found: ID does not exist" Mar 12 21:16:10.199827 master-0 kubenswrapper[31456]: I0312 21:16:10.199728 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 12 21:16:10.215910 master-0 kubenswrapper[31456]: I0312 21:16:10.215354 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 12 21:16:10.238355 master-0 kubenswrapper[31456]: I0312 21:16:10.238289 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 12 21:16:10.238596 master-0 kubenswrapper[31456]: E0312 
21:16:10.238547 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" containerName="init-config-reloader" Mar 12 21:16:10.238596 master-0 kubenswrapper[31456]: I0312 21:16:10.238559 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" containerName="init-config-reloader" Mar 12 21:16:10.238596 master-0 kubenswrapper[31456]: E0312 21:16:10.238578 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" containerName="kube-rbac-proxy" Mar 12 21:16:10.238596 master-0 kubenswrapper[31456]: I0312 21:16:10.238584 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" containerName="kube-rbac-proxy" Mar 12 21:16:10.238596 master-0 kubenswrapper[31456]: E0312 21:16:10.238600 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" containerName="prometheus" Mar 12 21:16:10.238862 master-0 kubenswrapper[31456]: I0312 21:16:10.238606 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" containerName="prometheus" Mar 12 21:16:10.238862 master-0 kubenswrapper[31456]: E0312 21:16:10.238619 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" containerName="thanos-sidecar" Mar 12 21:16:10.238862 master-0 kubenswrapper[31456]: I0312 21:16:10.238625 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" containerName="thanos-sidecar" Mar 12 21:16:10.238862 master-0 kubenswrapper[31456]: E0312 21:16:10.238637 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" containerName="kube-rbac-proxy-web" Mar 12 21:16:10.238862 master-0 kubenswrapper[31456]: I0312 21:16:10.238643 31456 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" containerName="kube-rbac-proxy-web" Mar 12 21:16:10.238862 master-0 kubenswrapper[31456]: E0312 21:16:10.238660 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" containerName="kube-rbac-proxy-thanos" Mar 12 21:16:10.238862 master-0 kubenswrapper[31456]: I0312 21:16:10.238667 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" containerName="kube-rbac-proxy-thanos" Mar 12 21:16:10.238862 master-0 kubenswrapper[31456]: E0312 21:16:10.238678 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" containerName="config-reloader" Mar 12 21:16:10.238862 master-0 kubenswrapper[31456]: I0312 21:16:10.238683 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" containerName="config-reloader" Mar 12 21:16:10.238862 master-0 kubenswrapper[31456]: I0312 21:16:10.238800 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" containerName="kube-rbac-proxy-thanos" Mar 12 21:16:10.238862 master-0 kubenswrapper[31456]: I0312 21:16:10.238828 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" containerName="kube-rbac-proxy" Mar 12 21:16:10.238862 master-0 kubenswrapper[31456]: I0312 21:16:10.238846 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" containerName="thanos-sidecar" Mar 12 21:16:10.238862 master-0 kubenswrapper[31456]: I0312 21:16:10.238870 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" containerName="prometheus" Mar 12 21:16:10.239332 master-0 kubenswrapper[31456]: I0312 21:16:10.238886 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" 
containerName="config-reloader" Mar 12 21:16:10.239332 master-0 kubenswrapper[31456]: I0312 21:16:10.238896 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" containerName="kube-rbac-proxy-web" Mar 12 21:16:10.242832 master-0 kubenswrapper[31456]: I0312 21:16:10.240664 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.245300 master-0 kubenswrapper[31456]: I0312 21:16:10.245253 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Mar 12 21:16:10.245458 master-0 kubenswrapper[31456]: I0312 21:16:10.245295 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Mar 12 21:16:10.245556 master-0 kubenswrapper[31456]: I0312 21:16:10.245509 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Mar 12 21:16:10.245730 master-0 kubenswrapper[31456]: I0312 21:16:10.245526 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Mar 12 21:16:10.245730 master-0 kubenswrapper[31456]: I0312 21:16:10.245669 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Mar 12 21:16:10.245870 master-0 kubenswrapper[31456]: I0312 21:16:10.245785 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Mar 12 21:16:10.245927 master-0 kubenswrapper[31456]: I0312 21:16:10.245886 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Mar 12 21:16:10.246421 master-0 kubenswrapper[31456]: I0312 21:16:10.246334 31456 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Mar 12 21:16:10.246672 master-0 kubenswrapper[31456]: I0312 21:16:10.246618 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-fvjb30sfen171" Mar 12 21:16:10.246952 master-0 kubenswrapper[31456]: I0312 21:16:10.246894 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Mar 12 21:16:10.258468 master-0 kubenswrapper[31456]: I0312 21:16:10.258388 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Mar 12 21:16:10.260197 master-0 kubenswrapper[31456]: I0312 21:16:10.259369 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Mar 12 21:16:10.277017 master-0 kubenswrapper[31456]: I0312 21:16:10.269086 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 12 21:16:10.345213 master-0 kubenswrapper[31456]: I0312 21:16:10.345150 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/482b0d39-54bd-4f16-8e09-4adebabbcddf-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.345213 master-0 kubenswrapper[31456]: I0312 21:16:10.345202 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/482b0d39-54bd-4f16-8e09-4adebabbcddf-config-out\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.345474 master-0 kubenswrapper[31456]: I0312 21:16:10.345263 31456 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/482b0d39-54bd-4f16-8e09-4adebabbcddf-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.345474 master-0 kubenswrapper[31456]: I0312 21:16:10.345284 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/482b0d39-54bd-4f16-8e09-4adebabbcddf-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.345474 master-0 kubenswrapper[31456]: I0312 21:16:10.345336 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/482b0d39-54bd-4f16-8e09-4adebabbcddf-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.345474 master-0 kubenswrapper[31456]: I0312 21:16:10.345354 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/482b0d39-54bd-4f16-8e09-4adebabbcddf-web-config\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.345474 master-0 kubenswrapper[31456]: I0312 21:16:10.345392 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/482b0d39-54bd-4f16-8e09-4adebabbcddf-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " 
pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.345474 master-0 kubenswrapper[31456]: I0312 21:16:10.345417 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/482b0d39-54bd-4f16-8e09-4adebabbcddf-config\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.345474 master-0 kubenswrapper[31456]: I0312 21:16:10.345467 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/482b0d39-54bd-4f16-8e09-4adebabbcddf-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.345474 master-0 kubenswrapper[31456]: I0312 21:16:10.345483 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/482b0d39-54bd-4f16-8e09-4adebabbcddf-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.345851 master-0 kubenswrapper[31456]: I0312 21:16:10.345503 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/482b0d39-54bd-4f16-8e09-4adebabbcddf-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.345851 master-0 kubenswrapper[31456]: I0312 21:16:10.345520 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2l79\" (UniqueName: 
\"kubernetes.io/projected/482b0d39-54bd-4f16-8e09-4adebabbcddf-kube-api-access-l2l79\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.345851 master-0 kubenswrapper[31456]: I0312 21:16:10.345643 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/482b0d39-54bd-4f16-8e09-4adebabbcddf-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.345851 master-0 kubenswrapper[31456]: I0312 21:16:10.345726 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/482b0d39-54bd-4f16-8e09-4adebabbcddf-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.345851 master-0 kubenswrapper[31456]: I0312 21:16:10.345779 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/482b0d39-54bd-4f16-8e09-4adebabbcddf-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.345851 master-0 kubenswrapper[31456]: I0312 21:16:10.345852 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/482b0d39-54bd-4f16-8e09-4adebabbcddf-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.346139 master-0 kubenswrapper[31456]: I0312 21:16:10.345872 
31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/482b0d39-54bd-4f16-8e09-4adebabbcddf-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.346139 master-0 kubenswrapper[31456]: I0312 21:16:10.345949 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/482b0d39-54bd-4f16-8e09-4adebabbcddf-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.447907 master-0 kubenswrapper[31456]: I0312 21:16:10.447778 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/482b0d39-54bd-4f16-8e09-4adebabbcddf-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.448112 master-0 kubenswrapper[31456]: I0312 21:16:10.448034 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/482b0d39-54bd-4f16-8e09-4adebabbcddf-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.448207 master-0 kubenswrapper[31456]: I0312 21:16:10.448165 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/482b0d39-54bd-4f16-8e09-4adebabbcddf-web-config\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " 
pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.448265 master-0 kubenswrapper[31456]: I0312 21:16:10.448246 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/482b0d39-54bd-4f16-8e09-4adebabbcddf-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.448377 master-0 kubenswrapper[31456]: I0312 21:16:10.448339 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/482b0d39-54bd-4f16-8e09-4adebabbcddf-config\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.448544 master-0 kubenswrapper[31456]: I0312 21:16:10.448503 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/482b0d39-54bd-4f16-8e09-4adebabbcddf-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.448601 master-0 kubenswrapper[31456]: I0312 21:16:10.448546 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/482b0d39-54bd-4f16-8e09-4adebabbcddf-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.448942 master-0 kubenswrapper[31456]: I0312 21:16:10.448868 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/482b0d39-54bd-4f16-8e09-4adebabbcddf-prometheus-k8s-rulefiles-0\") pod 
\"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.448942 master-0 kubenswrapper[31456]: I0312 21:16:10.448891 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/482b0d39-54bd-4f16-8e09-4adebabbcddf-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.448942 master-0 kubenswrapper[31456]: I0312 21:16:10.448937 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2l79\" (UniqueName: \"kubernetes.io/projected/482b0d39-54bd-4f16-8e09-4adebabbcddf-kube-api-access-l2l79\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.449202 master-0 kubenswrapper[31456]: I0312 21:16:10.448945 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/482b0d39-54bd-4f16-8e09-4adebabbcddf-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.449202 master-0 kubenswrapper[31456]: I0312 21:16:10.448970 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/482b0d39-54bd-4f16-8e09-4adebabbcddf-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.449202 master-0 kubenswrapper[31456]: I0312 21:16:10.449105 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/482b0d39-54bd-4f16-8e09-4adebabbcddf-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.449202 master-0 kubenswrapper[31456]: I0312 21:16:10.449146 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/482b0d39-54bd-4f16-8e09-4adebabbcddf-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.449403 master-0 kubenswrapper[31456]: I0312 21:16:10.449229 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/482b0d39-54bd-4f16-8e09-4adebabbcddf-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.449403 master-0 kubenswrapper[31456]: I0312 21:16:10.449260 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/482b0d39-54bd-4f16-8e09-4adebabbcddf-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.449403 master-0 kubenswrapper[31456]: I0312 21:16:10.449329 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/482b0d39-54bd-4f16-8e09-4adebabbcddf-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.449403 master-0 kubenswrapper[31456]: I0312 21:16:10.449360 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/482b0d39-54bd-4f16-8e09-4adebabbcddf-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.449583 master-0 kubenswrapper[31456]: I0312 21:16:10.449421 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/482b0d39-54bd-4f16-8e09-4adebabbcddf-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.449583 master-0 kubenswrapper[31456]: I0312 21:16:10.449472 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/482b0d39-54bd-4f16-8e09-4adebabbcddf-config-out\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.449583 master-0 kubenswrapper[31456]: I0312 21:16:10.449505 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/482b0d39-54bd-4f16-8e09-4adebabbcddf-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.450111 master-0 kubenswrapper[31456]: I0312 21:16:10.450075 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/482b0d39-54bd-4f16-8e09-4adebabbcddf-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.451385 master-0 kubenswrapper[31456]: I0312 21:16:10.451348 31456 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/482b0d39-54bd-4f16-8e09-4adebabbcddf-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.451943 master-0 kubenswrapper[31456]: I0312 21:16:10.451914 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/482b0d39-54bd-4f16-8e09-4adebabbcddf-web-config\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.452166 master-0 kubenswrapper[31456]: I0312 21:16:10.452128 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/482b0d39-54bd-4f16-8e09-4adebabbcddf-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.452229 master-0 kubenswrapper[31456]: I0312 21:16:10.452184 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/482b0d39-54bd-4f16-8e09-4adebabbcddf-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.452537 master-0 kubenswrapper[31456]: I0312 21:16:10.452503 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/482b0d39-54bd-4f16-8e09-4adebabbcddf-config\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.453467 master-0 kubenswrapper[31456]: I0312 21:16:10.453435 31456 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/482b0d39-54bd-4f16-8e09-4adebabbcddf-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.453973 master-0 kubenswrapper[31456]: I0312 21:16:10.453927 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/482b0d39-54bd-4f16-8e09-4adebabbcddf-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.454708 master-0 kubenswrapper[31456]: I0312 21:16:10.454488 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/482b0d39-54bd-4f16-8e09-4adebabbcddf-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.454989 master-0 kubenswrapper[31456]: I0312 21:16:10.454956 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/482b0d39-54bd-4f16-8e09-4adebabbcddf-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.455331 master-0 kubenswrapper[31456]: I0312 21:16:10.455294 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/482b0d39-54bd-4f16-8e09-4adebabbcddf-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.455331 master-0 kubenswrapper[31456]: I0312 21:16:10.455319 31456 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/482b0d39-54bd-4f16-8e09-4adebabbcddf-config-out\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.457492 master-0 kubenswrapper[31456]: I0312 21:16:10.457457 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/482b0d39-54bd-4f16-8e09-4adebabbcddf-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.460580 master-0 kubenswrapper[31456]: I0312 21:16:10.460514 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/482b0d39-54bd-4f16-8e09-4adebabbcddf-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.470165 master-0 kubenswrapper[31456]: I0312 21:16:10.470111 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2l79\" (UniqueName: \"kubernetes.io/projected/482b0d39-54bd-4f16-8e09-4adebabbcddf-kube-api-access-l2l79\") pod \"prometheus-k8s-0\" (UID: \"482b0d39-54bd-4f16-8e09-4adebabbcddf\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.580835 master-0 kubenswrapper[31456]: I0312 21:16:10.580734 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:10.877582 master-0 kubenswrapper[31456]: I0312 21:16:10.876557 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"45bada68-53fe-4807-b923-e12f2a471870","Type":"ContainerStarted","Data":"fea605155c0603a9d7ef2efd3f7cbaf0c5ba1676dc84b13264732d5fde080746"} Mar 12 21:16:10.881645 master-0 kubenswrapper[31456]: I0312 21:16:10.881519 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8c575f57b-cfn7b" event={"ID":"e78ecfdd-d8f5-4164-8300-05df372d0c8c","Type":"ContainerStarted","Data":"f41e4f3ad033a8fd782c8f2bdd66cfe8536a942f5749c9997d2152240f996f69"} Mar 12 21:16:10.932068 master-0 kubenswrapper[31456]: I0312 21:16:10.931951 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=4.931927821 podStartE2EDuration="4.931927821s" podCreationTimestamp="2026-03-12 21:16:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:16:10.923619358 +0000 UTC m=+431.998224746" watchObservedRunningTime="2026-03-12 21:16:10.931927821 +0000 UTC m=+432.006533159" Mar 12 21:16:10.982003 master-0 kubenswrapper[31456]: I0312 21:16:10.981857 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-8c575f57b-cfn7b" podStartSLOduration=2.981792168 podStartE2EDuration="2.981792168s" podCreationTimestamp="2026-03-12 21:16:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:16:10.968492783 +0000 UTC m=+432.043098161" watchObservedRunningTime="2026-03-12 21:16:10.981792168 +0000 UTC m=+432.056397536" Mar 12 21:16:11.061621 master-0 kubenswrapper[31456]: I0312 21:16:11.061424 31456 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 12 21:16:11.069068 master-0 kubenswrapper[31456]: W0312 21:16:11.067836 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod482b0d39_54bd_4f16_8e09_4adebabbcddf.slice/crio-3c0407e7b049cb4df94f913b6169020e11db8b70241702bc3bfb21369aa00f2e WatchSource:0}: Error finding container 3c0407e7b049cb4df94f913b6169020e11db8b70241702bc3bfb21369aa00f2e: Status 404 returned error can't find the container with id 3c0407e7b049cb4df94f913b6169020e11db8b70241702bc3bfb21369aa00f2e Mar 12 21:16:11.206752 master-0 kubenswrapper[31456]: I0312 21:16:11.206083 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d4f359e-9494-4501-9a3d-9be8ef5b46a3" path="/var/lib/kubelet/pods/4d4f359e-9494-4501-9a3d-9be8ef5b46a3/volumes" Mar 12 21:16:11.896799 master-0 kubenswrapper[31456]: I0312 21:16:11.896655 31456 generic.go:334] "Generic (PLEG): container finished" podID="482b0d39-54bd-4f16-8e09-4adebabbcddf" containerID="247c0f316b61876bbeb282a0f85b4b1ca0c57ba5fe7e2aed9aa6e9c33c48da27" exitCode=0 Mar 12 21:16:11.896799 master-0 kubenswrapper[31456]: I0312 21:16:11.896769 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"482b0d39-54bd-4f16-8e09-4adebabbcddf","Type":"ContainerDied","Data":"247c0f316b61876bbeb282a0f85b4b1ca0c57ba5fe7e2aed9aa6e9c33c48da27"} Mar 12 21:16:11.897723 master-0 kubenswrapper[31456]: I0312 21:16:11.896884 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"482b0d39-54bd-4f16-8e09-4adebabbcddf","Type":"ContainerStarted","Data":"3c0407e7b049cb4df94f913b6169020e11db8b70241702bc3bfb21369aa00f2e"} Mar 12 21:16:12.908579 master-0 kubenswrapper[31456]: I0312 21:16:12.908499 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"482b0d39-54bd-4f16-8e09-4adebabbcddf","Type":"ContainerStarted","Data":"13586293961df4c4611e271249b73e0e8df0fe1bf3f321fd8f6166feff61f4fb"} Mar 12 21:16:12.908579 master-0 kubenswrapper[31456]: I0312 21:16:12.908572 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"482b0d39-54bd-4f16-8e09-4adebabbcddf","Type":"ContainerStarted","Data":"6d13d442228515a4d5f5f46f46759189b7245798096a4bf0067919b787e91757"} Mar 12 21:16:12.908579 master-0 kubenswrapper[31456]: I0312 21:16:12.908583 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"482b0d39-54bd-4f16-8e09-4adebabbcddf","Type":"ContainerStarted","Data":"edc157f3870bb004ef2fc3e57c19701dc769b8a8cff591417a0d82b0a93c13b3"} Mar 12 21:16:12.909184 master-0 kubenswrapper[31456]: I0312 21:16:12.908593 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"482b0d39-54bd-4f16-8e09-4adebabbcddf","Type":"ContainerStarted","Data":"54a402cc5032e28be21da2f82b5e361a7cac9c4cb94183eff2843a483df003c1"} Mar 12 21:16:12.909184 master-0 kubenswrapper[31456]: I0312 21:16:12.908603 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"482b0d39-54bd-4f16-8e09-4adebabbcddf","Type":"ContainerStarted","Data":"4e253277eac8617e4abd4041026848b14b0580ca1df528d7bb26025cd4a1a1fe"} Mar 12 21:16:12.909184 master-0 kubenswrapper[31456]: I0312 21:16:12.908614 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"482b0d39-54bd-4f16-8e09-4adebabbcddf","Type":"ContainerStarted","Data":"f106b9c04da95d8d136fc47039d591098d8959f294883f32cbd5b4519b3efdeb"} Mar 12 21:16:12.947512 master-0 kubenswrapper[31456]: I0312 21:16:12.947378 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" 
podStartSLOduration=2.947360637 podStartE2EDuration="2.947360637s" podCreationTimestamp="2026-03-12 21:16:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:16:12.941256367 +0000 UTC m=+434.015861725" watchObservedRunningTime="2026-03-12 21:16:12.947360637 +0000 UTC m=+434.021965965" Mar 12 21:16:15.581887 master-0 kubenswrapper[31456]: I0312 21:16:15.581777 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:16:19.291841 master-0 kubenswrapper[31456]: I0312 21:16:19.291668 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-8c575f57b-cfn7b" Mar 12 21:16:19.291841 master-0 kubenswrapper[31456]: I0312 21:16:19.291779 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-8c575f57b-cfn7b" Mar 12 21:16:19.300784 master-0 kubenswrapper[31456]: I0312 21:16:19.300680 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-8c575f57b-cfn7b" Mar 12 21:16:19.982062 master-0 kubenswrapper[31456]: I0312 21:16:19.982005 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-8c575f57b-cfn7b" Mar 12 21:16:20.065919 master-0 kubenswrapper[31456]: I0312 21:16:20.065838 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6fff565898-x9jfv"] Mar 12 21:16:34.106017 master-0 kubenswrapper[31456]: I0312 21:16:34.105631 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c"] Mar 12 21:16:45.129711 master-0 kubenswrapper[31456]: I0312 21:16:45.129610 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6fff565898-x9jfv" podUID="a3fe72db-905f-487a-a343-295bce31e19e" 
containerName="console" containerID="cri-o://15de1fe9d2f3d2569694e19652b0dd711833523e5f33c37447c506bfd9212bda" gracePeriod=15 Mar 12 21:16:45.793889 master-0 kubenswrapper[31456]: I0312 21:16:45.793787 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6fff565898-x9jfv_a3fe72db-905f-487a-a343-295bce31e19e/console/0.log" Mar 12 21:16:45.794371 master-0 kubenswrapper[31456]: I0312 21:16:45.793956 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6fff565898-x9jfv" Mar 12 21:16:45.855309 master-0 kubenswrapper[31456]: I0312 21:16:45.855225 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a3fe72db-905f-487a-a343-295bce31e19e-service-ca\") pod \"a3fe72db-905f-487a-a343-295bce31e19e\" (UID: \"a3fe72db-905f-487a-a343-295bce31e19e\") " Mar 12 21:16:45.855596 master-0 kubenswrapper[31456]: I0312 21:16:45.855350 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a3fe72db-905f-487a-a343-295bce31e19e-console-serving-cert\") pod \"a3fe72db-905f-487a-a343-295bce31e19e\" (UID: \"a3fe72db-905f-487a-a343-295bce31e19e\") " Mar 12 21:16:45.855596 master-0 kubenswrapper[31456]: I0312 21:16:45.855438 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a3fe72db-905f-487a-a343-295bce31e19e-oauth-serving-cert\") pod \"a3fe72db-905f-487a-a343-295bce31e19e\" (UID: \"a3fe72db-905f-487a-a343-295bce31e19e\") " Mar 12 21:16:45.855596 master-0 kubenswrapper[31456]: I0312 21:16:45.855501 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a3fe72db-905f-487a-a343-295bce31e19e-console-oauth-config\") pod 
\"a3fe72db-905f-487a-a343-295bce31e19e\" (UID: \"a3fe72db-905f-487a-a343-295bce31e19e\") " Mar 12 21:16:45.855596 master-0 kubenswrapper[31456]: I0312 21:16:45.855590 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a3fe72db-905f-487a-a343-295bce31e19e-console-config\") pod \"a3fe72db-905f-487a-a343-295bce31e19e\" (UID: \"a3fe72db-905f-487a-a343-295bce31e19e\") " Mar 12 21:16:45.855903 master-0 kubenswrapper[31456]: I0312 21:16:45.855668 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3fe72db-905f-487a-a343-295bce31e19e-trusted-ca-bundle\") pod \"a3fe72db-905f-487a-a343-295bce31e19e\" (UID: \"a3fe72db-905f-487a-a343-295bce31e19e\") " Mar 12 21:16:45.855903 master-0 kubenswrapper[31456]: I0312 21:16:45.855730 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r4bw\" (UniqueName: \"kubernetes.io/projected/a3fe72db-905f-487a-a343-295bce31e19e-kube-api-access-5r4bw\") pod \"a3fe72db-905f-487a-a343-295bce31e19e\" (UID: \"a3fe72db-905f-487a-a343-295bce31e19e\") " Mar 12 21:16:45.857178 master-0 kubenswrapper[31456]: I0312 21:16:45.856927 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3fe72db-905f-487a-a343-295bce31e19e-service-ca" (OuterVolumeSpecName: "service-ca") pod "a3fe72db-905f-487a-a343-295bce31e19e" (UID: "a3fe72db-905f-487a-a343-295bce31e19e"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:16:45.857178 master-0 kubenswrapper[31456]: I0312 21:16:45.857050 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3fe72db-905f-487a-a343-295bce31e19e-console-config" (OuterVolumeSpecName: "console-config") pod "a3fe72db-905f-487a-a343-295bce31e19e" (UID: "a3fe72db-905f-487a-a343-295bce31e19e"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:16:45.857178 master-0 kubenswrapper[31456]: I0312 21:16:45.857094 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3fe72db-905f-487a-a343-295bce31e19e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a3fe72db-905f-487a-a343-295bce31e19e" (UID: "a3fe72db-905f-487a-a343-295bce31e19e"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:16:45.857178 master-0 kubenswrapper[31456]: I0312 21:16:45.857064 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3fe72db-905f-487a-a343-295bce31e19e-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a3fe72db-905f-487a-a343-295bce31e19e" (UID: "a3fe72db-905f-487a-a343-295bce31e19e"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:16:45.860615 master-0 kubenswrapper[31456]: I0312 21:16:45.860516 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3fe72db-905f-487a-a343-295bce31e19e-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a3fe72db-905f-487a-a343-295bce31e19e" (UID: "a3fe72db-905f-487a-a343-295bce31e19e"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:16:45.861297 master-0 kubenswrapper[31456]: I0312 21:16:45.861230 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3fe72db-905f-487a-a343-295bce31e19e-kube-api-access-5r4bw" (OuterVolumeSpecName: "kube-api-access-5r4bw") pod "a3fe72db-905f-487a-a343-295bce31e19e" (UID: "a3fe72db-905f-487a-a343-295bce31e19e"). InnerVolumeSpecName "kube-api-access-5r4bw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:16:45.861982 master-0 kubenswrapper[31456]: I0312 21:16:45.861919 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3fe72db-905f-487a-a343-295bce31e19e-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a3fe72db-905f-487a-a343-295bce31e19e" (UID: "a3fe72db-905f-487a-a343-295bce31e19e"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:16:45.958261 master-0 kubenswrapper[31456]: I0312 21:16:45.958113 31456 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a3fe72db-905f-487a-a343-295bce31e19e-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 21:16:45.958261 master-0 kubenswrapper[31456]: I0312 21:16:45.958190 31456 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a3fe72db-905f-487a-a343-295bce31e19e-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 21:16:45.958261 master-0 kubenswrapper[31456]: I0312 21:16:45.958217 31456 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a3fe72db-905f-487a-a343-295bce31e19e-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 12 21:16:45.958261 master-0 kubenswrapper[31456]: I0312 21:16:45.958243 31456 reconciler_common.go:293] 
"Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a3fe72db-905f-487a-a343-295bce31e19e-console-config\") on node \"master-0\" DevicePath \"\"" Mar 12 21:16:45.958606 master-0 kubenswrapper[31456]: I0312 21:16:45.958269 31456 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3fe72db-905f-487a-a343-295bce31e19e-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:16:45.958606 master-0 kubenswrapper[31456]: I0312 21:16:45.958296 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r4bw\" (UniqueName: \"kubernetes.io/projected/a3fe72db-905f-487a-a343-295bce31e19e-kube-api-access-5r4bw\") on node \"master-0\" DevicePath \"\"" Mar 12 21:16:45.958606 master-0 kubenswrapper[31456]: I0312 21:16:45.958322 31456 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a3fe72db-905f-487a-a343-295bce31e19e-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 21:16:46.255966 master-0 kubenswrapper[31456]: I0312 21:16:46.255789 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6fff565898-x9jfv_a3fe72db-905f-487a-a343-295bce31e19e/console/0.log" Mar 12 21:16:46.255966 master-0 kubenswrapper[31456]: I0312 21:16:46.255911 31456 generic.go:334] "Generic (PLEG): container finished" podID="a3fe72db-905f-487a-a343-295bce31e19e" containerID="15de1fe9d2f3d2569694e19652b0dd711833523e5f33c37447c506bfd9212bda" exitCode=2 Mar 12 21:16:46.255966 master-0 kubenswrapper[31456]: I0312 21:16:46.255956 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6fff565898-x9jfv" event={"ID":"a3fe72db-905f-487a-a343-295bce31e19e","Type":"ContainerDied","Data":"15de1fe9d2f3d2569694e19652b0dd711833523e5f33c37447c506bfd9212bda"} Mar 12 21:16:46.256879 master-0 kubenswrapper[31456]: I0312 21:16:46.256006 31456 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-console/console-6fff565898-x9jfv" event={"ID":"a3fe72db-905f-487a-a343-295bce31e19e","Type":"ContainerDied","Data":"334c23c01184798bb989b60dd7b0e97509ae235a2ccfcebfe031c1912ca4d815"} Mar 12 21:16:46.256879 master-0 kubenswrapper[31456]: I0312 21:16:46.256011 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6fff565898-x9jfv" Mar 12 21:16:46.256879 master-0 kubenswrapper[31456]: I0312 21:16:46.256048 31456 scope.go:117] "RemoveContainer" containerID="15de1fe9d2f3d2569694e19652b0dd711833523e5f33c37447c506bfd9212bda" Mar 12 21:16:46.286975 master-0 kubenswrapper[31456]: I0312 21:16:46.286883 31456 scope.go:117] "RemoveContainer" containerID="15de1fe9d2f3d2569694e19652b0dd711833523e5f33c37447c506bfd9212bda" Mar 12 21:16:46.288006 master-0 kubenswrapper[31456]: E0312 21:16:46.287913 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15de1fe9d2f3d2569694e19652b0dd711833523e5f33c37447c506bfd9212bda\": container with ID starting with 15de1fe9d2f3d2569694e19652b0dd711833523e5f33c37447c506bfd9212bda not found: ID does not exist" containerID="15de1fe9d2f3d2569694e19652b0dd711833523e5f33c37447c506bfd9212bda" Mar 12 21:16:46.288128 master-0 kubenswrapper[31456]: I0312 21:16:46.287986 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15de1fe9d2f3d2569694e19652b0dd711833523e5f33c37447c506bfd9212bda"} err="failed to get container status \"15de1fe9d2f3d2569694e19652b0dd711833523e5f33c37447c506bfd9212bda\": rpc error: code = NotFound desc = could not find container \"15de1fe9d2f3d2569694e19652b0dd711833523e5f33c37447c506bfd9212bda\": container with ID starting with 15de1fe9d2f3d2569694e19652b0dd711833523e5f33c37447c506bfd9212bda not found: ID does not exist" Mar 12 21:16:46.325837 master-0 kubenswrapper[31456]: I0312 21:16:46.325731 31456 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-console/console-6fff565898-x9jfv"] Mar 12 21:16:46.357272 master-0 kubenswrapper[31456]: I0312 21:16:46.357179 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6fff565898-x9jfv"] Mar 12 21:16:47.187157 master-0 kubenswrapper[31456]: I0312 21:16:47.186975 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3fe72db-905f-487a-a343-295bce31e19e" path="/var/lib/kubelet/pods/a3fe72db-905f-487a-a343-295bce31e19e/volumes" Mar 12 21:16:59.153500 master-0 kubenswrapper[31456]: I0312 21:16:59.153328 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" podUID="739ac366-cbaa-4b39-a525-66c54c3802f0" containerName="oauth-openshift" containerID="cri-o://a7dbff18322dcdecfea58aaa7e321fa66b989f291e83524de7729657bb7e5cfa" gracePeriod=15 Mar 12 21:16:59.455298 master-0 kubenswrapper[31456]: I0312 21:16:59.455134 31456 generic.go:334] "Generic (PLEG): container finished" podID="739ac366-cbaa-4b39-a525-66c54c3802f0" containerID="a7dbff18322dcdecfea58aaa7e321fa66b989f291e83524de7729657bb7e5cfa" exitCode=0 Mar 12 21:16:59.455298 master-0 kubenswrapper[31456]: I0312 21:16:59.455203 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" event={"ID":"739ac366-cbaa-4b39-a525-66c54c3802f0","Type":"ContainerDied","Data":"a7dbff18322dcdecfea58aaa7e321fa66b989f291e83524de7729657bb7e5cfa"} Mar 12 21:16:59.744767 master-0 kubenswrapper[31456]: I0312 21:16:59.742535 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:16:59.793853 master-0 kubenswrapper[31456]: I0312 21:16:59.793757 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-99c875859-pv7xb"] Mar 12 21:16:59.794183 master-0 kubenswrapper[31456]: E0312 21:16:59.794143 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3fe72db-905f-487a-a343-295bce31e19e" containerName="console" Mar 12 21:16:59.794183 master-0 kubenswrapper[31456]: I0312 21:16:59.794168 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3fe72db-905f-487a-a343-295bce31e19e" containerName="console" Mar 12 21:16:59.794311 master-0 kubenswrapper[31456]: E0312 21:16:59.794194 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="739ac366-cbaa-4b39-a525-66c54c3802f0" containerName="oauth-openshift" Mar 12 21:16:59.794311 master-0 kubenswrapper[31456]: I0312 21:16:59.794203 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="739ac366-cbaa-4b39-a525-66c54c3802f0" containerName="oauth-openshift" Mar 12 21:16:59.794426 master-0 kubenswrapper[31456]: I0312 21:16:59.794367 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3fe72db-905f-487a-a343-295bce31e19e" containerName="console" Mar 12 21:16:59.794488 master-0 kubenswrapper[31456]: I0312 21:16:59.794451 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="739ac366-cbaa-4b39-a525-66c54c3802f0" containerName="oauth-openshift" Mar 12 21:16:59.795081 master-0 kubenswrapper[31456]: I0312 21:16:59.795039 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:16:59.832111 master-0 kubenswrapper[31456]: I0312 21:16:59.832039 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-99c875859-pv7xb"] Mar 12 21:16:59.939838 master-0 kubenswrapper[31456]: I0312 21:16:59.939730 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-serving-cert\") pod \"739ac366-cbaa-4b39-a525-66c54c3802f0\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " Mar 12 21:16:59.940152 master-0 kubenswrapper[31456]: I0312 21:16:59.939894 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-user-template-error\") pod \"739ac366-cbaa-4b39-a525-66c54c3802f0\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " Mar 12 21:16:59.940152 master-0 kubenswrapper[31456]: I0312 21:16:59.939968 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-user-template-login\") pod \"739ac366-cbaa-4b39-a525-66c54c3802f0\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " Mar 12 21:16:59.940152 master-0 kubenswrapper[31456]: I0312 21:16:59.940084 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnr4t\" (UniqueName: \"kubernetes.io/projected/739ac366-cbaa-4b39-a525-66c54c3802f0-kube-api-access-rnr4t\") pod \"739ac366-cbaa-4b39-a525-66c54c3802f0\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " Mar 12 21:16:59.940416 master-0 kubenswrapper[31456]: I0312 21:16:59.940176 31456 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/739ac366-cbaa-4b39-a525-66c54c3802f0-audit-dir\") pod \"739ac366-cbaa-4b39-a525-66c54c3802f0\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " Mar 12 21:16:59.940416 master-0 kubenswrapper[31456]: I0312 21:16:59.940342 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/739ac366-cbaa-4b39-a525-66c54c3802f0-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "739ac366-cbaa-4b39-a525-66c54c3802f0" (UID: "739ac366-cbaa-4b39-a525-66c54c3802f0"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:16:59.940416 master-0 kubenswrapper[31456]: I0312 21:16:59.940410 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/739ac366-cbaa-4b39-a525-66c54c3802f0-audit-policies\") pod \"739ac366-cbaa-4b39-a525-66c54c3802f0\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " Mar 12 21:16:59.940643 master-0 kubenswrapper[31456]: I0312 21:16:59.940458 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-cliconfig\") pod \"739ac366-cbaa-4b39-a525-66c54c3802f0\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " Mar 12 21:16:59.940643 master-0 kubenswrapper[31456]: I0312 21:16:59.940555 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-service-ca\") pod \"739ac366-cbaa-4b39-a525-66c54c3802f0\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " Mar 12 21:16:59.940643 master-0 kubenswrapper[31456]: I0312 21:16:59.940631 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-ocp-branding-template\") pod \"739ac366-cbaa-4b39-a525-66c54c3802f0\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " Mar 12 21:16:59.940880 master-0 kubenswrapper[31456]: I0312 21:16:59.940666 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-user-template-provider-selection\") pod \"739ac366-cbaa-4b39-a525-66c54c3802f0\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " Mar 12 21:16:59.940880 master-0 kubenswrapper[31456]: I0312 21:16:59.940730 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-trusted-ca-bundle\") pod \"739ac366-cbaa-4b39-a525-66c54c3802f0\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " Mar 12 21:16:59.940880 master-0 kubenswrapper[31456]: I0312 21:16:59.940789 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-session\") pod \"739ac366-cbaa-4b39-a525-66c54c3802f0\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " Mar 12 21:16:59.940880 master-0 kubenswrapper[31456]: I0312 21:16:59.940859 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-router-certs\") pod \"739ac366-cbaa-4b39-a525-66c54c3802f0\" (UID: \"739ac366-cbaa-4b39-a525-66c54c3802f0\") " Mar 12 21:16:59.941434 master-0 kubenswrapper[31456]: I0312 21:16:59.941228 31456 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmfsb\" (UniqueName: \"kubernetes.io/projected/e830bc5c-7934-4c73-9d8d-e31b27476705-kube-api-access-fmfsb\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:16:59.941725 master-0 kubenswrapper[31456]: I0312 21:16:59.941472 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e830bc5c-7934-4c73-9d8d-e31b27476705-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:16:59.941725 master-0 kubenswrapper[31456]: I0312 21:16:59.941493 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/739ac366-cbaa-4b39-a525-66c54c3802f0-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "739ac366-cbaa-4b39-a525-66c54c3802f0" (UID: "739ac366-cbaa-4b39-a525-66c54c3802f0"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:16:59.945205 master-0 kubenswrapper[31456]: I0312 21:16:59.941744 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e830bc5c-7934-4c73-9d8d-e31b27476705-v4-0-config-user-template-error\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:16:59.945205 master-0 kubenswrapper[31456]: I0312 21:16:59.941797 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e830bc5c-7934-4c73-9d8d-e31b27476705-audit-dir\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:16:59.945205 master-0 kubenswrapper[31456]: I0312 21:16:59.941882 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e830bc5c-7934-4c73-9d8d-e31b27476705-v4-0-config-system-session\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:16:59.945205 master-0 kubenswrapper[31456]: I0312 21:16:59.942010 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e830bc5c-7934-4c73-9d8d-e31b27476705-v4-0-config-system-serving-cert\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:16:59.945205 master-0 kubenswrapper[31456]: I0312 21:16:59.942060 31456 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e830bc5c-7934-4c73-9d8d-e31b27476705-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:16:59.945205 master-0 kubenswrapper[31456]: I0312 21:16:59.942134 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e830bc5c-7934-4c73-9d8d-e31b27476705-v4-0-config-system-router-certs\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:16:59.945205 master-0 kubenswrapper[31456]: I0312 21:16:59.942222 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e830bc5c-7934-4c73-9d8d-e31b27476705-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:16:59.945205 master-0 kubenswrapper[31456]: I0312 21:16:59.942279 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e830bc5c-7934-4c73-9d8d-e31b27476705-v4-0-config-user-template-login\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:16:59.945205 master-0 kubenswrapper[31456]: I0312 21:16:59.942343 31456 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e830bc5c-7934-4c73-9d8d-e31b27476705-audit-policies\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:16:59.945205 master-0 kubenswrapper[31456]: I0312 21:16:59.942389 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e830bc5c-7934-4c73-9d8d-e31b27476705-v4-0-config-system-cliconfig\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:16:59.945205 master-0 kubenswrapper[31456]: I0312 21:16:59.942387 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "739ac366-cbaa-4b39-a525-66c54c3802f0" (UID: "739ac366-cbaa-4b39-a525-66c54c3802f0"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:16:59.945205 master-0 kubenswrapper[31456]: I0312 21:16:59.942450 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e830bc5c-7934-4c73-9d8d-e31b27476705-v4-0-config-system-service-ca\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:16:59.945205 master-0 kubenswrapper[31456]: I0312 21:16:59.942724 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "739ac366-cbaa-4b39-a525-66c54c3802f0" (UID: "739ac366-cbaa-4b39-a525-66c54c3802f0"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:16:59.945205 master-0 kubenswrapper[31456]: I0312 21:16:59.943602 31456 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/739ac366-cbaa-4b39-a525-66c54c3802f0-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 21:16:59.945205 master-0 kubenswrapper[31456]: I0312 21:16:59.943632 31456 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/739ac366-cbaa-4b39-a525-66c54c3802f0-audit-policies\") on node \"master-0\" DevicePath \"\"" Mar 12 21:16:59.945205 master-0 kubenswrapper[31456]: I0312 21:16:59.943650 31456 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\"" Mar 12 21:16:59.945205 master-0 kubenswrapper[31456]: I0312 21:16:59.943665 31456 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:16:59.945205 master-0 kubenswrapper[31456]: I0312 21:16:59.944882 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "739ac366-cbaa-4b39-a525-66c54c3802f0" (UID: "739ac366-cbaa-4b39-a525-66c54c3802f0"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:16:59.947365 master-0 kubenswrapper[31456]: I0312 21:16:59.945371 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "739ac366-cbaa-4b39-a525-66c54c3802f0" (UID: "739ac366-cbaa-4b39-a525-66c54c3802f0"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:16:59.947365 master-0 kubenswrapper[31456]: I0312 21:16:59.945502 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "739ac366-cbaa-4b39-a525-66c54c3802f0" (UID: "739ac366-cbaa-4b39-a525-66c54c3802f0"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:16:59.947365 master-0 kubenswrapper[31456]: I0312 21:16:59.946554 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/739ac366-cbaa-4b39-a525-66c54c3802f0-kube-api-access-rnr4t" (OuterVolumeSpecName: "kube-api-access-rnr4t") pod "739ac366-cbaa-4b39-a525-66c54c3802f0" (UID: "739ac366-cbaa-4b39-a525-66c54c3802f0"). InnerVolumeSpecName "kube-api-access-rnr4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:16:59.947365 master-0 kubenswrapper[31456]: I0312 21:16:59.946872 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "739ac366-cbaa-4b39-a525-66c54c3802f0" (UID: "739ac366-cbaa-4b39-a525-66c54c3802f0"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:16:59.948661 master-0 kubenswrapper[31456]: I0312 21:16:59.948593 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "739ac366-cbaa-4b39-a525-66c54c3802f0" (UID: "739ac366-cbaa-4b39-a525-66c54c3802f0"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:16:59.949139 master-0 kubenswrapper[31456]: I0312 21:16:59.949076 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "739ac366-cbaa-4b39-a525-66c54c3802f0" (UID: "739ac366-cbaa-4b39-a525-66c54c3802f0"). 
InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:16:59.949772 master-0 kubenswrapper[31456]: I0312 21:16:59.949692 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "739ac366-cbaa-4b39-a525-66c54c3802f0" (UID: "739ac366-cbaa-4b39-a525-66c54c3802f0"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:16:59.952042 master-0 kubenswrapper[31456]: I0312 21:16:59.951971 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "739ac366-cbaa-4b39-a525-66c54c3802f0" (UID: "739ac366-cbaa-4b39-a525-66c54c3802f0"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:17:00.045316 master-0 kubenswrapper[31456]: I0312 21:17:00.045095 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e830bc5c-7934-4c73-9d8d-e31b27476705-audit-policies\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:17:00.045316 master-0 kubenswrapper[31456]: I0312 21:17:00.045188 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e830bc5c-7934-4c73-9d8d-e31b27476705-v4-0-config-system-cliconfig\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:17:00.045316 master-0 kubenswrapper[31456]: I0312 21:17:00.045231 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e830bc5c-7934-4c73-9d8d-e31b27476705-v4-0-config-system-service-ca\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:17:00.046180 master-0 kubenswrapper[31456]: I0312 21:17:00.046115 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmfsb\" (UniqueName: \"kubernetes.io/projected/e830bc5c-7934-4c73-9d8d-e31b27476705-kube-api-access-fmfsb\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:17:00.046583 master-0 kubenswrapper[31456]: I0312 21:17:00.046521 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e830bc5c-7934-4c73-9d8d-e31b27476705-v4-0-config-system-cliconfig\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:17:00.046673 master-0 kubenswrapper[31456]: I0312 21:17:00.046618 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e830bc5c-7934-4c73-9d8d-e31b27476705-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:17:00.046748 master-0 kubenswrapper[31456]: I0312 21:17:00.046689 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e830bc5c-7934-4c73-9d8d-e31b27476705-v4-0-config-user-template-error\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:17:00.046748 master-0 kubenswrapper[31456]: I0312 21:17:00.046730 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e830bc5c-7934-4c73-9d8d-e31b27476705-v4-0-config-system-session\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:17:00.046944 master-0 kubenswrapper[31456]: I0312 21:17:00.046764 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e830bc5c-7934-4c73-9d8d-e31b27476705-audit-dir\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: 
\"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:17:00.046944 master-0 kubenswrapper[31456]: I0312 21:17:00.046798 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e830bc5c-7934-4c73-9d8d-e31b27476705-audit-policies\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:17:00.046944 master-0 kubenswrapper[31456]: I0312 21:17:00.046885 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e830bc5c-7934-4c73-9d8d-e31b27476705-v4-0-config-system-serving-cert\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:17:00.047148 master-0 kubenswrapper[31456]: I0312 21:17:00.046958 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e830bc5c-7934-4c73-9d8d-e31b27476705-audit-dir\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:17:00.047148 master-0 kubenswrapper[31456]: I0312 21:17:00.047001 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e830bc5c-7934-4c73-9d8d-e31b27476705-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:17:00.047148 master-0 kubenswrapper[31456]: I0312 21:17:00.047090 31456 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e830bc5c-7934-4c73-9d8d-e31b27476705-v4-0-config-system-router-certs\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:17:00.047468 master-0 kubenswrapper[31456]: I0312 21:17:00.047196 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e830bc5c-7934-4c73-9d8d-e31b27476705-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:17:00.047468 master-0 kubenswrapper[31456]: I0312 21:17:00.047239 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e830bc5c-7934-4c73-9d8d-e31b27476705-v4-0-config-user-template-login\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:17:00.047468 master-0 kubenswrapper[31456]: I0312 21:17:00.047372 31456 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 12 21:17:00.047468 master-0 kubenswrapper[31456]: I0312 21:17:00.047376 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e830bc5c-7934-4c73-9d8d-e31b27476705-v4-0-config-system-service-ca\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " 
pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:17:00.047914 master-0 kubenswrapper[31456]: I0312 21:17:00.047396 31456 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Mar 12 21:17:00.048040 master-0 kubenswrapper[31456]: I0312 21:17:00.047969 31456 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\"" Mar 12 21:17:00.048040 master-0 kubenswrapper[31456]: I0312 21:17:00.048011 31456 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\"" Mar 12 21:17:00.048040 master-0 kubenswrapper[31456]: I0312 21:17:00.048035 31456 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\"" Mar 12 21:17:00.048317 master-0 kubenswrapper[31456]: I0312 21:17:00.048058 31456 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 21:17:00.048317 master-0 kubenswrapper[31456]: I0312 21:17:00.048082 31456 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-user-template-error\") on node \"master-0\" DevicePath 
\"\"" Mar 12 21:17:00.048317 master-0 kubenswrapper[31456]: I0312 21:17:00.048103 31456 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/739ac366-cbaa-4b39-a525-66c54c3802f0-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\"" Mar 12 21:17:00.048317 master-0 kubenswrapper[31456]: I0312 21:17:00.048126 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnr4t\" (UniqueName: \"kubernetes.io/projected/739ac366-cbaa-4b39-a525-66c54c3802f0-kube-api-access-rnr4t\") on node \"master-0\" DevicePath \"\"" Mar 12 21:17:00.048737 master-0 kubenswrapper[31456]: I0312 21:17:00.048329 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e830bc5c-7934-4c73-9d8d-e31b27476705-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:17:00.054541 master-0 kubenswrapper[31456]: I0312 21:17:00.052610 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e830bc5c-7934-4c73-9d8d-e31b27476705-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:17:00.054541 master-0 kubenswrapper[31456]: I0312 21:17:00.052753 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e830bc5c-7934-4c73-9d8d-e31b27476705-v4-0-config-user-template-login\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " 
pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:17:00.054541 master-0 kubenswrapper[31456]: I0312 21:17:00.053638 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e830bc5c-7934-4c73-9d8d-e31b27476705-v4-0-config-system-router-certs\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:17:00.054541 master-0 kubenswrapper[31456]: I0312 21:17:00.054478 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e830bc5c-7934-4c73-9d8d-e31b27476705-v4-0-config-user-template-error\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:17:00.058939 master-0 kubenswrapper[31456]: I0312 21:17:00.058503 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e830bc5c-7934-4c73-9d8d-e31b27476705-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:17:00.062506 master-0 kubenswrapper[31456]: I0312 21:17:00.061293 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e830bc5c-7934-4c73-9d8d-e31b27476705-v4-0-config-system-session\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:17:00.063270 master-0 kubenswrapper[31456]: I0312 21:17:00.063208 31456 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e830bc5c-7934-4c73-9d8d-e31b27476705-v4-0-config-system-serving-cert\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:17:00.094839 master-0 kubenswrapper[31456]: I0312 21:17:00.094730 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmfsb\" (UniqueName: \"kubernetes.io/projected/e830bc5c-7934-4c73-9d8d-e31b27476705-kube-api-access-fmfsb\") pod \"oauth-openshift-99c875859-pv7xb\" (UID: \"e830bc5c-7934-4c73-9d8d-e31b27476705\") " pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:17:00.118172 master-0 kubenswrapper[31456]: I0312 21:17:00.118031 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:17:00.468469 master-0 kubenswrapper[31456]: I0312 21:17:00.468382 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" event={"ID":"739ac366-cbaa-4b39-a525-66c54c3802f0","Type":"ContainerDied","Data":"e87ef76b4e75a491ec9197f16aee1cbb14aca6be6347f9170f4efa30a562b5cb"} Mar 12 21:17:00.468469 master-0 kubenswrapper[31456]: I0312 21:17:00.468449 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c" Mar 12 21:17:00.468469 master-0 kubenswrapper[31456]: I0312 21:17:00.468470 31456 scope.go:117] "RemoveContainer" containerID="a7dbff18322dcdecfea58aaa7e321fa66b989f291e83524de7729657bb7e5cfa" Mar 12 21:17:00.530277 master-0 kubenswrapper[31456]: I0312 21:17:00.530170 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c"] Mar 12 21:17:00.542648 master-0 kubenswrapper[31456]: I0312 21:17:00.542580 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-6ff7cb97b6-qjc7c"] Mar 12 21:17:00.629202 master-0 kubenswrapper[31456]: I0312 21:17:00.629128 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-99c875859-pv7xb"] Mar 12 21:17:01.182632 master-0 kubenswrapper[31456]: I0312 21:17:01.182467 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="739ac366-cbaa-4b39-a525-66c54c3802f0" path="/var/lib/kubelet/pods/739ac366-cbaa-4b39-a525-66c54c3802f0/volumes" Mar 12 21:17:01.481133 master-0 kubenswrapper[31456]: I0312 21:17:01.480939 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" event={"ID":"e830bc5c-7934-4c73-9d8d-e31b27476705","Type":"ContainerStarted","Data":"6461cc875664bf3b02d9a7c8641fe1b38e239b3035eb7fb4d1138185fd388323"} Mar 12 21:17:01.481133 master-0 kubenswrapper[31456]: I0312 21:17:01.481009 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" event={"ID":"e830bc5c-7934-4c73-9d8d-e31b27476705","Type":"ContainerStarted","Data":"e6fadb0fa32720bca97c7cdc9b68bedbb1050f4103911be13dff2d3ed04050ef"} Mar 12 21:17:01.482911 master-0 kubenswrapper[31456]: I0312 21:17:01.482474 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:17:01.532301 master-0 kubenswrapper[31456]: I0312 21:17:01.532158 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" podStartSLOduration=27.532129573 podStartE2EDuration="27.532129573s" podCreationTimestamp="2026-03-12 21:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:17:01.522205591 +0000 UTC m=+482.596810969" watchObservedRunningTime="2026-03-12 21:17:01.532129573 +0000 UTC m=+482.606734941" Mar 12 21:17:01.623039 master-0 kubenswrapper[31456]: I0312 21:17:01.622897 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-99c875859-pv7xb" Mar 12 21:17:08.075392 master-0 kubenswrapper[31456]: I0312 21:17:08.075315 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-6dd6777c94-ptvsb"] Mar 12 21:17:08.076442 master-0 kubenswrapper[31456]: I0312 21:17:08.076395 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-6dd6777c94-ptvsb" Mar 12 21:17:08.080629 master-0 kubenswrapper[31456]: I0312 21:17:08.080576 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"openshift-service-ca.crt" Mar 12 21:17:08.081711 master-0 kubenswrapper[31456]: I0312 21:17:08.081665 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"sushy-emulator-config" Mar 12 21:17:08.082020 master-0 kubenswrapper[31456]: I0312 21:17:08.081986 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"kube-root-ca.crt" Mar 12 21:17:08.082277 master-0 kubenswrapper[31456]: I0312 21:17:08.082243 31456 reflector.go:368] Caches populated for *v1.Secret from object-"sushy-emulator"/"os-client-config" Mar 12 21:17:08.097351 master-0 kubenswrapper[31456]: I0312 21:17:08.097249 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-6dd6777c94-ptvsb"] Mar 12 21:17:08.118948 master-0 kubenswrapper[31456]: I0312 21:17:08.118859 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/418f109d-c5a7-4311-b90d-4f62478f3aba-sushy-emulator-config\") pod \"sushy-emulator-6dd6777c94-ptvsb\" (UID: \"418f109d-c5a7-4311-b90d-4f62478f3aba\") " pod="sushy-emulator/sushy-emulator-6dd6777c94-ptvsb" Mar 12 21:17:08.119309 master-0 kubenswrapper[31456]: I0312 21:17:08.119014 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn6jh\" (UniqueName: \"kubernetes.io/projected/418f109d-c5a7-4311-b90d-4f62478f3aba-kube-api-access-bn6jh\") pod \"sushy-emulator-6dd6777c94-ptvsb\" (UID: \"418f109d-c5a7-4311-b90d-4f62478f3aba\") " pod="sushy-emulator/sushy-emulator-6dd6777c94-ptvsb" Mar 12 21:17:08.119309 master-0 kubenswrapper[31456]: I0312 21:17:08.119040 31456 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/418f109d-c5a7-4311-b90d-4f62478f3aba-os-client-config\") pod \"sushy-emulator-6dd6777c94-ptvsb\" (UID: \"418f109d-c5a7-4311-b90d-4f62478f3aba\") " pod="sushy-emulator/sushy-emulator-6dd6777c94-ptvsb" Mar 12 21:17:08.220663 master-0 kubenswrapper[31456]: I0312 21:17:08.220607 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn6jh\" (UniqueName: \"kubernetes.io/projected/418f109d-c5a7-4311-b90d-4f62478f3aba-kube-api-access-bn6jh\") pod \"sushy-emulator-6dd6777c94-ptvsb\" (UID: \"418f109d-c5a7-4311-b90d-4f62478f3aba\") " pod="sushy-emulator/sushy-emulator-6dd6777c94-ptvsb" Mar 12 21:17:08.221040 master-0 kubenswrapper[31456]: I0312 21:17:08.221019 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/418f109d-c5a7-4311-b90d-4f62478f3aba-os-client-config\") pod \"sushy-emulator-6dd6777c94-ptvsb\" (UID: \"418f109d-c5a7-4311-b90d-4f62478f3aba\") " pod="sushy-emulator/sushy-emulator-6dd6777c94-ptvsb" Mar 12 21:17:08.221194 master-0 kubenswrapper[31456]: I0312 21:17:08.221176 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/418f109d-c5a7-4311-b90d-4f62478f3aba-sushy-emulator-config\") pod \"sushy-emulator-6dd6777c94-ptvsb\" (UID: \"418f109d-c5a7-4311-b90d-4f62478f3aba\") " pod="sushy-emulator/sushy-emulator-6dd6777c94-ptvsb" Mar 12 21:17:08.223044 master-0 kubenswrapper[31456]: I0312 21:17:08.222952 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/418f109d-c5a7-4311-b90d-4f62478f3aba-sushy-emulator-config\") pod \"sushy-emulator-6dd6777c94-ptvsb\" (UID: \"418f109d-c5a7-4311-b90d-4f62478f3aba\") " 
pod="sushy-emulator/sushy-emulator-6dd6777c94-ptvsb" Mar 12 21:17:08.227020 master-0 kubenswrapper[31456]: I0312 21:17:08.226955 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/418f109d-c5a7-4311-b90d-4f62478f3aba-os-client-config\") pod \"sushy-emulator-6dd6777c94-ptvsb\" (UID: \"418f109d-c5a7-4311-b90d-4f62478f3aba\") " pod="sushy-emulator/sushy-emulator-6dd6777c94-ptvsb" Mar 12 21:17:08.253403 master-0 kubenswrapper[31456]: I0312 21:17:08.253302 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn6jh\" (UniqueName: \"kubernetes.io/projected/418f109d-c5a7-4311-b90d-4f62478f3aba-kube-api-access-bn6jh\") pod \"sushy-emulator-6dd6777c94-ptvsb\" (UID: \"418f109d-c5a7-4311-b90d-4f62478f3aba\") " pod="sushy-emulator/sushy-emulator-6dd6777c94-ptvsb" Mar 12 21:17:08.427161 master-0 kubenswrapper[31456]: I0312 21:17:08.427014 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-6dd6777c94-ptvsb" Mar 12 21:17:09.004387 master-0 kubenswrapper[31456]: I0312 21:17:09.004297 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-6dd6777c94-ptvsb"] Mar 12 21:17:09.007064 master-0 kubenswrapper[31456]: W0312 21:17:09.006967 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod418f109d_c5a7_4311_b90d_4f62478f3aba.slice/crio-25be3904f6ee43aca877599385df3ba6090d9e495b87521df7edc191b0b00ebf WatchSource:0}: Error finding container 25be3904f6ee43aca877599385df3ba6090d9e495b87521df7edc191b0b00ebf: Status 404 returned error can't find the container with id 25be3904f6ee43aca877599385df3ba6090d9e495b87521df7edc191b0b00ebf Mar 12 21:17:09.560373 master-0 kubenswrapper[31456]: I0312 21:17:09.560292 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-6dd6777c94-ptvsb" event={"ID":"418f109d-c5a7-4311-b90d-4f62478f3aba","Type":"ContainerStarted","Data":"25be3904f6ee43aca877599385df3ba6090d9e495b87521df7edc191b0b00ebf"} Mar 12 21:17:10.582545 master-0 kubenswrapper[31456]: I0312 21:17:10.581857 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:17:10.616934 master-0 kubenswrapper[31456]: I0312 21:17:10.616871 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:17:11.606394 master-0 kubenswrapper[31456]: I0312 21:17:11.606341 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Mar 12 21:17:20.683517 master-0 kubenswrapper[31456]: I0312 21:17:20.683444 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-6dd6777c94-ptvsb" 
event={"ID":"418f109d-c5a7-4311-b90d-4f62478f3aba","Type":"ContainerStarted","Data":"be6536a60dd6fc876d7d431d08a057cea01e6fa5e3d461d5944b279f6924fceb"} Mar 12 21:17:20.719838 master-0 kubenswrapper[31456]: I0312 21:17:20.719176 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/sushy-emulator-6dd6777c94-ptvsb" podStartSLOduration=2.02367126 podStartE2EDuration="12.719140842s" podCreationTimestamp="2026-03-12 21:17:08 +0000 UTC" firstStartedPulling="2026-03-12 21:17:09.010534581 +0000 UTC m=+490.085139939" lastFinishedPulling="2026-03-12 21:17:19.706004183 +0000 UTC m=+500.780609521" observedRunningTime="2026-03-12 21:17:20.704744901 +0000 UTC m=+501.779350269" watchObservedRunningTime="2026-03-12 21:17:20.719140842 +0000 UTC m=+501.793746240" Mar 12 21:17:28.427470 master-0 kubenswrapper[31456]: I0312 21:17:28.427372 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-6dd6777c94-ptvsb" Mar 12 21:17:28.427470 master-0 kubenswrapper[31456]: I0312 21:17:28.427446 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-6dd6777c94-ptvsb" Mar 12 21:17:28.442615 master-0 kubenswrapper[31456]: I0312 21:17:28.442567 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-6dd6777c94-ptvsb" Mar 12 21:17:28.769431 master-0 kubenswrapper[31456]: I0312 21:17:28.769226 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-6dd6777c94-ptvsb" Mar 12 21:17:46.587787 master-0 kubenswrapper[31456]: I0312 21:17:46.587665 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Mar 12 21:17:46.589370 master-0 kubenswrapper[31456]: I0312 21:17:46.589219 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 12 21:17:46.593123 master-0 kubenswrapper[31456]: I0312 21:17:46.591987 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-xq8cf" Mar 12 21:17:46.593123 master-0 kubenswrapper[31456]: I0312 21:17:46.592546 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 12 21:17:46.601283 master-0 kubenswrapper[31456]: I0312 21:17:46.601188 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76b1c407-82f5-40fb-a542-cb6f3cbb41ba-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"76b1c407-82f5-40fb-a542-cb6f3cbb41ba\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 12 21:17:46.601523 master-0 kubenswrapper[31456]: I0312 21:17:46.601466 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76b1c407-82f5-40fb-a542-cb6f3cbb41ba-kube-api-access\") pod \"installer-4-master-0\" (UID: \"76b1c407-82f5-40fb-a542-cb6f3cbb41ba\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 12 21:17:46.601697 master-0 kubenswrapper[31456]: I0312 21:17:46.601658 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/76b1c407-82f5-40fb-a542-cb6f3cbb41ba-var-lock\") pod \"installer-4-master-0\" (UID: \"76b1c407-82f5-40fb-a542-cb6f3cbb41ba\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 12 21:17:46.605715 master-0 kubenswrapper[31456]: I0312 21:17:46.605516 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Mar 12 21:17:46.704184 master-0 
kubenswrapper[31456]: I0312 21:17:46.704081 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/76b1c407-82f5-40fb-a542-cb6f3cbb41ba-var-lock\") pod \"installer-4-master-0\" (UID: \"76b1c407-82f5-40fb-a542-cb6f3cbb41ba\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 12 21:17:46.704579 master-0 kubenswrapper[31456]: I0312 21:17:46.704317 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76b1c407-82f5-40fb-a542-cb6f3cbb41ba-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"76b1c407-82f5-40fb-a542-cb6f3cbb41ba\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 12 21:17:46.704579 master-0 kubenswrapper[31456]: I0312 21:17:46.704482 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76b1c407-82f5-40fb-a542-cb6f3cbb41ba-kube-api-access\") pod \"installer-4-master-0\" (UID: \"76b1c407-82f5-40fb-a542-cb6f3cbb41ba\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 12 21:17:46.704780 master-0 kubenswrapper[31456]: I0312 21:17:46.704558 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/76b1c407-82f5-40fb-a542-cb6f3cbb41ba-var-lock\") pod \"installer-4-master-0\" (UID: \"76b1c407-82f5-40fb-a542-cb6f3cbb41ba\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 12 21:17:46.704780 master-0 kubenswrapper[31456]: I0312 21:17:46.704594 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76b1c407-82f5-40fb-a542-cb6f3cbb41ba-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"76b1c407-82f5-40fb-a542-cb6f3cbb41ba\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 12 21:17:46.729017 
master-0 kubenswrapper[31456]: I0312 21:17:46.728844 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76b1c407-82f5-40fb-a542-cb6f3cbb41ba-kube-api-access\") pod \"installer-4-master-0\" (UID: \"76b1c407-82f5-40fb-a542-cb6f3cbb41ba\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 12 21:17:46.926779 master-0 kubenswrapper[31456]: I0312 21:17:46.926536 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 12 21:17:47.107276 master-0 kubenswrapper[31456]: I0312 21:17:47.103381 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-poller-958d4c449-pxhxt"] Mar 12 21:17:47.107276 master-0 kubenswrapper[31456]: I0312 21:17:47.105632 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-poller-958d4c449-pxhxt" Mar 12 21:17:47.121567 master-0 kubenswrapper[31456]: I0312 21:17:47.121497 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-958d4c449-pxhxt"] Mar 12 21:17:47.124642 master-0 kubenswrapper[31456]: I0312 21:17:47.122853 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvg7g\" (UniqueName: \"kubernetes.io/projected/2ce011fd-5a6a-46d5-90ce-0ce335259606-kube-api-access-mvg7g\") pod \"nova-console-poller-958d4c449-pxhxt\" (UID: \"2ce011fd-5a6a-46d5-90ce-0ce335259606\") " pod="sushy-emulator/nova-console-poller-958d4c449-pxhxt" Mar 12 21:17:47.124642 master-0 kubenswrapper[31456]: I0312 21:17:47.123076 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/2ce011fd-5a6a-46d5-90ce-0ce335259606-os-client-config\") pod \"nova-console-poller-958d4c449-pxhxt\" (UID: 
\"2ce011fd-5a6a-46d5-90ce-0ce335259606\") " pod="sushy-emulator/nova-console-poller-958d4c449-pxhxt" Mar 12 21:17:47.225373 master-0 kubenswrapper[31456]: I0312 21:17:47.225327 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/2ce011fd-5a6a-46d5-90ce-0ce335259606-os-client-config\") pod \"nova-console-poller-958d4c449-pxhxt\" (UID: \"2ce011fd-5a6a-46d5-90ce-0ce335259606\") " pod="sushy-emulator/nova-console-poller-958d4c449-pxhxt" Mar 12 21:17:47.225724 master-0 kubenswrapper[31456]: I0312 21:17:47.225699 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvg7g\" (UniqueName: \"kubernetes.io/projected/2ce011fd-5a6a-46d5-90ce-0ce335259606-kube-api-access-mvg7g\") pod \"nova-console-poller-958d4c449-pxhxt\" (UID: \"2ce011fd-5a6a-46d5-90ce-0ce335259606\") " pod="sushy-emulator/nova-console-poller-958d4c449-pxhxt" Mar 12 21:17:47.229231 master-0 kubenswrapper[31456]: I0312 21:17:47.229183 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/2ce011fd-5a6a-46d5-90ce-0ce335259606-os-client-config\") pod \"nova-console-poller-958d4c449-pxhxt\" (UID: \"2ce011fd-5a6a-46d5-90ce-0ce335259606\") " pod="sushy-emulator/nova-console-poller-958d4c449-pxhxt" Mar 12 21:17:47.256222 master-0 kubenswrapper[31456]: I0312 21:17:47.256121 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvg7g\" (UniqueName: \"kubernetes.io/projected/2ce011fd-5a6a-46d5-90ce-0ce335259606-kube-api-access-mvg7g\") pod \"nova-console-poller-958d4c449-pxhxt\" (UID: \"2ce011fd-5a6a-46d5-90ce-0ce335259606\") " pod="sushy-emulator/nova-console-poller-958d4c449-pxhxt" Mar 12 21:17:47.447704 master-0 kubenswrapper[31456]: I0312 21:17:47.446858 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-poller-958d4c449-pxhxt" Mar 12 21:17:47.532537 master-0 kubenswrapper[31456]: I0312 21:17:47.532478 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Mar 12 21:17:47.542104 master-0 kubenswrapper[31456]: W0312 21:17:47.538806 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod76b1c407_82f5_40fb_a542_cb6f3cbb41ba.slice/crio-52331e4898f24a2dcf4a9d5508e2028d7b49ec033a0e36614f9966d4369d78e6 WatchSource:0}: Error finding container 52331e4898f24a2dcf4a9d5508e2028d7b49ec033a0e36614f9966d4369d78e6: Status 404 returned error can't find the container with id 52331e4898f24a2dcf4a9d5508e2028d7b49ec033a0e36614f9966d4369d78e6 Mar 12 21:17:47.947030 master-0 kubenswrapper[31456]: I0312 21:17:47.946961 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-958d4c449-pxhxt"] Mar 12 21:17:47.957312 master-0 kubenswrapper[31456]: W0312 21:17:47.957249 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ce011fd_5a6a_46d5_90ce_0ce335259606.slice/crio-5874b24a9445295190c1671f40dfa49e8bac3c966c418ea747ac42087c738c96 WatchSource:0}: Error finding container 5874b24a9445295190c1671f40dfa49e8bac3c966c418ea747ac42087c738c96: Status 404 returned error can't find the container with id 5874b24a9445295190c1671f40dfa49e8bac3c966c418ea747ac42087c738c96 Mar 12 21:17:47.960041 master-0 kubenswrapper[31456]: I0312 21:17:47.959957 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"76b1c407-82f5-40fb-a542-cb6f3cbb41ba","Type":"ContainerStarted","Data":"52331e4898f24a2dcf4a9d5508e2028d7b49ec033a0e36614f9966d4369d78e6"} Mar 12 21:17:48.969881 master-0 kubenswrapper[31456]: I0312 21:17:48.969694 31456 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"76b1c407-82f5-40fb-a542-cb6f3cbb41ba","Type":"ContainerStarted","Data":"7b0efead690d0dd1163e020ad3f14ae4a7f7fc6cb5ec3279e9b1576d6df4d4e3"} Mar 12 21:17:48.973022 master-0 kubenswrapper[31456]: I0312 21:17:48.972937 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-958d4c449-pxhxt" event={"ID":"2ce011fd-5a6a-46d5-90ce-0ce335259606","Type":"ContainerStarted","Data":"5874b24a9445295190c1671f40dfa49e8bac3c966c418ea747ac42087c738c96"} Mar 12 21:17:48.992515 master-0 kubenswrapper[31456]: I0312 21:17:48.992400 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-master-0" podStartSLOduration=2.9923840630000003 podStartE2EDuration="2.992384063s" podCreationTimestamp="2026-03-12 21:17:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:17:48.985935767 +0000 UTC m=+530.060541095" watchObservedRunningTime="2026-03-12 21:17:48.992384063 +0000 UTC m=+530.066989391" Mar 12 21:17:54.027553 master-0 kubenswrapper[31456]: I0312 21:17:54.027341 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-958d4c449-pxhxt" event={"ID":"2ce011fd-5a6a-46d5-90ce-0ce335259606","Type":"ContainerStarted","Data":"5c9cc6802fe263ebb74787a216d2c7f9bd46f51d9c32fc177483d35f1085b6e1"} Mar 12 21:17:55.039692 master-0 kubenswrapper[31456]: I0312 21:17:55.039602 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-958d4c449-pxhxt" event={"ID":"2ce011fd-5a6a-46d5-90ce-0ce335259606","Type":"ContainerStarted","Data":"10b56d4610fbe001bf8da2e4c4593bdeef0ac5cf1b4a04cf6d977e3197dbba55"} Mar 12 21:18:21.197318 master-0 kubenswrapper[31456]: I0312 21:18:21.197193 31456 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="sushy-emulator/nova-console-poller-958d4c449-pxhxt" podStartSLOduration=27.89157732 podStartE2EDuration="34.197155512s" podCreationTimestamp="2026-03-12 21:17:47 +0000 UTC" firstStartedPulling="2026-03-12 21:17:47.962075414 +0000 UTC m=+529.036680752" lastFinishedPulling="2026-03-12 21:17:54.267653576 +0000 UTC m=+535.342258944" observedRunningTime="2026-03-12 21:17:55.073245352 +0000 UTC m=+536.147850720" watchObservedRunningTime="2026-03-12 21:18:21.197155512 +0000 UTC m=+562.271760880" Mar 12 21:18:21.198505 master-0 kubenswrapper[31456]: I0312 21:18:21.197960 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-recorder-5f59669bc7-h4j98"] Mar 12 21:18:21.199725 master-0 kubenswrapper[31456]: I0312 21:18:21.199651 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-recorder-5f59669bc7-h4j98" Mar 12 21:18:21.247711 master-0 kubenswrapper[31456]: I0312 21:18:21.246704 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-recorder-5f59669bc7-h4j98"] Mar 12 21:18:21.249529 master-0 kubenswrapper[31456]: I0312 21:18:21.249486 31456 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 12 21:18:21.249740 master-0 kubenswrapper[31456]: I0312 21:18:21.249684 31456 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 12 21:18:21.250454 master-0 kubenswrapper[31456]: I0312 21:18:21.250389 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://1d02987cfd443da7225f0df6b3ab9f45e0b88c2171ab5627f4e3845fc50178ec" gracePeriod=30 Mar 12 21:18:21.250565 master-0 kubenswrapper[31456]: 
I0312 21:18:21.250382 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="kube-controller-manager" containerID="cri-o://0b060c904cf7244304798fca1e2e5fa54709b958c12481b7403d731a220633b8" gracePeriod=30 Mar 12 21:18:21.250565 master-0 kubenswrapper[31456]: I0312 21:18:21.250405 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://aadc37b9873c997339d04dc5e3aaeecb47d5f57228484f7cca80ac879f4002d2" gracePeriod=30 Mar 12 21:18:21.250664 master-0 kubenswrapper[31456]: I0312 21:18:21.250382 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="cluster-policy-controller" containerID="cri-o://b626b2974550fdcabce6b08a32cc3b1da47078dee2fd1671f52a14cd3557b052" gracePeriod=30 Mar 12 21:18:21.252107 master-0 kubenswrapper[31456]: E0312 21:18:21.251600 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="kube-controller-manager-cert-syncer" Mar 12 21:18:21.252107 master-0 kubenswrapper[31456]: I0312 21:18:21.251624 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="kube-controller-manager-cert-syncer" Mar 12 21:18:21.252107 master-0 kubenswrapper[31456]: E0312 21:18:21.251639 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="kube-controller-manager" Mar 12 21:18:21.252107 master-0 kubenswrapper[31456]: I0312 21:18:21.251648 31456 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="kube-controller-manager" Mar 12 21:18:21.252107 master-0 kubenswrapper[31456]: E0312 21:18:21.251668 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="kube-controller-manager-recovery-controller" Mar 12 21:18:21.252107 master-0 kubenswrapper[31456]: I0312 21:18:21.251677 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="kube-controller-manager-recovery-controller" Mar 12 21:18:21.252107 master-0 kubenswrapper[31456]: E0312 21:18:21.251717 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="cluster-policy-controller" Mar 12 21:18:21.252107 master-0 kubenswrapper[31456]: I0312 21:18:21.251726 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="cluster-policy-controller" Mar 12 21:18:21.252107 master-0 kubenswrapper[31456]: E0312 21:18:21.251773 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="cluster-policy-controller" Mar 12 21:18:21.252107 master-0 kubenswrapper[31456]: I0312 21:18:21.251783 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="cluster-policy-controller" Mar 12 21:18:21.252107 master-0 kubenswrapper[31456]: E0312 21:18:21.251932 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="kube-controller-manager" Mar 12 21:18:21.252107 master-0 kubenswrapper[31456]: I0312 21:18:21.251965 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="kube-controller-manager" Mar 12 21:18:21.252868 master-0 kubenswrapper[31456]: I0312 21:18:21.252281 31456 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="kube-controller-manager-recovery-controller" Mar 12 21:18:21.252868 master-0 kubenswrapper[31456]: I0312 21:18:21.252305 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="kube-controller-manager" Mar 12 21:18:21.252868 master-0 kubenswrapper[31456]: I0312 21:18:21.252324 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="cluster-policy-controller" Mar 12 21:18:21.252868 master-0 kubenswrapper[31456]: I0312 21:18:21.252347 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="kube-controller-manager" Mar 12 21:18:21.252868 master-0 kubenswrapper[31456]: I0312 21:18:21.252366 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="kube-controller-manager-cert-syncer" Mar 12 21:18:21.252868 master-0 kubenswrapper[31456]: I0312 21:18:21.252410 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="kube-controller-manager" Mar 12 21:18:21.252868 master-0 kubenswrapper[31456]: I0312 21:18:21.252427 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="cluster-policy-controller" Mar 12 21:18:21.252868 master-0 kubenswrapper[31456]: E0312 21:18:21.252717 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="kube-controller-manager" Mar 12 21:18:21.252868 master-0 kubenswrapper[31456]: I0312 21:18:21.252733 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" containerName="kube-controller-manager" Mar 12 21:18:21.302061 master-0 kubenswrapper[31456]: I0312 21:18:21.302022 31456 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/006929ab-bb43-489d-99b4-0844da59094b-os-client-config\") pod \"nova-console-recorder-5f59669bc7-h4j98\" (UID: \"006929ab-bb43-489d-99b4-0844da59094b\") " pod="sushy-emulator/nova-console-recorder-5f59669bc7-h4j98" Mar 12 21:18:21.404834 master-0 kubenswrapper[31456]: I0312 21:18:21.404116 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/496fae4ecf26c64dab8ba172b8010a97-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"496fae4ecf26c64dab8ba172b8010a97\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:18:21.404834 master-0 kubenswrapper[31456]: I0312 21:18:21.404246 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/006929ab-bb43-489d-99b4-0844da59094b-os-client-config\") pod \"nova-console-recorder-5f59669bc7-h4j98\" (UID: \"006929ab-bb43-489d-99b4-0844da59094b\") " pod="sushy-emulator/nova-console-recorder-5f59669bc7-h4j98" Mar 12 21:18:21.404834 master-0 kubenswrapper[31456]: I0312 21:18:21.404566 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/006929ab-bb43-489d-99b4-0844da59094b-nova-console-recordings-pv\") pod \"nova-console-recorder-5f59669bc7-h4j98\" (UID: \"006929ab-bb43-489d-99b4-0844da59094b\") " pod="sushy-emulator/nova-console-recorder-5f59669bc7-h4j98" Mar 12 21:18:21.404834 master-0 kubenswrapper[31456]: I0312 21:18:21.404663 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/496fae4ecf26c64dab8ba172b8010a97-cert-dir\") pod 
\"kube-controller-manager-master-0\" (UID: \"496fae4ecf26c64dab8ba172b8010a97\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:18:21.404834 master-0 kubenswrapper[31456]: I0312 21:18:21.404689 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bh4l\" (UniqueName: \"kubernetes.io/projected/006929ab-bb43-489d-99b4-0844da59094b-kube-api-access-2bh4l\") pod \"nova-console-recorder-5f59669bc7-h4j98\" (UID: \"006929ab-bb43-489d-99b4-0844da59094b\") " pod="sushy-emulator/nova-console-recorder-5f59669bc7-h4j98" Mar 12 21:18:21.408548 master-0 kubenswrapper[31456]: I0312 21:18:21.408502 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/006929ab-bb43-489d-99b4-0844da59094b-os-client-config\") pod \"nova-console-recorder-5f59669bc7-h4j98\" (UID: \"006929ab-bb43-489d-99b4-0844da59094b\") " pod="sushy-emulator/nova-console-recorder-5f59669bc7-h4j98" Mar 12 21:18:21.433642 master-0 kubenswrapper[31456]: I0312 21:18:21.433573 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/kube-controller-manager/1.log" Mar 12 21:18:21.435408 master-0 kubenswrapper[31456]: I0312 21:18:21.435358 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/cluster-policy-controller/5.log" Mar 12 21:18:21.437240 master-0 kubenswrapper[31456]: I0312 21:18:21.437187 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/kube-controller-manager-cert-syncer/0.log" Mar 12 21:18:21.437421 master-0 kubenswrapper[31456]: I0312 21:18:21.437325 31456 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:18:21.440893 master-0 kubenswrapper[31456]: I0312 21:18:21.440784 31456 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="7678a2e61b792fe3be55b1c6f67b2aa2" podUID="496fae4ecf26c64dab8ba172b8010a97" Mar 12 21:18:21.506311 master-0 kubenswrapper[31456]: I0312 21:18:21.506258 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/496fae4ecf26c64dab8ba172b8010a97-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"496fae4ecf26c64dab8ba172b8010a97\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:18:21.506655 master-0 kubenswrapper[31456]: I0312 21:18:21.506417 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/496fae4ecf26c64dab8ba172b8010a97-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"496fae4ecf26c64dab8ba172b8010a97\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:18:21.506755 master-0 kubenswrapper[31456]: I0312 21:18:21.506732 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/006929ab-bb43-489d-99b4-0844da59094b-nova-console-recordings-pv\") pod \"nova-console-recorder-5f59669bc7-h4j98\" (UID: \"006929ab-bb43-489d-99b4-0844da59094b\") " pod="sushy-emulator/nova-console-recorder-5f59669bc7-h4j98" Mar 12 21:18:21.506975 master-0 kubenswrapper[31456]: I0312 21:18:21.506957 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/496fae4ecf26c64dab8ba172b8010a97-cert-dir\") pod 
\"kube-controller-manager-master-0\" (UID: \"496fae4ecf26c64dab8ba172b8010a97\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:18:21.507110 master-0 kubenswrapper[31456]: I0312 21:18:21.507092 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bh4l\" (UniqueName: \"kubernetes.io/projected/006929ab-bb43-489d-99b4-0844da59094b-kube-api-access-2bh4l\") pod \"nova-console-recorder-5f59669bc7-h4j98\" (UID: \"006929ab-bb43-489d-99b4-0844da59094b\") " pod="sushy-emulator/nova-console-recorder-5f59669bc7-h4j98" Mar 12 21:18:21.507235 master-0 kubenswrapper[31456]: I0312 21:18:21.507042 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/496fae4ecf26c64dab8ba172b8010a97-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"496fae4ecf26c64dab8ba172b8010a97\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:18:21.524232 master-0 kubenswrapper[31456]: I0312 21:18:21.524199 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bh4l\" (UniqueName: \"kubernetes.io/projected/006929ab-bb43-489d-99b4-0844da59094b-kube-api-access-2bh4l\") pod \"nova-console-recorder-5f59669bc7-h4j98\" (UID: \"006929ab-bb43-489d-99b4-0844da59094b\") " pod="sushy-emulator/nova-console-recorder-5f59669bc7-h4j98" Mar 12 21:18:21.608613 master-0 kubenswrapper[31456]: I0312 21:18:21.608527 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7678a2e61b792fe3be55b1c6f67b2aa2-resource-dir\") pod \"7678a2e61b792fe3be55b1c6f67b2aa2\" (UID: \"7678a2e61b792fe3be55b1c6f67b2aa2\") " Mar 12 21:18:21.608954 master-0 kubenswrapper[31456]: I0312 21:18:21.608762 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/7678a2e61b792fe3be55b1c6f67b2aa2-cert-dir\") pod \"7678a2e61b792fe3be55b1c6f67b2aa2\" (UID: \"7678a2e61b792fe3be55b1c6f67b2aa2\") " Mar 12 21:18:21.609080 master-0 kubenswrapper[31456]: I0312 21:18:21.608955 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7678a2e61b792fe3be55b1c6f67b2aa2-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "7678a2e61b792fe3be55b1c6f67b2aa2" (UID: "7678a2e61b792fe3be55b1c6f67b2aa2"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:18:21.609237 master-0 kubenswrapper[31456]: I0312 21:18:21.609207 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7678a2e61b792fe3be55b1c6f67b2aa2-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "7678a2e61b792fe3be55b1c6f67b2aa2" (UID: "7678a2e61b792fe3be55b1c6f67b2aa2"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:18:21.609506 master-0 kubenswrapper[31456]: I0312 21:18:21.609485 31456 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7678a2e61b792fe3be55b1c6f67b2aa2-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 21:18:21.609594 master-0 kubenswrapper[31456]: I0312 21:18:21.609581 31456 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7678a2e61b792fe3be55b1c6f67b2aa2-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 21:18:22.162919 master-0 kubenswrapper[31456]: I0312 21:18:22.162753 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/006929ab-bb43-489d-99b4-0844da59094b-nova-console-recordings-pv\") pod \"nova-console-recorder-5f59669bc7-h4j98\" (UID: \"006929ab-bb43-489d-99b4-0844da59094b\") " 
pod="sushy-emulator/nova-console-recorder-5f59669bc7-h4j98" Mar 12 21:18:22.315283 master-0 kubenswrapper[31456]: I0312 21:18:22.315214 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/kube-controller-manager/1.log" Mar 12 21:18:22.316423 master-0 kubenswrapper[31456]: I0312 21:18:22.316353 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/cluster-policy-controller/5.log" Mar 12 21:18:22.317504 master-0 kubenswrapper[31456]: I0312 21:18:22.317465 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/kube-controller-manager-cert-syncer/0.log" Mar 12 21:18:22.317613 master-0 kubenswrapper[31456]: I0312 21:18:22.317531 31456 generic.go:334] "Generic (PLEG): container finished" podID="7678a2e61b792fe3be55b1c6f67b2aa2" containerID="0b060c904cf7244304798fca1e2e5fa54709b958c12481b7403d731a220633b8" exitCode=0 Mar 12 21:18:22.317613 master-0 kubenswrapper[31456]: I0312 21:18:22.317552 31456 generic.go:334] "Generic (PLEG): container finished" podID="7678a2e61b792fe3be55b1c6f67b2aa2" containerID="b626b2974550fdcabce6b08a32cc3b1da47078dee2fd1671f52a14cd3557b052" exitCode=0 Mar 12 21:18:22.317613 master-0 kubenswrapper[31456]: I0312 21:18:22.317563 31456 generic.go:334] "Generic (PLEG): container finished" podID="7678a2e61b792fe3be55b1c6f67b2aa2" containerID="aadc37b9873c997339d04dc5e3aaeecb47d5f57228484f7cca80ac879f4002d2" exitCode=0 Mar 12 21:18:22.317613 master-0 kubenswrapper[31456]: I0312 21:18:22.317574 31456 generic.go:334] "Generic (PLEG): container finished" podID="7678a2e61b792fe3be55b1c6f67b2aa2" containerID="1d02987cfd443da7225f0df6b3ab9f45e0b88c2171ab5627f4e3845fc50178ec" exitCode=2 Mar 12 21:18:22.317613 master-0 
kubenswrapper[31456]: I0312 21:18:22.317611 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf1fca480b54d4cfe929b5e83abff120bff7b90a008395758afbaeaea08fe4d6" Mar 12 21:18:22.318161 master-0 kubenswrapper[31456]: I0312 21:18:22.317631 31456 scope.go:117] "RemoveContainer" containerID="d60d46e4b651aaa6fc0f310f1cd525f43bd8602c132272870fb17e4bead2dcb6" Mar 12 21:18:22.318161 master-0 kubenswrapper[31456]: I0312 21:18:22.317719 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:18:22.321752 master-0 kubenswrapper[31456]: I0312 21:18:22.321697 31456 generic.go:334] "Generic (PLEG): container finished" podID="76b1c407-82f5-40fb-a542-cb6f3cbb41ba" containerID="7b0efead690d0dd1163e020ad3f14ae4a7f7fc6cb5ec3279e9b1576d6df4d4e3" exitCode=0 Mar 12 21:18:22.321752 master-0 kubenswrapper[31456]: I0312 21:18:22.321735 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"76b1c407-82f5-40fb-a542-cb6f3cbb41ba","Type":"ContainerDied","Data":"7b0efead690d0dd1163e020ad3f14ae4a7f7fc6cb5ec3279e9b1576d6df4d4e3"} Mar 12 21:18:22.322379 master-0 kubenswrapper[31456]: I0312 21:18:22.322265 31456 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="7678a2e61b792fe3be55b1c6f67b2aa2" podUID="496fae4ecf26c64dab8ba172b8010a97" Mar 12 21:18:22.339613 master-0 kubenswrapper[31456]: I0312 21:18:22.339552 31456 scope.go:117] "RemoveContainer" containerID="a2a7bc20f4e9a2a1af8d7434056bb400f5ad20ab8f0474397aa76de25d0db770" Mar 12 21:18:22.355489 master-0 kubenswrapper[31456]: I0312 21:18:22.355422 31456 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
oldPodUID="7678a2e61b792fe3be55b1c6f67b2aa2" podUID="496fae4ecf26c64dab8ba172b8010a97" Mar 12 21:18:22.453192 master-0 kubenswrapper[31456]: I0312 21:18:22.453092 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-recorder-5f59669bc7-h4j98" Mar 12 21:18:22.978691 master-0 kubenswrapper[31456]: W0312 21:18:22.978621 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod006929ab_bb43_489d_99b4_0844da59094b.slice/crio-c9234afe2f29ba302343e703fc84d8dad488ee134a8c4d0cb3775d8f61fb168a WatchSource:0}: Error finding container c9234afe2f29ba302343e703fc84d8dad488ee134a8c4d0cb3775d8f61fb168a: Status 404 returned error can't find the container with id c9234afe2f29ba302343e703fc84d8dad488ee134a8c4d0cb3775d8f61fb168a Mar 12 21:18:22.981106 master-0 kubenswrapper[31456]: I0312 21:18:22.981016 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-recorder-5f59669bc7-h4j98"] Mar 12 21:18:23.190780 master-0 kubenswrapper[31456]: I0312 21:18:23.190670 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7678a2e61b792fe3be55b1c6f67b2aa2" path="/var/lib/kubelet/pods/7678a2e61b792fe3be55b1c6f67b2aa2/volumes" Mar 12 21:18:23.335629 master-0 kubenswrapper[31456]: I0312 21:18:23.335533 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_7678a2e61b792fe3be55b1c6f67b2aa2/kube-controller-manager-cert-syncer/0.log" Mar 12 21:18:23.337220 master-0 kubenswrapper[31456]: I0312 21:18:23.337154 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-5f59669bc7-h4j98" event={"ID":"006929ab-bb43-489d-99b4-0844da59094b","Type":"ContainerStarted","Data":"c9234afe2f29ba302343e703fc84d8dad488ee134a8c4d0cb3775d8f61fb168a"} Mar 12 21:18:23.797993 master-0 kubenswrapper[31456]: I0312 
21:18:23.797610 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 12 21:18:23.953540 master-0 kubenswrapper[31456]: I0312 21:18:23.953284 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/76b1c407-82f5-40fb-a542-cb6f3cbb41ba-var-lock\") pod \"76b1c407-82f5-40fb-a542-cb6f3cbb41ba\" (UID: \"76b1c407-82f5-40fb-a542-cb6f3cbb41ba\") " Mar 12 21:18:23.953540 master-0 kubenswrapper[31456]: I0312 21:18:23.953391 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76b1c407-82f5-40fb-a542-cb6f3cbb41ba-kube-api-access\") pod \"76b1c407-82f5-40fb-a542-cb6f3cbb41ba\" (UID: \"76b1c407-82f5-40fb-a542-cb6f3cbb41ba\") " Mar 12 21:18:23.953540 master-0 kubenswrapper[31456]: I0312 21:18:23.953406 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76b1c407-82f5-40fb-a542-cb6f3cbb41ba-var-lock" (OuterVolumeSpecName: "var-lock") pod "76b1c407-82f5-40fb-a542-cb6f3cbb41ba" (UID: "76b1c407-82f5-40fb-a542-cb6f3cbb41ba"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:18:23.953540 master-0 kubenswrapper[31456]: I0312 21:18:23.953430 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76b1c407-82f5-40fb-a542-cb6f3cbb41ba-kubelet-dir\") pod \"76b1c407-82f5-40fb-a542-cb6f3cbb41ba\" (UID: \"76b1c407-82f5-40fb-a542-cb6f3cbb41ba\") " Mar 12 21:18:23.953540 master-0 kubenswrapper[31456]: I0312 21:18:23.953484 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76b1c407-82f5-40fb-a542-cb6f3cbb41ba-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "76b1c407-82f5-40fb-a542-cb6f3cbb41ba" (UID: "76b1c407-82f5-40fb-a542-cb6f3cbb41ba"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:18:23.954262 master-0 kubenswrapper[31456]: I0312 21:18:23.954199 31456 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/76b1c407-82f5-40fb-a542-cb6f3cbb41ba-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 12 21:18:23.954262 master-0 kubenswrapper[31456]: I0312 21:18:23.954245 31456 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76b1c407-82f5-40fb-a542-cb6f3cbb41ba-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 12 21:18:23.959344 master-0 kubenswrapper[31456]: I0312 21:18:23.959277 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76b1c407-82f5-40fb-a542-cb6f3cbb41ba-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "76b1c407-82f5-40fb-a542-cb6f3cbb41ba" (UID: "76b1c407-82f5-40fb-a542-cb6f3cbb41ba"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:18:24.055723 master-0 kubenswrapper[31456]: I0312 21:18:24.055567 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76b1c407-82f5-40fb-a542-cb6f3cbb41ba-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 12 21:18:24.347120 master-0 kubenswrapper[31456]: I0312 21:18:24.347060 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"76b1c407-82f5-40fb-a542-cb6f3cbb41ba","Type":"ContainerDied","Data":"52331e4898f24a2dcf4a9d5508e2028d7b49ec033a0e36614f9966d4369d78e6"} Mar 12 21:18:24.347120 master-0 kubenswrapper[31456]: I0312 21:18:24.347110 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52331e4898f24a2dcf4a9d5508e2028d7b49ec033a0e36614f9966d4369d78e6" Mar 12 21:18:24.348161 master-0 kubenswrapper[31456]: I0312 21:18:24.347136 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 12 21:18:31.413555 master-0 kubenswrapper[31456]: I0312 21:18:31.412615 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-5f59669bc7-h4j98" event={"ID":"006929ab-bb43-489d-99b4-0844da59094b","Type":"ContainerStarted","Data":"bfe3e92b0299ebb18844ad9761ed9985993d9180fe2267f7e6a5b48b7bc902c7"} Mar 12 21:18:32.426728 master-0 kubenswrapper[31456]: I0312 21:18:32.426560 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-5f59669bc7-h4j98" event={"ID":"006929ab-bb43-489d-99b4-0844da59094b","Type":"ContainerStarted","Data":"7ad2e4e09266a7aecee9ed7cf1aa63ebaa240c747e2db3138b819acfab5a29d7"} Mar 12 21:18:32.463330 master-0 kubenswrapper[31456]: I0312 21:18:32.463182 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/nova-console-recorder-5f59669bc7-h4j98" podStartSLOduration=2.578811134 podStartE2EDuration="11.463153715s" podCreationTimestamp="2026-03-12 21:18:21 +0000 UTC" firstStartedPulling="2026-03-12 21:18:22.980679829 +0000 UTC m=+564.055285187" lastFinishedPulling="2026-03-12 21:18:31.8650224 +0000 UTC m=+572.939627768" observedRunningTime="2026-03-12 21:18:32.45027756 +0000 UTC m=+573.524882938" watchObservedRunningTime="2026-03-12 21:18:32.463153715 +0000 UTC m=+573.537759083" Mar 12 21:18:33.169038 master-0 kubenswrapper[31456]: I0312 21:18:33.168906 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:18:33.192861 master-0 kubenswrapper[31456]: I0312 21:18:33.192767 31456 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7a2e1216-049b-4709-8245-c516871fe907" Mar 12 21:18:33.192861 master-0 kubenswrapper[31456]: I0312 21:18:33.192867 31456 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="7a2e1216-049b-4709-8245-c516871fe907" Mar 12 21:18:33.222857 master-0 kubenswrapper[31456]: I0312 21:18:33.221169 31456 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:18:33.225014 master-0 kubenswrapper[31456]: I0312 21:18:33.224398 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 12 21:18:33.235980 master-0 kubenswrapper[31456]: I0312 21:18:33.235886 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 12 21:18:33.244972 master-0 kubenswrapper[31456]: I0312 21:18:33.244888 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:18:33.254529 master-0 kubenswrapper[31456]: I0312 21:18:33.254456 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 12 21:18:33.283606 master-0 kubenswrapper[31456]: W0312 21:18:33.283512 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod496fae4ecf26c64dab8ba172b8010a97.slice/crio-2fbc993beace5624d4ff457ac2262a66db4c4ce7679b82048fb9bf3b4b2e84ad WatchSource:0}: Error finding container 2fbc993beace5624d4ff457ac2262a66db4c4ce7679b82048fb9bf3b4b2e84ad: Status 404 returned error can't find the container with id 2fbc993beace5624d4ff457ac2262a66db4c4ce7679b82048fb9bf3b4b2e84ad Mar 12 21:18:33.436728 master-0 kubenswrapper[31456]: I0312 21:18:33.436536 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"496fae4ecf26c64dab8ba172b8010a97","Type":"ContainerStarted","Data":"2fbc993beace5624d4ff457ac2262a66db4c4ce7679b82048fb9bf3b4b2e84ad"} Mar 12 21:18:34.450447 master-0 kubenswrapper[31456]: I0312 21:18:34.450394 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"496fae4ecf26c64dab8ba172b8010a97","Type":"ContainerStarted","Data":"a32c5b9402b2371c748d1d62216fbede733001c4864a5b2f3ac62fb3e145d979"} Mar 12 21:18:34.450842 master-0 kubenswrapper[31456]: I0312 21:18:34.450461 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"496fae4ecf26c64dab8ba172b8010a97","Type":"ContainerStarted","Data":"53fd9364db2c7e40ad39df20696223daf762d6ffcee75bd08daa137d5f83939b"} Mar 12 21:18:34.450842 master-0 kubenswrapper[31456]: I0312 21:18:34.450481 31456 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"496fae4ecf26c64dab8ba172b8010a97","Type":"ContainerStarted","Data":"8ae7de83d3e823e9cedc8c11283343db9e1e75d028449713f5f762b9ea8ac4d3"} Mar 12 21:18:35.467091 master-0 kubenswrapper[31456]: I0312 21:18:35.467004 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"496fae4ecf26c64dab8ba172b8010a97","Type":"ContainerStarted","Data":"205214c562be263c03bda431f0ab4b92dfd095eea2c2f0a5104ec21a6b7589fa"} Mar 12 21:18:35.518429 master-0 kubenswrapper[31456]: I0312 21:18:35.518211 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.518177686 podStartE2EDuration="2.518177686s" podCreationTimestamp="2026-03-12 21:18:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:18:35.511607075 +0000 UTC m=+576.586212453" watchObservedRunningTime="2026-03-12 21:18:35.518177686 +0000 UTC m=+576.592783064" Mar 12 21:18:43.246265 master-0 kubenswrapper[31456]: I0312 21:18:43.246082 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:18:43.247439 master-0 kubenswrapper[31456]: I0312 21:18:43.246863 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:18:43.247969 master-0 kubenswrapper[31456]: I0312 21:18:43.247785 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:18:43.247969 master-0 kubenswrapper[31456]: I0312 21:18:43.247857 31456 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:18:43.248378 master-0 kubenswrapper[31456]: I0312 21:18:43.248326 31456 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 12 21:18:43.248490 master-0 kubenswrapper[31456]: I0312 21:18:43.248396 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="496fae4ecf26c64dab8ba172b8010a97" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 12 21:18:43.252833 master-0 kubenswrapper[31456]: I0312 21:18:43.252756 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:18:43.563219 master-0 kubenswrapper[31456]: I0312 21:18:43.563017 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:18:53.245782 master-0 kubenswrapper[31456]: I0312 21:18:53.245695 31456 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 12 21:18:53.247094 master-0 kubenswrapper[31456]: I0312 21:18:53.245846 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
podUID="496fae4ecf26c64dab8ba172b8010a97" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 12 21:18:59.659092 master-0 kubenswrapper[31456]: I0312 21:18:59.659026 31456 scope.go:117] "RemoveContainer" containerID="aadc37b9873c997339d04dc5e3aaeecb47d5f57228484f7cca80ac879f4002d2" Mar 12 21:18:59.674683 master-0 kubenswrapper[31456]: I0312 21:18:59.674640 31456 scope.go:117] "RemoveContainer" containerID="b626b2974550fdcabce6b08a32cc3b1da47078dee2fd1671f52a14cd3557b052" Mar 12 21:18:59.694610 master-0 kubenswrapper[31456]: I0312 21:18:59.694554 31456 scope.go:117] "RemoveContainer" containerID="1d02987cfd443da7225f0df6b3ab9f45e0b88c2171ab5627f4e3845fc50178ec" Mar 12 21:19:03.252481 master-0 kubenswrapper[31456]: I0312 21:19:03.252376 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:19:03.259124 master-0 kubenswrapper[31456]: I0312 21:19:03.259058 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 12 21:19:11.454506 master-0 kubenswrapper[31456]: I0312 21:19:11.454432 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d447brg"] Mar 12 21:19:11.455268 master-0 kubenswrapper[31456]: E0312 21:19:11.454771 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76b1c407-82f5-40fb-a542-cb6f3cbb41ba" containerName="installer" Mar 12 21:19:11.455268 master-0 kubenswrapper[31456]: I0312 21:19:11.454788 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="76b1c407-82f5-40fb-a542-cb6f3cbb41ba" containerName="installer" Mar 12 21:19:11.455268 master-0 kubenswrapper[31456]: I0312 21:19:11.455018 31456 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="76b1c407-82f5-40fb-a542-cb6f3cbb41ba" containerName="installer" Mar 12 21:19:11.456241 master-0 kubenswrapper[31456]: I0312 21:19:11.456207 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d447brg" Mar 12 21:19:11.466698 master-0 kubenswrapper[31456]: I0312 21:19:11.466643 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vtgkl" Mar 12 21:19:11.489847 master-0 kubenswrapper[31456]: I0312 21:19:11.486930 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d447brg"] Mar 12 21:19:11.545139 master-0 kubenswrapper[31456]: I0312 21:19:11.545072 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqshz\" (UniqueName: \"kubernetes.io/projected/3ecdea6c-95d9-4c09-aaa0-3979b74c2835-kube-api-access-hqshz\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d447brg\" (UID: \"3ecdea6c-95d9-4c09-aaa0-3979b74c2835\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d447brg" Mar 12 21:19:11.545139 master-0 kubenswrapper[31456]: I0312 21:19:11.545144 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3ecdea6c-95d9-4c09-aaa0-3979b74c2835-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d447brg\" (UID: \"3ecdea6c-95d9-4c09-aaa0-3979b74c2835\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d447brg" Mar 12 21:19:11.545393 master-0 kubenswrapper[31456]: I0312 21:19:11.545196 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/3ecdea6c-95d9-4c09-aaa0-3979b74c2835-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d447brg\" (UID: \"3ecdea6c-95d9-4c09-aaa0-3979b74c2835\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d447brg" Mar 12 21:19:11.646310 master-0 kubenswrapper[31456]: I0312 21:19:11.646243 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3ecdea6c-95d9-4c09-aaa0-3979b74c2835-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d447brg\" (UID: \"3ecdea6c-95d9-4c09-aaa0-3979b74c2835\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d447brg" Mar 12 21:19:11.646521 master-0 kubenswrapper[31456]: I0312 21:19:11.646456 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3ecdea6c-95d9-4c09-aaa0-3979b74c2835-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d447brg\" (UID: \"3ecdea6c-95d9-4c09-aaa0-3979b74c2835\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d447brg" Mar 12 21:19:11.646574 master-0 kubenswrapper[31456]: I0312 21:19:11.646554 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqshz\" (UniqueName: \"kubernetes.io/projected/3ecdea6c-95d9-4c09-aaa0-3979b74c2835-kube-api-access-hqshz\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d447brg\" (UID: \"3ecdea6c-95d9-4c09-aaa0-3979b74c2835\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d447brg" Mar 12 21:19:11.646868 master-0 kubenswrapper[31456]: I0312 21:19:11.646786 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3ecdea6c-95d9-4c09-aaa0-3979b74c2835-util\") pod 
\"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d447brg\" (UID: \"3ecdea6c-95d9-4c09-aaa0-3979b74c2835\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d447brg" Mar 12 21:19:11.646950 master-0 kubenswrapper[31456]: I0312 21:19:11.646912 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3ecdea6c-95d9-4c09-aaa0-3979b74c2835-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d447brg\" (UID: \"3ecdea6c-95d9-4c09-aaa0-3979b74c2835\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d447brg" Mar 12 21:19:11.666553 master-0 kubenswrapper[31456]: I0312 21:19:11.666517 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqshz\" (UniqueName: \"kubernetes.io/projected/3ecdea6c-95d9-4c09-aaa0-3979b74c2835-kube-api-access-hqshz\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d447brg\" (UID: \"3ecdea6c-95d9-4c09-aaa0-3979b74c2835\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d447brg" Mar 12 21:19:11.771652 master-0 kubenswrapper[31456]: I0312 21:19:11.771490 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d447brg" Mar 12 21:19:12.304182 master-0 kubenswrapper[31456]: I0312 21:19:12.304093 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d447brg"] Mar 12 21:19:12.309556 master-0 kubenswrapper[31456]: W0312 21:19:12.309501 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ecdea6c_95d9_4c09_aaa0_3979b74c2835.slice/crio-44029267d74ac584aaabb566c675fbded783fd1867fff954b1adaab7612caf5f WatchSource:0}: Error finding container 44029267d74ac584aaabb566c675fbded783fd1867fff954b1adaab7612caf5f: Status 404 returned error can't find the container with id 44029267d74ac584aaabb566c675fbded783fd1867fff954b1adaab7612caf5f Mar 12 21:19:12.848536 master-0 kubenswrapper[31456]: I0312 21:19:12.848455 31456 generic.go:334] "Generic (PLEG): container finished" podID="3ecdea6c-95d9-4c09-aaa0-3979b74c2835" containerID="a4fbdb91816584187eddb4ceb366eeec72dff4e2919a18713cd915f41ba967e2" exitCode=0 Mar 12 21:19:12.849192 master-0 kubenswrapper[31456]: I0312 21:19:12.849142 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d447brg" event={"ID":"3ecdea6c-95d9-4c09-aaa0-3979b74c2835","Type":"ContainerDied","Data":"a4fbdb91816584187eddb4ceb366eeec72dff4e2919a18713cd915f41ba967e2"} Mar 12 21:19:12.849310 master-0 kubenswrapper[31456]: I0312 21:19:12.849290 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d447brg" event={"ID":"3ecdea6c-95d9-4c09-aaa0-3979b74c2835","Type":"ContainerStarted","Data":"44029267d74ac584aaabb566c675fbded783fd1867fff954b1adaab7612caf5f"} Mar 12 21:19:14.879445 master-0 kubenswrapper[31456]: I0312 21:19:14.879335 31456 
generic.go:334] "Generic (PLEG): container finished" podID="3ecdea6c-95d9-4c09-aaa0-3979b74c2835" containerID="0b88feb5d5acd0c31cc62735db986d33cd353a5ac5214a0a4096fede73221429" exitCode=0 Mar 12 21:19:14.879445 master-0 kubenswrapper[31456]: I0312 21:19:14.879422 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d447brg" event={"ID":"3ecdea6c-95d9-4c09-aaa0-3979b74c2835","Type":"ContainerDied","Data":"0b88feb5d5acd0c31cc62735db986d33cd353a5ac5214a0a4096fede73221429"} Mar 12 21:19:15.891649 master-0 kubenswrapper[31456]: I0312 21:19:15.891554 31456 generic.go:334] "Generic (PLEG): container finished" podID="3ecdea6c-95d9-4c09-aaa0-3979b74c2835" containerID="3199525df561304f241a3e8cf0585b3a1e813bf6de5ef3a6e150622cebaeef43" exitCode=0 Mar 12 21:19:15.891649 master-0 kubenswrapper[31456]: I0312 21:19:15.891626 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d447brg" event={"ID":"3ecdea6c-95d9-4c09-aaa0-3979b74c2835","Type":"ContainerDied","Data":"3199525df561304f241a3e8cf0585b3a1e813bf6de5ef3a6e150622cebaeef43"} Mar 12 21:19:17.280183 master-0 kubenswrapper[31456]: I0312 21:19:17.280115 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d447brg" Mar 12 21:19:17.455287 master-0 kubenswrapper[31456]: I0312 21:19:17.455202 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3ecdea6c-95d9-4c09-aaa0-3979b74c2835-bundle\") pod \"3ecdea6c-95d9-4c09-aaa0-3979b74c2835\" (UID: \"3ecdea6c-95d9-4c09-aaa0-3979b74c2835\") " Mar 12 21:19:17.455287 master-0 kubenswrapper[31456]: I0312 21:19:17.455284 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3ecdea6c-95d9-4c09-aaa0-3979b74c2835-util\") pod \"3ecdea6c-95d9-4c09-aaa0-3979b74c2835\" (UID: \"3ecdea6c-95d9-4c09-aaa0-3979b74c2835\") " Mar 12 21:19:17.455574 master-0 kubenswrapper[31456]: I0312 21:19:17.455379 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqshz\" (UniqueName: \"kubernetes.io/projected/3ecdea6c-95d9-4c09-aaa0-3979b74c2835-kube-api-access-hqshz\") pod \"3ecdea6c-95d9-4c09-aaa0-3979b74c2835\" (UID: \"3ecdea6c-95d9-4c09-aaa0-3979b74c2835\") " Mar 12 21:19:17.456800 master-0 kubenswrapper[31456]: I0312 21:19:17.456637 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ecdea6c-95d9-4c09-aaa0-3979b74c2835-bundle" (OuterVolumeSpecName: "bundle") pod "3ecdea6c-95d9-4c09-aaa0-3979b74c2835" (UID: "3ecdea6c-95d9-4c09-aaa0-3979b74c2835"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 21:19:17.460570 master-0 kubenswrapper[31456]: I0312 21:19:17.460500 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ecdea6c-95d9-4c09-aaa0-3979b74c2835-kube-api-access-hqshz" (OuterVolumeSpecName: "kube-api-access-hqshz") pod "3ecdea6c-95d9-4c09-aaa0-3979b74c2835" (UID: "3ecdea6c-95d9-4c09-aaa0-3979b74c2835"). InnerVolumeSpecName "kube-api-access-hqshz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:19:17.468074 master-0 kubenswrapper[31456]: I0312 21:19:17.468031 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ecdea6c-95d9-4c09-aaa0-3979b74c2835-util" (OuterVolumeSpecName: "util") pod "3ecdea6c-95d9-4c09-aaa0-3979b74c2835" (UID: "3ecdea6c-95d9-4c09-aaa0-3979b74c2835"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 21:19:17.560105 master-0 kubenswrapper[31456]: I0312 21:19:17.559937 31456 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3ecdea6c-95d9-4c09-aaa0-3979b74c2835-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:19:17.560105 master-0 kubenswrapper[31456]: I0312 21:19:17.560028 31456 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3ecdea6c-95d9-4c09-aaa0-3979b74c2835-util\") on node \"master-0\" DevicePath \"\"" Mar 12 21:19:17.560105 master-0 kubenswrapper[31456]: I0312 21:19:17.560046 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqshz\" (UniqueName: \"kubernetes.io/projected/3ecdea6c-95d9-4c09-aaa0-3979b74c2835-kube-api-access-hqshz\") on node \"master-0\" DevicePath \"\"" Mar 12 21:19:17.910095 master-0 kubenswrapper[31456]: I0312 21:19:17.909975 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d447brg" event={"ID":"3ecdea6c-95d9-4c09-aaa0-3979b74c2835","Type":"ContainerDied","Data":"44029267d74ac584aaabb566c675fbded783fd1867fff954b1adaab7612caf5f"} Mar 12 21:19:17.910095 master-0 kubenswrapper[31456]: I0312 21:19:17.910077 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44029267d74ac584aaabb566c675fbded783fd1867fff954b1adaab7612caf5f" Mar 12 21:19:17.910623 master-0 kubenswrapper[31456]: I0312 21:19:17.910150 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d447brg" Mar 12 21:19:25.086129 master-0 kubenswrapper[31456]: I0312 21:19:25.086058 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/lvms-operator-786ffc8dc6-tzvrv"] Mar 12 21:19:25.086907 master-0 kubenswrapper[31456]: E0312 21:19:25.086389 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ecdea6c-95d9-4c09-aaa0-3979b74c2835" containerName="pull" Mar 12 21:19:25.086907 master-0 kubenswrapper[31456]: I0312 21:19:25.086404 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ecdea6c-95d9-4c09-aaa0-3979b74c2835" containerName="pull" Mar 12 21:19:25.086907 master-0 kubenswrapper[31456]: E0312 21:19:25.086421 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ecdea6c-95d9-4c09-aaa0-3979b74c2835" containerName="extract" Mar 12 21:19:25.086907 master-0 kubenswrapper[31456]: I0312 21:19:25.086429 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ecdea6c-95d9-4c09-aaa0-3979b74c2835" containerName="extract" Mar 12 21:19:25.086907 master-0 kubenswrapper[31456]: E0312 21:19:25.086441 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ecdea6c-95d9-4c09-aaa0-3979b74c2835" containerName="util" Mar 12 21:19:25.086907 master-0 kubenswrapper[31456]: I0312 
21:19:25.086449 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ecdea6c-95d9-4c09-aaa0-3979b74c2835" containerName="util" Mar 12 21:19:25.086907 master-0 kubenswrapper[31456]: I0312 21:19:25.086601 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ecdea6c-95d9-4c09-aaa0-3979b74c2835" containerName="extract" Mar 12 21:19:25.087212 master-0 kubenswrapper[31456]: I0312 21:19:25.087099 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-786ffc8dc6-tzvrv" Mar 12 21:19:25.088926 master-0 kubenswrapper[31456]: I0312 21:19:25.088878 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-metrics-cert" Mar 12 21:19:25.090263 master-0 kubenswrapper[31456]: I0312 21:19:25.090234 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"openshift-service-ca.crt" Mar 12 21:19:25.090461 master-0 kubenswrapper[31456]: I0312 21:19:25.090436 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"kube-root-ca.crt" Mar 12 21:19:25.092581 master-0 kubenswrapper[31456]: I0312 21:19:25.092551 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-service-cert" Mar 12 21:19:25.103000 master-0 kubenswrapper[31456]: I0312 21:19:25.102944 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-webhook-server-cert" Mar 12 21:19:25.120891 master-0 kubenswrapper[31456]: I0312 21:19:25.120831 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-786ffc8dc6-tzvrv"] Mar 12 21:19:25.195950 master-0 kubenswrapper[31456]: I0312 21:19:25.195896 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/89d06489-0ab7-4e98-9e2d-e03072f41f5f-apiservice-cert\") pod 
\"lvms-operator-786ffc8dc6-tzvrv\" (UID: \"89d06489-0ab7-4e98-9e2d-e03072f41f5f\") " pod="openshift-storage/lvms-operator-786ffc8dc6-tzvrv" Mar 12 21:19:25.196136 master-0 kubenswrapper[31456]: I0312 21:19:25.195972 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/89d06489-0ab7-4e98-9e2d-e03072f41f5f-webhook-cert\") pod \"lvms-operator-786ffc8dc6-tzvrv\" (UID: \"89d06489-0ab7-4e98-9e2d-e03072f41f5f\") " pod="openshift-storage/lvms-operator-786ffc8dc6-tzvrv" Mar 12 21:19:25.196136 master-0 kubenswrapper[31456]: I0312 21:19:25.195992 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ppf6\" (UniqueName: \"kubernetes.io/projected/89d06489-0ab7-4e98-9e2d-e03072f41f5f-kube-api-access-9ppf6\") pod \"lvms-operator-786ffc8dc6-tzvrv\" (UID: \"89d06489-0ab7-4e98-9e2d-e03072f41f5f\") " pod="openshift-storage/lvms-operator-786ffc8dc6-tzvrv" Mar 12 21:19:25.196136 master-0 kubenswrapper[31456]: I0312 21:19:25.196029 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/89d06489-0ab7-4e98-9e2d-e03072f41f5f-socket-dir\") pod \"lvms-operator-786ffc8dc6-tzvrv\" (UID: \"89d06489-0ab7-4e98-9e2d-e03072f41f5f\") " pod="openshift-storage/lvms-operator-786ffc8dc6-tzvrv" Mar 12 21:19:25.196331 master-0 kubenswrapper[31456]: I0312 21:19:25.196281 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/89d06489-0ab7-4e98-9e2d-e03072f41f5f-metrics-cert\") pod \"lvms-operator-786ffc8dc6-tzvrv\" (UID: \"89d06489-0ab7-4e98-9e2d-e03072f41f5f\") " pod="openshift-storage/lvms-operator-786ffc8dc6-tzvrv" Mar 12 21:19:25.298041 master-0 kubenswrapper[31456]: I0312 21:19:25.297977 31456 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/89d06489-0ab7-4e98-9e2d-e03072f41f5f-metrics-cert\") pod \"lvms-operator-786ffc8dc6-tzvrv\" (UID: \"89d06489-0ab7-4e98-9e2d-e03072f41f5f\") " pod="openshift-storage/lvms-operator-786ffc8dc6-tzvrv" Mar 12 21:19:25.298239 master-0 kubenswrapper[31456]: I0312 21:19:25.298065 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/89d06489-0ab7-4e98-9e2d-e03072f41f5f-apiservice-cert\") pod \"lvms-operator-786ffc8dc6-tzvrv\" (UID: \"89d06489-0ab7-4e98-9e2d-e03072f41f5f\") " pod="openshift-storage/lvms-operator-786ffc8dc6-tzvrv" Mar 12 21:19:25.298464 master-0 kubenswrapper[31456]: I0312 21:19:25.298401 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/89d06489-0ab7-4e98-9e2d-e03072f41f5f-webhook-cert\") pod \"lvms-operator-786ffc8dc6-tzvrv\" (UID: \"89d06489-0ab7-4e98-9e2d-e03072f41f5f\") " pod="openshift-storage/lvms-operator-786ffc8dc6-tzvrv" Mar 12 21:19:25.298504 master-0 kubenswrapper[31456]: I0312 21:19:25.298471 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ppf6\" (UniqueName: \"kubernetes.io/projected/89d06489-0ab7-4e98-9e2d-e03072f41f5f-kube-api-access-9ppf6\") pod \"lvms-operator-786ffc8dc6-tzvrv\" (UID: \"89d06489-0ab7-4e98-9e2d-e03072f41f5f\") " pod="openshift-storage/lvms-operator-786ffc8dc6-tzvrv" Mar 12 21:19:25.298560 master-0 kubenswrapper[31456]: I0312 21:19:25.298538 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/89d06489-0ab7-4e98-9e2d-e03072f41f5f-socket-dir\") pod \"lvms-operator-786ffc8dc6-tzvrv\" (UID: \"89d06489-0ab7-4e98-9e2d-e03072f41f5f\") " pod="openshift-storage/lvms-operator-786ffc8dc6-tzvrv" Mar 12 21:19:25.299560 master-0 
kubenswrapper[31456]: I0312 21:19:25.299513 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/89d06489-0ab7-4e98-9e2d-e03072f41f5f-socket-dir\") pod \"lvms-operator-786ffc8dc6-tzvrv\" (UID: \"89d06489-0ab7-4e98-9e2d-e03072f41f5f\") " pod="openshift-storage/lvms-operator-786ffc8dc6-tzvrv" Mar 12 21:19:25.302128 master-0 kubenswrapper[31456]: I0312 21:19:25.302012 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/89d06489-0ab7-4e98-9e2d-e03072f41f5f-metrics-cert\") pod \"lvms-operator-786ffc8dc6-tzvrv\" (UID: \"89d06489-0ab7-4e98-9e2d-e03072f41f5f\") " pod="openshift-storage/lvms-operator-786ffc8dc6-tzvrv" Mar 12 21:19:25.302717 master-0 kubenswrapper[31456]: I0312 21:19:25.302656 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/89d06489-0ab7-4e98-9e2d-e03072f41f5f-webhook-cert\") pod \"lvms-operator-786ffc8dc6-tzvrv\" (UID: \"89d06489-0ab7-4e98-9e2d-e03072f41f5f\") " pod="openshift-storage/lvms-operator-786ffc8dc6-tzvrv" Mar 12 21:19:25.303793 master-0 kubenswrapper[31456]: I0312 21:19:25.303750 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/89d06489-0ab7-4e98-9e2d-e03072f41f5f-apiservice-cert\") pod \"lvms-operator-786ffc8dc6-tzvrv\" (UID: \"89d06489-0ab7-4e98-9e2d-e03072f41f5f\") " pod="openshift-storage/lvms-operator-786ffc8dc6-tzvrv" Mar 12 21:19:25.317982 master-0 kubenswrapper[31456]: I0312 21:19:25.317923 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ppf6\" (UniqueName: \"kubernetes.io/projected/89d06489-0ab7-4e98-9e2d-e03072f41f5f-kube-api-access-9ppf6\") pod \"lvms-operator-786ffc8dc6-tzvrv\" (UID: \"89d06489-0ab7-4e98-9e2d-e03072f41f5f\") " pod="openshift-storage/lvms-operator-786ffc8dc6-tzvrv" Mar 
12 21:19:25.415641 master-0 kubenswrapper[31456]: I0312 21:19:25.415479 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-786ffc8dc6-tzvrv" Mar 12 21:19:25.834885 master-0 kubenswrapper[31456]: I0312 21:19:25.834483 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-786ffc8dc6-tzvrv"] Mar 12 21:19:25.837237 master-0 kubenswrapper[31456]: W0312 21:19:25.837200 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89d06489_0ab7_4e98_9e2d_e03072f41f5f.slice/crio-2e764a2e0f2716463178c9824058feafee9a3925d66e51bbc6fe53dc8125449c WatchSource:0}: Error finding container 2e764a2e0f2716463178c9824058feafee9a3925d66e51bbc6fe53dc8125449c: Status 404 returned error can't find the container with id 2e764a2e0f2716463178c9824058feafee9a3925d66e51bbc6fe53dc8125449c Mar 12 21:19:25.993615 master-0 kubenswrapper[31456]: I0312 21:19:25.993573 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-786ffc8dc6-tzvrv" event={"ID":"89d06489-0ab7-4e98-9e2d-e03072f41f5f","Type":"ContainerStarted","Data":"2e764a2e0f2716463178c9824058feafee9a3925d66e51bbc6fe53dc8125449c"} Mar 12 21:19:32.049656 master-0 kubenswrapper[31456]: I0312 21:19:32.049544 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-786ffc8dc6-tzvrv" event={"ID":"89d06489-0ab7-4e98-9e2d-e03072f41f5f","Type":"ContainerStarted","Data":"79dc67081d094f4174839b0c2ea221253c1b079fc473ea95317f717b14cfe7c0"} Mar 12 21:19:32.051239 master-0 kubenswrapper[31456]: I0312 21:19:32.051193 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/lvms-operator-786ffc8dc6-tzvrv" Mar 12 21:19:32.055280 master-0 kubenswrapper[31456]: I0312 21:19:32.055227 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-storage/lvms-operator-786ffc8dc6-tzvrv" Mar 12 21:19:32.097780 master-0 kubenswrapper[31456]: I0312 21:19:32.097597 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/lvms-operator-786ffc8dc6-tzvrv" podStartSLOduration=1.5431655499999999 podStartE2EDuration="7.097042421s" podCreationTimestamp="2026-03-12 21:19:25 +0000 UTC" firstStartedPulling="2026-03-12 21:19:25.842490024 +0000 UTC m=+626.917095352" lastFinishedPulling="2026-03-12 21:19:31.396366895 +0000 UTC m=+632.470972223" observedRunningTime="2026-03-12 21:19:32.083709796 +0000 UTC m=+633.158315134" watchObservedRunningTime="2026-03-12 21:19:32.097042421 +0000 UTC m=+633.171647829" Mar 12 21:19:36.792200 master-0 kubenswrapper[31456]: I0312 21:19:36.792061 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5h5gsf"] Mar 12 21:19:36.794449 master-0 kubenswrapper[31456]: I0312 21:19:36.794384 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5h5gsf" Mar 12 21:19:36.796600 master-0 kubenswrapper[31456]: I0312 21:19:36.796540 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vtgkl" Mar 12 21:19:36.810115 master-0 kubenswrapper[31456]: I0312 21:19:36.810035 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5h5gsf"] Mar 12 21:19:36.912060 master-0 kubenswrapper[31456]: I0312 21:19:36.911917 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b9107c97-11ed-40d8-9c4b-31b58abd6ad3-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5h5gsf\" (UID: \"b9107c97-11ed-40d8-9c4b-31b58abd6ad3\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5h5gsf" Mar 12 21:19:36.912060 master-0 kubenswrapper[31456]: I0312 21:19:36.912023 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzljq\" (UniqueName: \"kubernetes.io/projected/b9107c97-11ed-40d8-9c4b-31b58abd6ad3-kube-api-access-rzljq\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5h5gsf\" (UID: \"b9107c97-11ed-40d8-9c4b-31b58abd6ad3\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5h5gsf" Mar 12 21:19:36.912462 master-0 kubenswrapper[31456]: I0312 21:19:36.912187 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b9107c97-11ed-40d8-9c4b-31b58abd6ad3-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5h5gsf\" (UID: \"b9107c97-11ed-40d8-9c4b-31b58abd6ad3\") " 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5h5gsf" Mar 12 21:19:37.013891 master-0 kubenswrapper[31456]: I0312 21:19:37.013780 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b9107c97-11ed-40d8-9c4b-31b58abd6ad3-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5h5gsf\" (UID: \"b9107c97-11ed-40d8-9c4b-31b58abd6ad3\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5h5gsf" Mar 12 21:19:37.014179 master-0 kubenswrapper[31456]: I0312 21:19:37.014022 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b9107c97-11ed-40d8-9c4b-31b58abd6ad3-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5h5gsf\" (UID: \"b9107c97-11ed-40d8-9c4b-31b58abd6ad3\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5h5gsf" Mar 12 21:19:37.014179 master-0 kubenswrapper[31456]: I0312 21:19:37.014099 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzljq\" (UniqueName: \"kubernetes.io/projected/b9107c97-11ed-40d8-9c4b-31b58abd6ad3-kube-api-access-rzljq\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5h5gsf\" (UID: \"b9107c97-11ed-40d8-9c4b-31b58abd6ad3\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5h5gsf" Mar 12 21:19:37.014567 master-0 kubenswrapper[31456]: I0312 21:19:37.014510 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b9107c97-11ed-40d8-9c4b-31b58abd6ad3-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5h5gsf\" (UID: \"b9107c97-11ed-40d8-9c4b-31b58abd6ad3\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5h5gsf" Mar 12 
21:19:37.014766 master-0 kubenswrapper[31456]: I0312 21:19:37.014717 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b9107c97-11ed-40d8-9c4b-31b58abd6ad3-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5h5gsf\" (UID: \"b9107c97-11ed-40d8-9c4b-31b58abd6ad3\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5h5gsf" Mar 12 21:19:37.036779 master-0 kubenswrapper[31456]: I0312 21:19:37.036715 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzljq\" (UniqueName: \"kubernetes.io/projected/b9107c97-11ed-40d8-9c4b-31b58abd6ad3-kube-api-access-rzljq\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5h5gsf\" (UID: \"b9107c97-11ed-40d8-9c4b-31b58abd6ad3\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5h5gsf" Mar 12 21:19:37.118850 master-0 kubenswrapper[31456]: I0312 21:19:37.118768 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5h5gsf" Mar 12 21:19:37.641345 master-0 kubenswrapper[31456]: I0312 21:19:37.641270 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5h5gsf"] Mar 12 21:19:37.642404 master-0 kubenswrapper[31456]: W0312 21:19:37.642363 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9107c97_11ed_40d8_9c4b_31b58abd6ad3.slice/crio-3c44d42aef399df873554803e541afcb72b77caf9c4669b43a4649afa139f6f3 WatchSource:0}: Error finding container 3c44d42aef399df873554803e541afcb72b77caf9c4669b43a4649afa139f6f3: Status 404 returned error can't find the container with id 3c44d42aef399df873554803e541afcb72b77caf9c4669b43a4649afa139f6f3 Mar 12 21:19:37.778229 master-0 kubenswrapper[31456]: I0312 21:19:37.778151 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16l9v9"] Mar 12 21:19:37.780654 master-0 kubenswrapper[31456]: I0312 21:19:37.780585 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16l9v9" Mar 12 21:19:37.791878 master-0 kubenswrapper[31456]: I0312 21:19:37.791613 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16l9v9"] Mar 12 21:19:37.936322 master-0 kubenswrapper[31456]: I0312 21:19:37.936274 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b67658b3-22fd-49a7-a2c1-18b3206a7cbe-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16l9v9\" (UID: \"b67658b3-22fd-49a7-a2c1-18b3206a7cbe\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16l9v9" Mar 12 21:19:37.936793 master-0 kubenswrapper[31456]: I0312 21:19:37.936774 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ds4k\" (UniqueName: \"kubernetes.io/projected/b67658b3-22fd-49a7-a2c1-18b3206a7cbe-kube-api-access-6ds4k\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16l9v9\" (UID: \"b67658b3-22fd-49a7-a2c1-18b3206a7cbe\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16l9v9" Mar 12 21:19:37.936951 master-0 kubenswrapper[31456]: I0312 21:19:37.936927 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b67658b3-22fd-49a7-a2c1-18b3206a7cbe-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16l9v9\" (UID: \"b67658b3-22fd-49a7-a2c1-18b3206a7cbe\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16l9v9" Mar 12 21:19:38.044957 master-0 kubenswrapper[31456]: I0312 21:19:38.038118 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" 
(UniqueName: \"kubernetes.io/empty-dir/b67658b3-22fd-49a7-a2c1-18b3206a7cbe-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16l9v9\" (UID: \"b67658b3-22fd-49a7-a2c1-18b3206a7cbe\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16l9v9" Mar 12 21:19:38.044957 master-0 kubenswrapper[31456]: I0312 21:19:38.038764 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b67658b3-22fd-49a7-a2c1-18b3206a7cbe-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16l9v9\" (UID: \"b67658b3-22fd-49a7-a2c1-18b3206a7cbe\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16l9v9" Mar 12 21:19:38.044957 master-0 kubenswrapper[31456]: I0312 21:19:38.038867 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ds4k\" (UniqueName: \"kubernetes.io/projected/b67658b3-22fd-49a7-a2c1-18b3206a7cbe-kube-api-access-6ds4k\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16l9v9\" (UID: \"b67658b3-22fd-49a7-a2c1-18b3206a7cbe\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16l9v9" Mar 12 21:19:38.044957 master-0 kubenswrapper[31456]: I0312 21:19:38.039058 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b67658b3-22fd-49a7-a2c1-18b3206a7cbe-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16l9v9\" (UID: \"b67658b3-22fd-49a7-a2c1-18b3206a7cbe\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16l9v9" Mar 12 21:19:38.044957 master-0 kubenswrapper[31456]: I0312 21:19:38.039660 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b67658b3-22fd-49a7-a2c1-18b3206a7cbe-bundle\") pod 
\"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16l9v9\" (UID: \"b67658b3-22fd-49a7-a2c1-18b3206a7cbe\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16l9v9" Mar 12 21:19:38.058563 master-0 kubenswrapper[31456]: I0312 21:19:38.058510 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ds4k\" (UniqueName: \"kubernetes.io/projected/b67658b3-22fd-49a7-a2c1-18b3206a7cbe-kube-api-access-6ds4k\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16l9v9\" (UID: \"b67658b3-22fd-49a7-a2c1-18b3206a7cbe\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16l9v9" Mar 12 21:19:38.098038 master-0 kubenswrapper[31456]: I0312 21:19:38.097985 31456 generic.go:334] "Generic (PLEG): container finished" podID="b9107c97-11ed-40d8-9c4b-31b58abd6ad3" containerID="3b83582774abb5a6a32810e2499b1d269f96a35ac109f5be41e29d92dbdb94ae" exitCode=0 Mar 12 21:19:38.098038 master-0 kubenswrapper[31456]: I0312 21:19:38.098031 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5h5gsf" event={"ID":"b9107c97-11ed-40d8-9c4b-31b58abd6ad3","Type":"ContainerDied","Data":"3b83582774abb5a6a32810e2499b1d269f96a35ac109f5be41e29d92dbdb94ae"} Mar 12 21:19:38.098366 master-0 kubenswrapper[31456]: I0312 21:19:38.098057 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5h5gsf" event={"ID":"b9107c97-11ed-40d8-9c4b-31b58abd6ad3","Type":"ContainerStarted","Data":"3c44d42aef399df873554803e541afcb72b77caf9c4669b43a4649afa139f6f3"} Mar 12 21:19:38.109311 master-0 kubenswrapper[31456]: I0312 21:19:38.109204 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16l9v9" Mar 12 21:19:38.577917 master-0 kubenswrapper[31456]: I0312 21:19:38.577840 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874nt6vp"] Mar 12 21:19:38.579764 master-0 kubenswrapper[31456]: I0312 21:19:38.579722 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874nt6vp" Mar 12 21:19:38.599144 master-0 kubenswrapper[31456]: I0312 21:19:38.599098 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874nt6vp"] Mar 12 21:19:38.641410 master-0 kubenswrapper[31456]: I0312 21:19:38.641358 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16l9v9"] Mar 12 21:19:38.652020 master-0 kubenswrapper[31456]: I0312 21:19:38.651933 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwz46\" (UniqueName: \"kubernetes.io/projected/3ceb4b7a-4cdd-42d1-acec-484006010f69-kube-api-access-pwz46\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874nt6vp\" (UID: \"3ceb4b7a-4cdd-42d1-acec-484006010f69\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874nt6vp" Mar 12 21:19:38.652020 master-0 kubenswrapper[31456]: I0312 21:19:38.651993 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3ceb4b7a-4cdd-42d1-acec-484006010f69-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874nt6vp\" (UID: \"3ceb4b7a-4cdd-42d1-acec-484006010f69\") " 
pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874nt6vp" Mar 12 21:19:38.652100 master-0 kubenswrapper[31456]: I0312 21:19:38.652057 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3ceb4b7a-4cdd-42d1-acec-484006010f69-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874nt6vp\" (UID: \"3ceb4b7a-4cdd-42d1-acec-484006010f69\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874nt6vp" Mar 12 21:19:38.753626 master-0 kubenswrapper[31456]: I0312 21:19:38.753580 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3ceb4b7a-4cdd-42d1-acec-484006010f69-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874nt6vp\" (UID: \"3ceb4b7a-4cdd-42d1-acec-484006010f69\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874nt6vp" Mar 12 21:19:38.753765 master-0 kubenswrapper[31456]: I0312 21:19:38.753733 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwz46\" (UniqueName: \"kubernetes.io/projected/3ceb4b7a-4cdd-42d1-acec-484006010f69-kube-api-access-pwz46\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874nt6vp\" (UID: \"3ceb4b7a-4cdd-42d1-acec-484006010f69\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874nt6vp" Mar 12 21:19:38.753847 master-0 kubenswrapper[31456]: I0312 21:19:38.753783 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3ceb4b7a-4cdd-42d1-acec-484006010f69-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874nt6vp\" (UID: \"3ceb4b7a-4cdd-42d1-acec-484006010f69\") " 
pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874nt6vp" Mar 12 21:19:38.754875 master-0 kubenswrapper[31456]: I0312 21:19:38.754528 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3ceb4b7a-4cdd-42d1-acec-484006010f69-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874nt6vp\" (UID: \"3ceb4b7a-4cdd-42d1-acec-484006010f69\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874nt6vp" Mar 12 21:19:38.754875 master-0 kubenswrapper[31456]: I0312 21:19:38.754544 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3ceb4b7a-4cdd-42d1-acec-484006010f69-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874nt6vp\" (UID: \"3ceb4b7a-4cdd-42d1-acec-484006010f69\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874nt6vp" Mar 12 21:19:38.771332 master-0 kubenswrapper[31456]: I0312 21:19:38.771274 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwz46\" (UniqueName: \"kubernetes.io/projected/3ceb4b7a-4cdd-42d1-acec-484006010f69-kube-api-access-pwz46\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874nt6vp\" (UID: \"3ceb4b7a-4cdd-42d1-acec-484006010f69\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874nt6vp" Mar 12 21:19:38.897931 master-0 kubenswrapper[31456]: I0312 21:19:38.897765 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874nt6vp" Mar 12 21:19:39.111174 master-0 kubenswrapper[31456]: I0312 21:19:39.111073 31456 generic.go:334] "Generic (PLEG): container finished" podID="b67658b3-22fd-49a7-a2c1-18b3206a7cbe" containerID="16fd319e4e63ea176cdb022a8e0f8123a996270d170fcc657ac05cf821cdd1b2" exitCode=0 Mar 12 21:19:39.111174 master-0 kubenswrapper[31456]: I0312 21:19:39.111139 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16l9v9" event={"ID":"b67658b3-22fd-49a7-a2c1-18b3206a7cbe","Type":"ContainerDied","Data":"16fd319e4e63ea176cdb022a8e0f8123a996270d170fcc657ac05cf821cdd1b2"} Mar 12 21:19:39.111174 master-0 kubenswrapper[31456]: I0312 21:19:39.111174 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16l9v9" event={"ID":"b67658b3-22fd-49a7-a2c1-18b3206a7cbe","Type":"ContainerStarted","Data":"1f8cda4f3d7b7a6dfdf430225194d9b2c482cba0511d8bd981b35230dd9a16c5"} Mar 12 21:19:39.441859 master-0 kubenswrapper[31456]: I0312 21:19:39.441719 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874nt6vp"] Mar 12 21:19:39.456666 master-0 kubenswrapper[31456]: W0312 21:19:39.456307 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ceb4b7a_4cdd_42d1_acec_484006010f69.slice/crio-0497b5e9731fc5e20e8e1f32e0ee192b15ee7510f6548bc655b88912dde074e8 WatchSource:0}: Error finding container 0497b5e9731fc5e20e8e1f32e0ee192b15ee7510f6548bc655b88912dde074e8: Status 404 returned error can't find the container with id 0497b5e9731fc5e20e8e1f32e0ee192b15ee7510f6548bc655b88912dde074e8 Mar 12 21:19:40.123198 master-0 kubenswrapper[31456]: I0312 21:19:40.123108 31456 
generic.go:334] "Generic (PLEG): container finished" podID="3ceb4b7a-4cdd-42d1-acec-484006010f69" containerID="770be2e7dd109fe82c3f630612c1101ad86dd1e58a60dc23a26cc3e6ef4bf8e9" exitCode=0 Mar 12 21:19:40.124050 master-0 kubenswrapper[31456]: I0312 21:19:40.123196 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874nt6vp" event={"ID":"3ceb4b7a-4cdd-42d1-acec-484006010f69","Type":"ContainerDied","Data":"770be2e7dd109fe82c3f630612c1101ad86dd1e58a60dc23a26cc3e6ef4bf8e9"} Mar 12 21:19:40.124050 master-0 kubenswrapper[31456]: I0312 21:19:40.123246 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874nt6vp" event={"ID":"3ceb4b7a-4cdd-42d1-acec-484006010f69","Type":"ContainerStarted","Data":"0497b5e9731fc5e20e8e1f32e0ee192b15ee7510f6548bc655b88912dde074e8"} Mar 12 21:19:42.162983 master-0 kubenswrapper[31456]: I0312 21:19:42.162906 31456 generic.go:334] "Generic (PLEG): container finished" podID="b9107c97-11ed-40d8-9c4b-31b58abd6ad3" containerID="2f11d9cda9bd4c4cb1096c05c9c7ded5b82f93c0f2022bc60fe447407485d4cf" exitCode=0 Mar 12 21:19:42.163613 master-0 kubenswrapper[31456]: I0312 21:19:42.163045 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5h5gsf" event={"ID":"b9107c97-11ed-40d8-9c4b-31b58abd6ad3","Type":"ContainerDied","Data":"2f11d9cda9bd4c4cb1096c05c9c7ded5b82f93c0f2022bc60fe447407485d4cf"} Mar 12 21:19:42.167137 master-0 kubenswrapper[31456]: I0312 21:19:42.166947 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874nt6vp" event={"ID":"3ceb4b7a-4cdd-42d1-acec-484006010f69","Type":"ContainerStarted","Data":"a61e12ebaed8f44c595825afdce116f266491ac3192201ffd70abe4ca5713703"} Mar 12 21:19:42.173177 master-0 
kubenswrapper[31456]: I0312 21:19:42.173135 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16l9v9" event={"ID":"b67658b3-22fd-49a7-a2c1-18b3206a7cbe","Type":"ContainerStarted","Data":"7795ba004061fb3fa9c7ee120c6db7769b597e0fd74b577cee8e2926c6c2e3c7"} Mar 12 21:19:43.185164 master-0 kubenswrapper[31456]: I0312 21:19:43.185062 31456 generic.go:334] "Generic (PLEG): container finished" podID="b67658b3-22fd-49a7-a2c1-18b3206a7cbe" containerID="7795ba004061fb3fa9c7ee120c6db7769b597e0fd74b577cee8e2926c6c2e3c7" exitCode=0 Mar 12 21:19:43.185722 master-0 kubenswrapper[31456]: I0312 21:19:43.185111 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16l9v9" event={"ID":"b67658b3-22fd-49a7-a2c1-18b3206a7cbe","Type":"ContainerDied","Data":"7795ba004061fb3fa9c7ee120c6db7769b597e0fd74b577cee8e2926c6c2e3c7"} Mar 12 21:19:43.188448 master-0 kubenswrapper[31456]: I0312 21:19:43.188411 31456 generic.go:334] "Generic (PLEG): container finished" podID="b9107c97-11ed-40d8-9c4b-31b58abd6ad3" containerID="5234ae0be5b555a24f7e7f0f167ea47cb2d03b58a09e0e917f6a5d744773366f" exitCode=0 Mar 12 21:19:43.188524 master-0 kubenswrapper[31456]: I0312 21:19:43.188472 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5h5gsf" event={"ID":"b9107c97-11ed-40d8-9c4b-31b58abd6ad3","Type":"ContainerDied","Data":"5234ae0be5b555a24f7e7f0f167ea47cb2d03b58a09e0e917f6a5d744773366f"} Mar 12 21:19:43.192123 master-0 kubenswrapper[31456]: I0312 21:19:43.192083 31456 generic.go:334] "Generic (PLEG): container finished" podID="3ceb4b7a-4cdd-42d1-acec-484006010f69" containerID="a61e12ebaed8f44c595825afdce116f266491ac3192201ffd70abe4ca5713703" exitCode=0 Mar 12 21:19:43.192192 master-0 kubenswrapper[31456]: I0312 21:19:43.192132 31456 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874nt6vp" event={"ID":"3ceb4b7a-4cdd-42d1-acec-484006010f69","Type":"ContainerDied","Data":"a61e12ebaed8f44c595825afdce116f266491ac3192201ffd70abe4ca5713703"} Mar 12 21:19:44.206773 master-0 kubenswrapper[31456]: I0312 21:19:44.206604 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16l9v9" event={"ID":"b67658b3-22fd-49a7-a2c1-18b3206a7cbe","Type":"ContainerStarted","Data":"b052217879481ba5bfac2a24a8075967adb57071501268ff2c8462adb1fe82f8"} Mar 12 21:19:44.211420 master-0 kubenswrapper[31456]: I0312 21:19:44.211363 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874nt6vp" event={"ID":"3ceb4b7a-4cdd-42d1-acec-484006010f69","Type":"ContainerStarted","Data":"9a6c9df6b0fccfc85c66893898afc50d151938602f0e6f93349acf3a06954228"} Mar 12 21:19:44.342649 master-0 kubenswrapper[31456]: I0312 21:19:44.342173 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16l9v9" podStartSLOduration=4.540078014 podStartE2EDuration="7.342147852s" podCreationTimestamp="2026-03-12 21:19:37 +0000 UTC" firstStartedPulling="2026-03-12 21:19:39.11323447 +0000 UTC m=+640.187839798" lastFinishedPulling="2026-03-12 21:19:41.915304298 +0000 UTC m=+642.989909636" observedRunningTime="2026-03-12 21:19:44.340174453 +0000 UTC m=+645.414779791" watchObservedRunningTime="2026-03-12 21:19:44.342147852 +0000 UTC m=+645.416753190" Mar 12 21:19:44.466167 master-0 kubenswrapper[31456]: I0312 21:19:44.464373 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874nt6vp" podStartSLOduration=4.67277094 
podStartE2EDuration="6.464350853s" podCreationTimestamp="2026-03-12 21:19:38 +0000 UTC" firstStartedPulling="2026-03-12 21:19:40.124629887 +0000 UTC m=+641.199235215" lastFinishedPulling="2026-03-12 21:19:41.91620979 +0000 UTC m=+642.990815128" observedRunningTime="2026-03-12 21:19:44.457988618 +0000 UTC m=+645.532593946" watchObservedRunningTime="2026-03-12 21:19:44.464350853 +0000 UTC m=+645.538956171" Mar 12 21:19:44.685297 master-0 kubenswrapper[31456]: I0312 21:19:44.685217 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5h5gsf" Mar 12 21:19:44.852589 master-0 kubenswrapper[31456]: I0312 21:19:44.852427 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b9107c97-11ed-40d8-9c4b-31b58abd6ad3-util\") pod \"b9107c97-11ed-40d8-9c4b-31b58abd6ad3\" (UID: \"b9107c97-11ed-40d8-9c4b-31b58abd6ad3\") " Mar 12 21:19:44.852589 master-0 kubenswrapper[31456]: I0312 21:19:44.852580 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b9107c97-11ed-40d8-9c4b-31b58abd6ad3-bundle\") pod \"b9107c97-11ed-40d8-9c4b-31b58abd6ad3\" (UID: \"b9107c97-11ed-40d8-9c4b-31b58abd6ad3\") " Mar 12 21:19:44.853328 master-0 kubenswrapper[31456]: I0312 21:19:44.853293 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9107c97-11ed-40d8-9c4b-31b58abd6ad3-bundle" (OuterVolumeSpecName: "bundle") pod "b9107c97-11ed-40d8-9c4b-31b58abd6ad3" (UID: "b9107c97-11ed-40d8-9c4b-31b58abd6ad3"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 21:19:44.853497 master-0 kubenswrapper[31456]: I0312 21:19:44.853358 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzljq\" (UniqueName: \"kubernetes.io/projected/b9107c97-11ed-40d8-9c4b-31b58abd6ad3-kube-api-access-rzljq\") pod \"b9107c97-11ed-40d8-9c4b-31b58abd6ad3\" (UID: \"b9107c97-11ed-40d8-9c4b-31b58abd6ad3\") " Mar 12 21:19:44.854004 master-0 kubenswrapper[31456]: I0312 21:19:44.853967 31456 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b9107c97-11ed-40d8-9c4b-31b58abd6ad3-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:19:44.858868 master-0 kubenswrapper[31456]: I0312 21:19:44.856170 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9107c97-11ed-40d8-9c4b-31b58abd6ad3-kube-api-access-rzljq" (OuterVolumeSpecName: "kube-api-access-rzljq") pod "b9107c97-11ed-40d8-9c4b-31b58abd6ad3" (UID: "b9107c97-11ed-40d8-9c4b-31b58abd6ad3"). InnerVolumeSpecName "kube-api-access-rzljq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:19:44.860184 master-0 kubenswrapper[31456]: I0312 21:19:44.860147 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9107c97-11ed-40d8-9c4b-31b58abd6ad3-util" (OuterVolumeSpecName: "util") pod "b9107c97-11ed-40d8-9c4b-31b58abd6ad3" (UID: "b9107c97-11ed-40d8-9c4b-31b58abd6ad3"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 21:19:44.954613 master-0 kubenswrapper[31456]: I0312 21:19:44.954550 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzljq\" (UniqueName: \"kubernetes.io/projected/b9107c97-11ed-40d8-9c4b-31b58abd6ad3-kube-api-access-rzljq\") on node \"master-0\" DevicePath \"\"" Mar 12 21:19:44.954613 master-0 kubenswrapper[31456]: I0312 21:19:44.954596 31456 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b9107c97-11ed-40d8-9c4b-31b58abd6ad3-util\") on node \"master-0\" DevicePath \"\"" Mar 12 21:19:45.223351 master-0 kubenswrapper[31456]: I0312 21:19:45.223297 31456 generic.go:334] "Generic (PLEG): container finished" podID="b67658b3-22fd-49a7-a2c1-18b3206a7cbe" containerID="b052217879481ba5bfac2a24a8075967adb57071501268ff2c8462adb1fe82f8" exitCode=0 Mar 12 21:19:45.223351 master-0 kubenswrapper[31456]: I0312 21:19:45.223368 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16l9v9" event={"ID":"b67658b3-22fd-49a7-a2c1-18b3206a7cbe","Type":"ContainerDied","Data":"b052217879481ba5bfac2a24a8075967adb57071501268ff2c8462adb1fe82f8"} Mar 12 21:19:45.225999 master-0 kubenswrapper[31456]: I0312 21:19:45.225930 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5h5gsf" event={"ID":"b9107c97-11ed-40d8-9c4b-31b58abd6ad3","Type":"ContainerDied","Data":"3c44d42aef399df873554803e541afcb72b77caf9c4669b43a4649afa139f6f3"} Mar 12 21:19:45.225999 master-0 kubenswrapper[31456]: I0312 21:19:45.225969 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c44d42aef399df873554803e541afcb72b77caf9c4669b43a4649afa139f6f3" Mar 12 21:19:45.226286 master-0 kubenswrapper[31456]: I0312 21:19:45.226219 31456 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5h5gsf" Mar 12 21:19:45.230984 master-0 kubenswrapper[31456]: I0312 21:19:45.230922 31456 generic.go:334] "Generic (PLEG): container finished" podID="3ceb4b7a-4cdd-42d1-acec-484006010f69" containerID="9a6c9df6b0fccfc85c66893898afc50d151938602f0e6f93349acf3a06954228" exitCode=0 Mar 12 21:19:45.231139 master-0 kubenswrapper[31456]: I0312 21:19:45.230978 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874nt6vp" event={"ID":"3ceb4b7a-4cdd-42d1-acec-484006010f69","Type":"ContainerDied","Data":"9a6c9df6b0fccfc85c66893898afc50d151938602f0e6f93349acf3a06954228"} Mar 12 21:19:46.415509 master-0 kubenswrapper[31456]: I0312 21:19:46.415325 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljpp7"] Mar 12 21:19:46.417171 master-0 kubenswrapper[31456]: E0312 21:19:46.417111 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9107c97-11ed-40d8-9c4b-31b58abd6ad3" containerName="extract" Mar 12 21:19:46.417171 master-0 kubenswrapper[31456]: I0312 21:19:46.417172 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9107c97-11ed-40d8-9c4b-31b58abd6ad3" containerName="extract" Mar 12 21:19:46.417315 master-0 kubenswrapper[31456]: E0312 21:19:46.417228 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9107c97-11ed-40d8-9c4b-31b58abd6ad3" containerName="pull" Mar 12 21:19:46.417315 master-0 kubenswrapper[31456]: I0312 21:19:46.417242 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9107c97-11ed-40d8-9c4b-31b58abd6ad3" containerName="pull" Mar 12 21:19:46.417315 master-0 kubenswrapper[31456]: E0312 21:19:46.417298 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9107c97-11ed-40d8-9c4b-31b58abd6ad3" 
containerName="util" Mar 12 21:19:46.417315 master-0 kubenswrapper[31456]: I0312 21:19:46.417315 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9107c97-11ed-40d8-9c4b-31b58abd6ad3" containerName="util" Mar 12 21:19:46.418257 master-0 kubenswrapper[31456]: I0312 21:19:46.418227 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9107c97-11ed-40d8-9c4b-31b58abd6ad3" containerName="extract" Mar 12 21:19:46.424583 master-0 kubenswrapper[31456]: I0312 21:19:46.423459 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljpp7" Mar 12 21:19:46.450839 master-0 kubenswrapper[31456]: I0312 21:19:46.450767 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljpp7"] Mar 12 21:19:46.587523 master-0 kubenswrapper[31456]: I0312 21:19:46.587310 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c802f590-bb97-4ebb-a5b0-8fcacaecc2e5-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljpp7\" (UID: \"c802f590-bb97-4ebb-a5b0-8fcacaecc2e5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljpp7" Mar 12 21:19:46.588032 master-0 kubenswrapper[31456]: I0312 21:19:46.587696 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmhh6\" (UniqueName: \"kubernetes.io/projected/c802f590-bb97-4ebb-a5b0-8fcacaecc2e5-kube-api-access-mmhh6\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljpp7\" (UID: \"c802f590-bb97-4ebb-a5b0-8fcacaecc2e5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljpp7" Mar 12 21:19:46.588032 master-0 kubenswrapper[31456]: I0312 21:19:46.587981 31456 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c802f590-bb97-4ebb-a5b0-8fcacaecc2e5-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljpp7\" (UID: \"c802f590-bb97-4ebb-a5b0-8fcacaecc2e5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljpp7" Mar 12 21:19:46.689046 master-0 kubenswrapper[31456]: I0312 21:19:46.689005 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c802f590-bb97-4ebb-a5b0-8fcacaecc2e5-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljpp7\" (UID: \"c802f590-bb97-4ebb-a5b0-8fcacaecc2e5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljpp7" Mar 12 21:19:46.689279 master-0 kubenswrapper[31456]: I0312 21:19:46.689261 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmhh6\" (UniqueName: \"kubernetes.io/projected/c802f590-bb97-4ebb-a5b0-8fcacaecc2e5-kube-api-access-mmhh6\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljpp7\" (UID: \"c802f590-bb97-4ebb-a5b0-8fcacaecc2e5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljpp7" Mar 12 21:19:46.689407 master-0 kubenswrapper[31456]: I0312 21:19:46.689392 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c802f590-bb97-4ebb-a5b0-8fcacaecc2e5-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljpp7\" (UID: \"c802f590-bb97-4ebb-a5b0-8fcacaecc2e5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljpp7" Mar 12 21:19:46.689495 master-0 kubenswrapper[31456]: I0312 21:19:46.689473 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" 
(UniqueName: \"kubernetes.io/empty-dir/c802f590-bb97-4ebb-a5b0-8fcacaecc2e5-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljpp7\" (UID: \"c802f590-bb97-4ebb-a5b0-8fcacaecc2e5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljpp7" Mar 12 21:19:46.689872 master-0 kubenswrapper[31456]: I0312 21:19:46.689841 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c802f590-bb97-4ebb-a5b0-8fcacaecc2e5-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljpp7\" (UID: \"c802f590-bb97-4ebb-a5b0-8fcacaecc2e5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljpp7" Mar 12 21:19:46.709844 master-0 kubenswrapper[31456]: I0312 21:19:46.709787 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmhh6\" (UniqueName: \"kubernetes.io/projected/c802f590-bb97-4ebb-a5b0-8fcacaecc2e5-kube-api-access-mmhh6\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljpp7\" (UID: \"c802f590-bb97-4ebb-a5b0-8fcacaecc2e5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljpp7" Mar 12 21:19:46.752397 master-0 kubenswrapper[31456]: I0312 21:19:46.752315 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16l9v9" Mar 12 21:19:46.760267 master-0 kubenswrapper[31456]: I0312 21:19:46.759996 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874nt6vp" Mar 12 21:19:46.760713 master-0 kubenswrapper[31456]: I0312 21:19:46.760661 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljpp7" Mar 12 21:19:46.892278 master-0 kubenswrapper[31456]: I0312 21:19:46.892231 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ds4k\" (UniqueName: \"kubernetes.io/projected/b67658b3-22fd-49a7-a2c1-18b3206a7cbe-kube-api-access-6ds4k\") pod \"b67658b3-22fd-49a7-a2c1-18b3206a7cbe\" (UID: \"b67658b3-22fd-49a7-a2c1-18b3206a7cbe\") " Mar 12 21:19:46.892603 master-0 kubenswrapper[31456]: I0312 21:19:46.892581 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3ceb4b7a-4cdd-42d1-acec-484006010f69-bundle\") pod \"3ceb4b7a-4cdd-42d1-acec-484006010f69\" (UID: \"3ceb4b7a-4cdd-42d1-acec-484006010f69\") " Mar 12 21:19:46.892842 master-0 kubenswrapper[31456]: I0312 21:19:46.892800 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b67658b3-22fd-49a7-a2c1-18b3206a7cbe-bundle\") pod \"b67658b3-22fd-49a7-a2c1-18b3206a7cbe\" (UID: \"b67658b3-22fd-49a7-a2c1-18b3206a7cbe\") " Mar 12 21:19:46.892995 master-0 kubenswrapper[31456]: I0312 21:19:46.892977 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b67658b3-22fd-49a7-a2c1-18b3206a7cbe-util\") pod \"b67658b3-22fd-49a7-a2c1-18b3206a7cbe\" (UID: \"b67658b3-22fd-49a7-a2c1-18b3206a7cbe\") " Mar 12 21:19:46.893124 master-0 kubenswrapper[31456]: I0312 21:19:46.893104 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwz46\" (UniqueName: \"kubernetes.io/projected/3ceb4b7a-4cdd-42d1-acec-484006010f69-kube-api-access-pwz46\") pod \"3ceb4b7a-4cdd-42d1-acec-484006010f69\" (UID: \"3ceb4b7a-4cdd-42d1-acec-484006010f69\") " Mar 12 21:19:46.893244 master-0 kubenswrapper[31456]: I0312 
21:19:46.893227 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3ceb4b7a-4cdd-42d1-acec-484006010f69-util\") pod \"3ceb4b7a-4cdd-42d1-acec-484006010f69\" (UID: \"3ceb4b7a-4cdd-42d1-acec-484006010f69\") " Mar 12 21:19:46.893660 master-0 kubenswrapper[31456]: I0312 21:19:46.893286 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ceb4b7a-4cdd-42d1-acec-484006010f69-bundle" (OuterVolumeSpecName: "bundle") pod "3ceb4b7a-4cdd-42d1-acec-484006010f69" (UID: "3ceb4b7a-4cdd-42d1-acec-484006010f69"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 21:19:46.895657 master-0 kubenswrapper[31456]: I0312 21:19:46.895603 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b67658b3-22fd-49a7-a2c1-18b3206a7cbe-bundle" (OuterVolumeSpecName: "bundle") pod "b67658b3-22fd-49a7-a2c1-18b3206a7cbe" (UID: "b67658b3-22fd-49a7-a2c1-18b3206a7cbe"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 21:19:46.900252 master-0 kubenswrapper[31456]: I0312 21:19:46.900182 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ceb4b7a-4cdd-42d1-acec-484006010f69-kube-api-access-pwz46" (OuterVolumeSpecName: "kube-api-access-pwz46") pod "3ceb4b7a-4cdd-42d1-acec-484006010f69" (UID: "3ceb4b7a-4cdd-42d1-acec-484006010f69"). InnerVolumeSpecName "kube-api-access-pwz46". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:19:46.901926 master-0 kubenswrapper[31456]: I0312 21:19:46.901844 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b67658b3-22fd-49a7-a2c1-18b3206a7cbe-kube-api-access-6ds4k" (OuterVolumeSpecName: "kube-api-access-6ds4k") pod "b67658b3-22fd-49a7-a2c1-18b3206a7cbe" (UID: "b67658b3-22fd-49a7-a2c1-18b3206a7cbe"). InnerVolumeSpecName "kube-api-access-6ds4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:19:46.908007 master-0 kubenswrapper[31456]: I0312 21:19:46.907163 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ceb4b7a-4cdd-42d1-acec-484006010f69-util" (OuterVolumeSpecName: "util") pod "3ceb4b7a-4cdd-42d1-acec-484006010f69" (UID: "3ceb4b7a-4cdd-42d1-acec-484006010f69"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 21:19:46.911430 master-0 kubenswrapper[31456]: I0312 21:19:46.911343 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b67658b3-22fd-49a7-a2c1-18b3206a7cbe-util" (OuterVolumeSpecName: "util") pod "b67658b3-22fd-49a7-a2c1-18b3206a7cbe" (UID: "b67658b3-22fd-49a7-a2c1-18b3206a7cbe"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 21:19:46.995522 master-0 kubenswrapper[31456]: I0312 21:19:46.995415 31456 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b67658b3-22fd-49a7-a2c1-18b3206a7cbe-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:19:46.995522 master-0 kubenswrapper[31456]: I0312 21:19:46.995487 31456 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b67658b3-22fd-49a7-a2c1-18b3206a7cbe-util\") on node \"master-0\" DevicePath \"\"" Mar 12 21:19:46.995522 master-0 kubenswrapper[31456]: I0312 21:19:46.995499 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwz46\" (UniqueName: \"kubernetes.io/projected/3ceb4b7a-4cdd-42d1-acec-484006010f69-kube-api-access-pwz46\") on node \"master-0\" DevicePath \"\"" Mar 12 21:19:46.995522 master-0 kubenswrapper[31456]: I0312 21:19:46.995512 31456 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3ceb4b7a-4cdd-42d1-acec-484006010f69-util\") on node \"master-0\" DevicePath \"\"" Mar 12 21:19:46.995522 master-0 kubenswrapper[31456]: I0312 21:19:46.995522 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ds4k\" (UniqueName: \"kubernetes.io/projected/b67658b3-22fd-49a7-a2c1-18b3206a7cbe-kube-api-access-6ds4k\") on node \"master-0\" DevicePath \"\"" Mar 12 21:19:46.995522 master-0 kubenswrapper[31456]: I0312 21:19:46.995531 31456 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3ceb4b7a-4cdd-42d1-acec-484006010f69-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:19:47.212632 master-0 kubenswrapper[31456]: I0312 21:19:47.212552 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljpp7"] Mar 12 21:19:47.222145 master-0 
kubenswrapper[31456]: W0312 21:19:47.222066 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc802f590_bb97_4ebb_a5b0_8fcacaecc2e5.slice/crio-73488d27fbe6d63031a1fb71075890d0de70df725b8d65d2792cc9f19f0e0e64 WatchSource:0}: Error finding container 73488d27fbe6d63031a1fb71075890d0de70df725b8d65d2792cc9f19f0e0e64: Status 404 returned error can't find the container with id 73488d27fbe6d63031a1fb71075890d0de70df725b8d65d2792cc9f19f0e0e64 Mar 12 21:19:47.260764 master-0 kubenswrapper[31456]: I0312 21:19:47.260301 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16l9v9" event={"ID":"b67658b3-22fd-49a7-a2c1-18b3206a7cbe","Type":"ContainerDied","Data":"1f8cda4f3d7b7a6dfdf430225194d9b2c482cba0511d8bd981b35230dd9a16c5"} Mar 12 21:19:47.260764 master-0 kubenswrapper[31456]: I0312 21:19:47.260354 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f8cda4f3d7b7a6dfdf430225194d9b2c482cba0511d8bd981b35230dd9a16c5" Mar 12 21:19:47.260764 master-0 kubenswrapper[31456]: I0312 21:19:47.260414 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16l9v9" Mar 12 21:19:47.262440 master-0 kubenswrapper[31456]: I0312 21:19:47.262376 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljpp7" event={"ID":"c802f590-bb97-4ebb-a5b0-8fcacaecc2e5","Type":"ContainerStarted","Data":"73488d27fbe6d63031a1fb71075890d0de70df725b8d65d2792cc9f19f0e0e64"} Mar 12 21:19:47.265387 master-0 kubenswrapper[31456]: I0312 21:19:47.265346 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874nt6vp" event={"ID":"3ceb4b7a-4cdd-42d1-acec-484006010f69","Type":"ContainerDied","Data":"0497b5e9731fc5e20e8e1f32e0ee192b15ee7510f6548bc655b88912dde074e8"} Mar 12 21:19:47.265387 master-0 kubenswrapper[31456]: I0312 21:19:47.265371 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0497b5e9731fc5e20e8e1f32e0ee192b15ee7510f6548bc655b88912dde074e8" Mar 12 21:19:47.265654 master-0 kubenswrapper[31456]: I0312 21:19:47.265418 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874nt6vp" Mar 12 21:19:48.272515 master-0 kubenswrapper[31456]: I0312 21:19:48.272460 31456 generic.go:334] "Generic (PLEG): container finished" podID="c802f590-bb97-4ebb-a5b0-8fcacaecc2e5" containerID="7590c633d3cb099996a6d7841c580094ab640d72d03cefce900b094e55d36384" exitCode=0 Mar 12 21:19:48.272515 master-0 kubenswrapper[31456]: I0312 21:19:48.272508 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljpp7" event={"ID":"c802f590-bb97-4ebb-a5b0-8fcacaecc2e5","Type":"ContainerDied","Data":"7590c633d3cb099996a6d7841c580094ab640d72d03cefce900b094e55d36384"} Mar 12 21:19:50.302718 master-0 kubenswrapper[31456]: I0312 21:19:50.302618 31456 generic.go:334] "Generic (PLEG): container finished" podID="c802f590-bb97-4ebb-a5b0-8fcacaecc2e5" containerID="e1a17f568c216e395765b606c757769ba087c64364193c8b368cec2d7b04c8cf" exitCode=0 Mar 12 21:19:50.303708 master-0 kubenswrapper[31456]: I0312 21:19:50.302692 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljpp7" event={"ID":"c802f590-bb97-4ebb-a5b0-8fcacaecc2e5","Type":"ContainerDied","Data":"e1a17f568c216e395765b606c757769ba087c64364193c8b368cec2d7b04c8cf"} Mar 12 21:19:51.318527 master-0 kubenswrapper[31456]: I0312 21:19:51.318469 31456 generic.go:334] "Generic (PLEG): container finished" podID="c802f590-bb97-4ebb-a5b0-8fcacaecc2e5" containerID="76ccba5fb140913986e1548d161f0938f534485bd6384b23356ba051a13b082b" exitCode=0 Mar 12 21:19:51.318527 master-0 kubenswrapper[31456]: I0312 21:19:51.318523 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljpp7" 
event={"ID":"c802f590-bb97-4ebb-a5b0-8fcacaecc2e5","Type":"ContainerDied","Data":"76ccba5fb140913986e1548d161f0938f534485bd6384b23356ba051a13b082b"} Mar 12 21:19:52.056689 master-0 kubenswrapper[31456]: I0312 21:19:52.056606 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-g7qhk"] Mar 12 21:19:52.056997 master-0 kubenswrapper[31456]: E0312 21:19:52.056964 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ceb4b7a-4cdd-42d1-acec-484006010f69" containerName="extract" Mar 12 21:19:52.056997 master-0 kubenswrapper[31456]: I0312 21:19:52.056980 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ceb4b7a-4cdd-42d1-acec-484006010f69" containerName="extract" Mar 12 21:19:52.057107 master-0 kubenswrapper[31456]: E0312 21:19:52.057010 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b67658b3-22fd-49a7-a2c1-18b3206a7cbe" containerName="pull" Mar 12 21:19:52.057107 master-0 kubenswrapper[31456]: I0312 21:19:52.057019 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="b67658b3-22fd-49a7-a2c1-18b3206a7cbe" containerName="pull" Mar 12 21:19:52.057107 master-0 kubenswrapper[31456]: E0312 21:19:52.057033 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ceb4b7a-4cdd-42d1-acec-484006010f69" containerName="pull" Mar 12 21:19:52.057107 master-0 kubenswrapper[31456]: I0312 21:19:52.057041 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ceb4b7a-4cdd-42d1-acec-484006010f69" containerName="pull" Mar 12 21:19:52.057107 master-0 kubenswrapper[31456]: E0312 21:19:52.057053 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ceb4b7a-4cdd-42d1-acec-484006010f69" containerName="util" Mar 12 21:19:52.057107 master-0 kubenswrapper[31456]: I0312 21:19:52.057061 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ceb4b7a-4cdd-42d1-acec-484006010f69" containerName="util" Mar 12 
21:19:52.057107 master-0 kubenswrapper[31456]: E0312 21:19:52.057082 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b67658b3-22fd-49a7-a2c1-18b3206a7cbe" containerName="extract" Mar 12 21:19:52.057107 master-0 kubenswrapper[31456]: I0312 21:19:52.057090 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="b67658b3-22fd-49a7-a2c1-18b3206a7cbe" containerName="extract" Mar 12 21:19:52.057107 master-0 kubenswrapper[31456]: E0312 21:19:52.057101 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b67658b3-22fd-49a7-a2c1-18b3206a7cbe" containerName="util" Mar 12 21:19:52.057107 master-0 kubenswrapper[31456]: I0312 21:19:52.057108 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="b67658b3-22fd-49a7-a2c1-18b3206a7cbe" containerName="util" Mar 12 21:19:52.057506 master-0 kubenswrapper[31456]: I0312 21:19:52.057287 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="b67658b3-22fd-49a7-a2c1-18b3206a7cbe" containerName="extract" Mar 12 21:19:52.057506 master-0 kubenswrapper[31456]: I0312 21:19:52.057322 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ceb4b7a-4cdd-42d1-acec-484006010f69" containerName="extract" Mar 12 21:19:52.057915 master-0 kubenswrapper[31456]: I0312 21:19:52.057880 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-g7qhk" Mar 12 21:19:52.060343 master-0 kubenswrapper[31456]: I0312 21:19:52.060279 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Mar 12 21:19:52.061366 master-0 kubenswrapper[31456]: I0312 21:19:52.061324 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Mar 12 21:19:52.102375 master-0 kubenswrapper[31456]: I0312 21:19:52.102318 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-g7qhk"] Mar 12 21:19:52.186700 master-0 kubenswrapper[31456]: I0312 21:19:52.186581 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/589a88d1-7c6d-4fd1-bbe0-39b2d1830238-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-g7qhk\" (UID: \"589a88d1-7c6d-4fd1-bbe0-39b2d1830238\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-g7qhk" Mar 12 21:19:52.186935 master-0 kubenswrapper[31456]: I0312 21:19:52.186779 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blw8c\" (UniqueName: \"kubernetes.io/projected/589a88d1-7c6d-4fd1-bbe0-39b2d1830238-kube-api-access-blw8c\") pod \"cert-manager-operator-controller-manager-66c8bdd694-g7qhk\" (UID: \"589a88d1-7c6d-4fd1-bbe0-39b2d1830238\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-g7qhk" Mar 12 21:19:52.288171 master-0 kubenswrapper[31456]: I0312 21:19:52.288093 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blw8c\" (UniqueName: \"kubernetes.io/projected/589a88d1-7c6d-4fd1-bbe0-39b2d1830238-kube-api-access-blw8c\") pod 
\"cert-manager-operator-controller-manager-66c8bdd694-g7qhk\" (UID: \"589a88d1-7c6d-4fd1-bbe0-39b2d1830238\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-g7qhk" Mar 12 21:19:52.288419 master-0 kubenswrapper[31456]: I0312 21:19:52.288358 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/589a88d1-7c6d-4fd1-bbe0-39b2d1830238-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-g7qhk\" (UID: \"589a88d1-7c6d-4fd1-bbe0-39b2d1830238\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-g7qhk" Mar 12 21:19:52.288979 master-0 kubenswrapper[31456]: I0312 21:19:52.288943 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/589a88d1-7c6d-4fd1-bbe0-39b2d1830238-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-g7qhk\" (UID: \"589a88d1-7c6d-4fd1-bbe0-39b2d1830238\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-g7qhk" Mar 12 21:19:52.319422 master-0 kubenswrapper[31456]: I0312 21:19:52.319349 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blw8c\" (UniqueName: \"kubernetes.io/projected/589a88d1-7c6d-4fd1-bbe0-39b2d1830238-kube-api-access-blw8c\") pod \"cert-manager-operator-controller-manager-66c8bdd694-g7qhk\" (UID: \"589a88d1-7c6d-4fd1-bbe0-39b2d1830238\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-g7qhk" Mar 12 21:19:52.373993 master-0 kubenswrapper[31456]: I0312 21:19:52.373835 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-g7qhk" Mar 12 21:19:52.679831 master-0 kubenswrapper[31456]: I0312 21:19:52.679031 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljpp7" Mar 12 21:19:52.797829 master-0 kubenswrapper[31456]: I0312 21:19:52.797323 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c802f590-bb97-4ebb-a5b0-8fcacaecc2e5-util\") pod \"c802f590-bb97-4ebb-a5b0-8fcacaecc2e5\" (UID: \"c802f590-bb97-4ebb-a5b0-8fcacaecc2e5\") " Mar 12 21:19:52.797829 master-0 kubenswrapper[31456]: I0312 21:19:52.797473 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c802f590-bb97-4ebb-a5b0-8fcacaecc2e5-bundle\") pod \"c802f590-bb97-4ebb-a5b0-8fcacaecc2e5\" (UID: \"c802f590-bb97-4ebb-a5b0-8fcacaecc2e5\") " Mar 12 21:19:52.797829 master-0 kubenswrapper[31456]: I0312 21:19:52.797544 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmhh6\" (UniqueName: \"kubernetes.io/projected/c802f590-bb97-4ebb-a5b0-8fcacaecc2e5-kube-api-access-mmhh6\") pod \"c802f590-bb97-4ebb-a5b0-8fcacaecc2e5\" (UID: \"c802f590-bb97-4ebb-a5b0-8fcacaecc2e5\") " Mar 12 21:19:52.806782 master-0 kubenswrapper[31456]: I0312 21:19:52.799972 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c802f590-bb97-4ebb-a5b0-8fcacaecc2e5-bundle" (OuterVolumeSpecName: "bundle") pod "c802f590-bb97-4ebb-a5b0-8fcacaecc2e5" (UID: "c802f590-bb97-4ebb-a5b0-8fcacaecc2e5"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 21:19:52.807010 master-0 kubenswrapper[31456]: I0312 21:19:52.806793 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c802f590-bb97-4ebb-a5b0-8fcacaecc2e5-kube-api-access-mmhh6" (OuterVolumeSpecName: "kube-api-access-mmhh6") pod "c802f590-bb97-4ebb-a5b0-8fcacaecc2e5" (UID: "c802f590-bb97-4ebb-a5b0-8fcacaecc2e5"). InnerVolumeSpecName "kube-api-access-mmhh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:19:52.841833 master-0 kubenswrapper[31456]: I0312 21:19:52.827608 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c802f590-bb97-4ebb-a5b0-8fcacaecc2e5-util" (OuterVolumeSpecName: "util") pod "c802f590-bb97-4ebb-a5b0-8fcacaecc2e5" (UID: "c802f590-bb97-4ebb-a5b0-8fcacaecc2e5"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 21:19:52.857064 master-0 kubenswrapper[31456]: I0312 21:19:52.855172 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-g7qhk"] Mar 12 21:19:52.902452 master-0 kubenswrapper[31456]: I0312 21:19:52.901936 31456 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c802f590-bb97-4ebb-a5b0-8fcacaecc2e5-util\") on node \"master-0\" DevicePath \"\"" Mar 12 21:19:52.902452 master-0 kubenswrapper[31456]: I0312 21:19:52.901981 31456 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c802f590-bb97-4ebb-a5b0-8fcacaecc2e5-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:19:52.902452 master-0 kubenswrapper[31456]: I0312 21:19:52.901998 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmhh6\" (UniqueName: \"kubernetes.io/projected/c802f590-bb97-4ebb-a5b0-8fcacaecc2e5-kube-api-access-mmhh6\") on node \"master-0\" 
DevicePath \"\"" Mar 12 21:19:53.338349 master-0 kubenswrapper[31456]: I0312 21:19:53.338282 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-g7qhk" event={"ID":"589a88d1-7c6d-4fd1-bbe0-39b2d1830238","Type":"ContainerStarted","Data":"5b7b406541376a3b7e1a55470e6d075e2c13f69377230c07d4071c9b3d21b19b"} Mar 12 21:19:53.340976 master-0 kubenswrapper[31456]: I0312 21:19:53.340921 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljpp7" event={"ID":"c802f590-bb97-4ebb-a5b0-8fcacaecc2e5","Type":"ContainerDied","Data":"73488d27fbe6d63031a1fb71075890d0de70df725b8d65d2792cc9f19f0e0e64"} Mar 12 21:19:53.341068 master-0 kubenswrapper[31456]: I0312 21:19:53.340978 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73488d27fbe6d63031a1fb71075890d0de70df725b8d65d2792cc9f19f0e0e64" Mar 12 21:19:53.341068 master-0 kubenswrapper[31456]: I0312 21:19:53.341032 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08ljpp7" Mar 12 21:19:57.434945 master-0 kubenswrapper[31456]: I0312 21:19:57.434126 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-g7qhk" event={"ID":"589a88d1-7c6d-4fd1-bbe0-39b2d1830238","Type":"ContainerStarted","Data":"48f08797617ca280c577b87f28105c4c38fdc0c9f9807fee572002692d8bac9f"} Mar 12 21:19:57.469220 master-0 kubenswrapper[31456]: I0312 21:19:57.467920 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-g7qhk" podStartSLOduration=1.892890352 podStartE2EDuration="5.46790126s" podCreationTimestamp="2026-03-12 21:19:52 +0000 UTC" firstStartedPulling="2026-03-12 21:19:52.901094844 +0000 UTC m=+653.975700172" lastFinishedPulling="2026-03-12 21:19:56.476105752 +0000 UTC m=+657.550711080" observedRunningTime="2026-03-12 21:19:57.46544338 +0000 UTC m=+658.540048708" watchObservedRunningTime="2026-03-12 21:19:57.46790126 +0000 UTC m=+658.542506578" Mar 12 21:20:01.279782 master-0 kubenswrapper[31456]: I0312 21:20:01.279712 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-w59gc"] Mar 12 21:20:01.280310 master-0 kubenswrapper[31456]: E0312 21:20:01.280084 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c802f590-bb97-4ebb-a5b0-8fcacaecc2e5" containerName="extract" Mar 12 21:20:01.280310 master-0 kubenswrapper[31456]: I0312 21:20:01.280100 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="c802f590-bb97-4ebb-a5b0-8fcacaecc2e5" containerName="extract" Mar 12 21:20:01.280310 master-0 kubenswrapper[31456]: E0312 21:20:01.280121 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c802f590-bb97-4ebb-a5b0-8fcacaecc2e5" containerName="pull" Mar 12 21:20:01.280310 master-0 
kubenswrapper[31456]: I0312 21:20:01.280130 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="c802f590-bb97-4ebb-a5b0-8fcacaecc2e5" containerName="pull" Mar 12 21:20:01.280310 master-0 kubenswrapper[31456]: E0312 21:20:01.280147 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c802f590-bb97-4ebb-a5b0-8fcacaecc2e5" containerName="util" Mar 12 21:20:01.280310 master-0 kubenswrapper[31456]: I0312 21:20:01.280166 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="c802f590-bb97-4ebb-a5b0-8fcacaecc2e5" containerName="util" Mar 12 21:20:01.280492 master-0 kubenswrapper[31456]: I0312 21:20:01.280382 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="c802f590-bb97-4ebb-a5b0-8fcacaecc2e5" containerName="extract" Mar 12 21:20:01.293493 master-0 kubenswrapper[31456]: I0312 21:20:01.293449 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-w59gc"] Mar 12 21:20:01.293702 master-0 kubenswrapper[31456]: I0312 21:20:01.293549 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-w59gc" Mar 12 21:20:01.296011 master-0 kubenswrapper[31456]: I0312 21:20:01.295975 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Mar 12 21:20:01.310136 master-0 kubenswrapper[31456]: I0312 21:20:01.301553 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Mar 12 21:20:01.344723 master-0 kubenswrapper[31456]: I0312 21:20:01.344643 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4ae89e71-9f7b-48b6-8e21-5e7f46739ce0-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-w59gc\" (UID: \"4ae89e71-9f7b-48b6-8e21-5e7f46739ce0\") " pod="cert-manager/cert-manager-webhook-6888856db4-w59gc" Mar 12 21:20:01.344947 master-0 kubenswrapper[31456]: I0312 21:20:01.344850 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz4rn\" (UniqueName: \"kubernetes.io/projected/4ae89e71-9f7b-48b6-8e21-5e7f46739ce0-kube-api-access-jz4rn\") pod \"cert-manager-webhook-6888856db4-w59gc\" (UID: \"4ae89e71-9f7b-48b6-8e21-5e7f46739ce0\") " pod="cert-manager/cert-manager-webhook-6888856db4-w59gc" Mar 12 21:20:01.445847 master-0 kubenswrapper[31456]: I0312 21:20:01.445782 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4ae89e71-9f7b-48b6-8e21-5e7f46739ce0-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-w59gc\" (UID: \"4ae89e71-9f7b-48b6-8e21-5e7f46739ce0\") " pod="cert-manager/cert-manager-webhook-6888856db4-w59gc" Mar 12 21:20:01.446069 master-0 kubenswrapper[31456]: I0312 21:20:01.445907 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz4rn\" (UniqueName: 
\"kubernetes.io/projected/4ae89e71-9f7b-48b6-8e21-5e7f46739ce0-kube-api-access-jz4rn\") pod \"cert-manager-webhook-6888856db4-w59gc\" (UID: \"4ae89e71-9f7b-48b6-8e21-5e7f46739ce0\") " pod="cert-manager/cert-manager-webhook-6888856db4-w59gc"
Mar 12 21:20:01.467701 master-0 kubenswrapper[31456]: I0312 21:20:01.467654 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4ae89e71-9f7b-48b6-8e21-5e7f46739ce0-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-w59gc\" (UID: \"4ae89e71-9f7b-48b6-8e21-5e7f46739ce0\") " pod="cert-manager/cert-manager-webhook-6888856db4-w59gc"
Mar 12 21:20:01.468401 master-0 kubenswrapper[31456]: I0312 21:20:01.468365 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz4rn\" (UniqueName: \"kubernetes.io/projected/4ae89e71-9f7b-48b6-8e21-5e7f46739ce0-kube-api-access-jz4rn\") pod \"cert-manager-webhook-6888856db4-w59gc\" (UID: \"4ae89e71-9f7b-48b6-8e21-5e7f46739ce0\") " pod="cert-manager/cert-manager-webhook-6888856db4-w59gc"
Mar 12 21:20:01.626721 master-0 kubenswrapper[31456]: I0312 21:20:01.626666 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-w59gc"
Mar 12 21:20:02.155152 master-0 kubenswrapper[31456]: I0312 21:20:02.155084 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-w59gc"]
Mar 12 21:20:02.166584 master-0 kubenswrapper[31456]: W0312 21:20:02.166521 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ae89e71_9f7b_48b6_8e21_5e7f46739ce0.slice/crio-00acf4f0cdea340b400e8d1cb4c1218a29349489688928ea4eba5d55ec0922e3 WatchSource:0}: Error finding container 00acf4f0cdea340b400e8d1cb4c1218a29349489688928ea4eba5d55ec0922e3: Status 404 returned error can't find the container with id 00acf4f0cdea340b400e8d1cb4c1218a29349489688928ea4eba5d55ec0922e3
Mar 12 21:20:02.472144 master-0 kubenswrapper[31456]: I0312 21:20:02.472035 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-w59gc" event={"ID":"4ae89e71-9f7b-48b6-8e21-5e7f46739ce0","Type":"ContainerStarted","Data":"00acf4f0cdea340b400e8d1cb4c1218a29349489688928ea4eba5d55ec0922e3"}
Mar 12 21:20:03.810402 master-0 kubenswrapper[31456]: I0312 21:20:03.810328 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-k5lj7"]
Mar 12 21:20:03.811247 master-0 kubenswrapper[31456]: I0312 21:20:03.811221 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-k5lj7"
Mar 12 21:20:03.814688 master-0 kubenswrapper[31456]: I0312 21:20:03.814634 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Mar 12 21:20:03.828438 master-0 kubenswrapper[31456]: I0312 21:20:03.828374 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Mar 12 21:20:03.860042 master-0 kubenswrapper[31456]: I0312 21:20:03.859972 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-k5lj7"]
Mar 12 21:20:03.989856 master-0 kubenswrapper[31456]: I0312 21:20:03.989593 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkfh2\" (UniqueName: \"kubernetes.io/projected/9b092033-2beb-4fb5-a77d-6d962a0aaa4f-kube-api-access-nkfh2\") pod \"nmstate-operator-796d4cfff4-k5lj7\" (UID: \"9b092033-2beb-4fb5-a77d-6d962a0aaa4f\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-k5lj7"
Mar 12 21:20:04.028881 master-0 kubenswrapper[31456]: I0312 21:20:04.027768 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-w7gf4"]
Mar 12 21:20:04.028881 master-0 kubenswrapper[31456]: I0312 21:20:04.028603 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-w7gf4"
Mar 12 21:20:04.063514 master-0 kubenswrapper[31456]: I0312 21:20:04.063330 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-w7gf4"]
Mar 12 21:20:04.090852 master-0 kubenswrapper[31456]: I0312 21:20:04.090742 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkfh2\" (UniqueName: \"kubernetes.io/projected/9b092033-2beb-4fb5-a77d-6d962a0aaa4f-kube-api-access-nkfh2\") pod \"nmstate-operator-796d4cfff4-k5lj7\" (UID: \"9b092033-2beb-4fb5-a77d-6d962a0aaa4f\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-k5lj7"
Mar 12 21:20:04.116429 master-0 kubenswrapper[31456]: I0312 21:20:04.116389 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkfh2\" (UniqueName: \"kubernetes.io/projected/9b092033-2beb-4fb5-a77d-6d962a0aaa4f-kube-api-access-nkfh2\") pod \"nmstate-operator-796d4cfff4-k5lj7\" (UID: \"9b092033-2beb-4fb5-a77d-6d962a0aaa4f\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-k5lj7"
Mar 12 21:20:04.140327 master-0 kubenswrapper[31456]: I0312 21:20:04.140279 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-k5lj7"
Mar 12 21:20:04.194833 master-0 kubenswrapper[31456]: I0312 21:20:04.191790 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ef503d91-7432-4131-a9bd-c888c85aa76e-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-w7gf4\" (UID: \"ef503d91-7432-4131-a9bd-c888c85aa76e\") " pod="cert-manager/cert-manager-cainjector-5545bd876-w7gf4"
Mar 12 21:20:04.194833 master-0 kubenswrapper[31456]: I0312 21:20:04.191881 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttlsd\" (UniqueName: \"kubernetes.io/projected/ef503d91-7432-4131-a9bd-c888c85aa76e-kube-api-access-ttlsd\") pod \"cert-manager-cainjector-5545bd876-w7gf4\" (UID: \"ef503d91-7432-4131-a9bd-c888c85aa76e\") " pod="cert-manager/cert-manager-cainjector-5545bd876-w7gf4"
Mar 12 21:20:04.293151 master-0 kubenswrapper[31456]: I0312 21:20:04.292794 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ef503d91-7432-4131-a9bd-c888c85aa76e-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-w7gf4\" (UID: \"ef503d91-7432-4131-a9bd-c888c85aa76e\") " pod="cert-manager/cert-manager-cainjector-5545bd876-w7gf4"
Mar 12 21:20:04.293151 master-0 kubenswrapper[31456]: I0312 21:20:04.292902 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttlsd\" (UniqueName: \"kubernetes.io/projected/ef503d91-7432-4131-a9bd-c888c85aa76e-kube-api-access-ttlsd\") pod \"cert-manager-cainjector-5545bd876-w7gf4\" (UID: \"ef503d91-7432-4131-a9bd-c888c85aa76e\") " pod="cert-manager/cert-manager-cainjector-5545bd876-w7gf4"
Mar 12 21:20:04.322277 master-0 kubenswrapper[31456]: I0312 21:20:04.321562 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ef503d91-7432-4131-a9bd-c888c85aa76e-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-w7gf4\" (UID: \"ef503d91-7432-4131-a9bd-c888c85aa76e\") " pod="cert-manager/cert-manager-cainjector-5545bd876-w7gf4"
Mar 12 21:20:04.345380 master-0 kubenswrapper[31456]: I0312 21:20:04.343591 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttlsd\" (UniqueName: \"kubernetes.io/projected/ef503d91-7432-4131-a9bd-c888c85aa76e-kube-api-access-ttlsd\") pod \"cert-manager-cainjector-5545bd876-w7gf4\" (UID: \"ef503d91-7432-4131-a9bd-c888c85aa76e\") " pod="cert-manager/cert-manager-cainjector-5545bd876-w7gf4"
Mar 12 21:20:04.394164 master-0 kubenswrapper[31456]: I0312 21:20:04.394120 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-w7gf4"
Mar 12 21:20:04.645742 master-0 kubenswrapper[31456]: I0312 21:20:04.643508 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-k5lj7"]
Mar 12 21:20:04.869837 master-0 kubenswrapper[31456]: I0312 21:20:04.869760 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-w7gf4"]
Mar 12 21:20:04.873020 master-0 kubenswrapper[31456]: W0312 21:20:04.872971 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef503d91_7432_4131_a9bd_c888c85aa76e.slice/crio-b22af768850f32932fe2d7613bd502f72adf3cb4c0c39b9271f4c7d04633a0ca WatchSource:0}: Error finding container b22af768850f32932fe2d7613bd502f72adf3cb4c0c39b9271f4c7d04633a0ca: Status 404 returned error can't find the container with id b22af768850f32932fe2d7613bd502f72adf3cb4c0c39b9271f4c7d04633a0ca
Mar 12 21:20:04.941352 master-0 kubenswrapper[31456]: I0312 21:20:04.941223 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-56948584f5-fq6pt"]
Mar 12 21:20:04.942127 master-0 kubenswrapper[31456]: I0312 21:20:04.942094 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-56948584f5-fq6pt"
Mar 12 21:20:04.944377 master-0 kubenswrapper[31456]: I0312 21:20:04.944335 31456 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Mar 12 21:20:04.944458 master-0 kubenswrapper[31456]: I0312 21:20:04.944354 31456 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Mar 12 21:20:04.944793 master-0 kubenswrapper[31456]: I0312 21:20:04.944766 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Mar 12 21:20:04.944938 master-0 kubenswrapper[31456]: I0312 21:20:04.944917 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Mar 12 21:20:04.960219 master-0 kubenswrapper[31456]: I0312 21:20:04.960175 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-56948584f5-fq6pt"]
Mar 12 21:20:05.110039 master-0 kubenswrapper[31456]: I0312 21:20:05.109984 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a084b9d0-6032-4597-9969-7ae4b74b616e-webhook-cert\") pod \"metallb-operator-controller-manager-56948584f5-fq6pt\" (UID: \"a084b9d0-6032-4597-9969-7ae4b74b616e\") " pod="metallb-system/metallb-operator-controller-manager-56948584f5-fq6pt"
Mar 12 21:20:05.110239 master-0 kubenswrapper[31456]: I0312 21:20:05.110123 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a084b9d0-6032-4597-9969-7ae4b74b616e-apiservice-cert\") pod \"metallb-operator-controller-manager-56948584f5-fq6pt\" (UID: \"a084b9d0-6032-4597-9969-7ae4b74b616e\") " pod="metallb-system/metallb-operator-controller-manager-56948584f5-fq6pt"
Mar 12 21:20:05.110239 master-0 kubenswrapper[31456]: I0312 21:20:05.110161 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztql2\" (UniqueName: \"kubernetes.io/projected/a084b9d0-6032-4597-9969-7ae4b74b616e-kube-api-access-ztql2\") pod \"metallb-operator-controller-manager-56948584f5-fq6pt\" (UID: \"a084b9d0-6032-4597-9969-7ae4b74b616e\") " pod="metallb-system/metallb-operator-controller-manager-56948584f5-fq6pt"
Mar 12 21:20:05.211777 master-0 kubenswrapper[31456]: I0312 21:20:05.211638 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a084b9d0-6032-4597-9969-7ae4b74b616e-webhook-cert\") pod \"metallb-operator-controller-manager-56948584f5-fq6pt\" (UID: \"a084b9d0-6032-4597-9969-7ae4b74b616e\") " pod="metallb-system/metallb-operator-controller-manager-56948584f5-fq6pt"
Mar 12 21:20:05.212005 master-0 kubenswrapper[31456]: I0312 21:20:05.211822 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a084b9d0-6032-4597-9969-7ae4b74b616e-apiservice-cert\") pod \"metallb-operator-controller-manager-56948584f5-fq6pt\" (UID: \"a084b9d0-6032-4597-9969-7ae4b74b616e\") " pod="metallb-system/metallb-operator-controller-manager-56948584f5-fq6pt"
Mar 12 21:20:05.212005 master-0 kubenswrapper[31456]: I0312 21:20:05.211861 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztql2\" (UniqueName: \"kubernetes.io/projected/a084b9d0-6032-4597-9969-7ae4b74b616e-kube-api-access-ztql2\") pod \"metallb-operator-controller-manager-56948584f5-fq6pt\" (UID: \"a084b9d0-6032-4597-9969-7ae4b74b616e\") " pod="metallb-system/metallb-operator-controller-manager-56948584f5-fq6pt"
Mar 12 21:20:05.216118 master-0 kubenswrapper[31456]: I0312 21:20:05.216079 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a084b9d0-6032-4597-9969-7ae4b74b616e-webhook-cert\") pod \"metallb-operator-controller-manager-56948584f5-fq6pt\" (UID: \"a084b9d0-6032-4597-9969-7ae4b74b616e\") " pod="metallb-system/metallb-operator-controller-manager-56948584f5-fq6pt"
Mar 12 21:20:05.229509 master-0 kubenswrapper[31456]: I0312 21:20:05.229456 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a084b9d0-6032-4597-9969-7ae4b74b616e-apiservice-cert\") pod \"metallb-operator-controller-manager-56948584f5-fq6pt\" (UID: \"a084b9d0-6032-4597-9969-7ae4b74b616e\") " pod="metallb-system/metallb-operator-controller-manager-56948584f5-fq6pt"
Mar 12 21:20:05.230583 master-0 kubenswrapper[31456]: I0312 21:20:05.230535 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztql2\" (UniqueName: \"kubernetes.io/projected/a084b9d0-6032-4597-9969-7ae4b74b616e-kube-api-access-ztql2\") pod \"metallb-operator-controller-manager-56948584f5-fq6pt\" (UID: \"a084b9d0-6032-4597-9969-7ae4b74b616e\") " pod="metallb-system/metallb-operator-controller-manager-56948584f5-fq6pt"
Mar 12 21:20:05.261168 master-0 kubenswrapper[31456]: I0312 21:20:05.259180 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-56948584f5-fq6pt"
Mar 12 21:20:05.539719 master-0 kubenswrapper[31456]: I0312 21:20:05.538741 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-w7gf4" event={"ID":"ef503d91-7432-4131-a9bd-c888c85aa76e","Type":"ContainerStarted","Data":"b22af768850f32932fe2d7613bd502f72adf3cb4c0c39b9271f4c7d04633a0ca"}
Mar 12 21:20:05.560010 master-0 kubenswrapper[31456]: I0312 21:20:05.559600 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-68977845b8-swmpq"]
Mar 12 21:20:05.584858 master-0 kubenswrapper[31456]: I0312 21:20:05.584042 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-k5lj7" event={"ID":"9b092033-2beb-4fb5-a77d-6d962a0aaa4f","Type":"ContainerStarted","Data":"f4d33612148045754eb5b6832805d53c5720dde1bf948dfe028d013cd73e86ae"}
Mar 12 21:20:05.584858 master-0 kubenswrapper[31456]: I0312 21:20:05.584181 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-68977845b8-swmpq"
Mar 12 21:20:05.592846 master-0 kubenswrapper[31456]: I0312 21:20:05.591250 31456 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Mar 12 21:20:05.592846 master-0 kubenswrapper[31456]: I0312 21:20:05.591516 31456 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Mar 12 21:20:05.618761 master-0 kubenswrapper[31456]: I0312 21:20:05.614094 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-68977845b8-swmpq"]
Mar 12 21:20:05.624655 master-0 kubenswrapper[31456]: I0312 21:20:05.624232 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/37bc2160-a562-4bab-b2dc-fc206fd60e04-webhook-cert\") pod \"metallb-operator-webhook-server-68977845b8-swmpq\" (UID: \"37bc2160-a562-4bab-b2dc-fc206fd60e04\") " pod="metallb-system/metallb-operator-webhook-server-68977845b8-swmpq"
Mar 12 21:20:05.624655 master-0 kubenswrapper[31456]: I0312 21:20:05.624294 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4n8q\" (UniqueName: \"kubernetes.io/projected/37bc2160-a562-4bab-b2dc-fc206fd60e04-kube-api-access-l4n8q\") pod \"metallb-operator-webhook-server-68977845b8-swmpq\" (UID: \"37bc2160-a562-4bab-b2dc-fc206fd60e04\") " pod="metallb-system/metallb-operator-webhook-server-68977845b8-swmpq"
Mar 12 21:20:05.624655 master-0 kubenswrapper[31456]: I0312 21:20:05.624317 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/37bc2160-a562-4bab-b2dc-fc206fd60e04-apiservice-cert\") pod \"metallb-operator-webhook-server-68977845b8-swmpq\" (UID: \"37bc2160-a562-4bab-b2dc-fc206fd60e04\") " pod="metallb-system/metallb-operator-webhook-server-68977845b8-swmpq"
Mar 12 21:20:05.727847 master-0 kubenswrapper[31456]: I0312 21:20:05.725995 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/37bc2160-a562-4bab-b2dc-fc206fd60e04-webhook-cert\") pod \"metallb-operator-webhook-server-68977845b8-swmpq\" (UID: \"37bc2160-a562-4bab-b2dc-fc206fd60e04\") " pod="metallb-system/metallb-operator-webhook-server-68977845b8-swmpq"
Mar 12 21:20:05.727847 master-0 kubenswrapper[31456]: I0312 21:20:05.726058 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4n8q\" (UniqueName: \"kubernetes.io/projected/37bc2160-a562-4bab-b2dc-fc206fd60e04-kube-api-access-l4n8q\") pod \"metallb-operator-webhook-server-68977845b8-swmpq\" (UID: \"37bc2160-a562-4bab-b2dc-fc206fd60e04\") " pod="metallb-system/metallb-operator-webhook-server-68977845b8-swmpq"
Mar 12 21:20:05.727847 master-0 kubenswrapper[31456]: I0312 21:20:05.726118 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/37bc2160-a562-4bab-b2dc-fc206fd60e04-apiservice-cert\") pod \"metallb-operator-webhook-server-68977845b8-swmpq\" (UID: \"37bc2160-a562-4bab-b2dc-fc206fd60e04\") " pod="metallb-system/metallb-operator-webhook-server-68977845b8-swmpq"
Mar 12 21:20:05.736246 master-0 kubenswrapper[31456]: I0312 21:20:05.736034 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/37bc2160-a562-4bab-b2dc-fc206fd60e04-webhook-cert\") pod \"metallb-operator-webhook-server-68977845b8-swmpq\" (UID: \"37bc2160-a562-4bab-b2dc-fc206fd60e04\") " pod="metallb-system/metallb-operator-webhook-server-68977845b8-swmpq"
Mar 12 21:20:05.737901 master-0 kubenswrapper[31456]: I0312 21:20:05.737831 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-56948584f5-fq6pt"]
Mar 12 21:20:05.740200 master-0 kubenswrapper[31456]: I0312 21:20:05.740161 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/37bc2160-a562-4bab-b2dc-fc206fd60e04-apiservice-cert\") pod \"metallb-operator-webhook-server-68977845b8-swmpq\" (UID: \"37bc2160-a562-4bab-b2dc-fc206fd60e04\") " pod="metallb-system/metallb-operator-webhook-server-68977845b8-swmpq"
Mar 12 21:20:05.764993 master-0 kubenswrapper[31456]: I0312 21:20:05.764953 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4n8q\" (UniqueName: \"kubernetes.io/projected/37bc2160-a562-4bab-b2dc-fc206fd60e04-kube-api-access-l4n8q\") pod \"metallb-operator-webhook-server-68977845b8-swmpq\" (UID: \"37bc2160-a562-4bab-b2dc-fc206fd60e04\") " pod="metallb-system/metallb-operator-webhook-server-68977845b8-swmpq"
Mar 12 21:20:05.964842 master-0 kubenswrapper[31456]: I0312 21:20:05.963149 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-68977845b8-swmpq"
Mar 12 21:20:06.507689 master-0 kubenswrapper[31456]: I0312 21:20:06.489354 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-68977845b8-swmpq"]
Mar 12 21:20:06.514917 master-0 kubenswrapper[31456]: W0312 21:20:06.512180 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37bc2160_a562_4bab_b2dc_fc206fd60e04.slice/crio-588c20a458aa71c82df49930ba39274c2eba6b40ae5372dac6bd79ffcd678807 WatchSource:0}: Error finding container 588c20a458aa71c82df49930ba39274c2eba6b40ae5372dac6bd79ffcd678807: Status 404 returned error can't find the container with id 588c20a458aa71c82df49930ba39274c2eba6b40ae5372dac6bd79ffcd678807
Mar 12 21:20:06.589070 master-0 kubenswrapper[31456]: I0312 21:20:06.588745 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-68977845b8-swmpq" event={"ID":"37bc2160-a562-4bab-b2dc-fc206fd60e04","Type":"ContainerStarted","Data":"588c20a458aa71c82df49930ba39274c2eba6b40ae5372dac6bd79ffcd678807"}
Mar 12 21:20:06.591449 master-0 kubenswrapper[31456]: I0312 21:20:06.591414 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-56948584f5-fq6pt" event={"ID":"a084b9d0-6032-4597-9969-7ae4b74b616e","Type":"ContainerStarted","Data":"94162c291109879d524816bb1b5f7578cea14281cd1c0051ff265b7c22fd13fb"}
Mar 12 21:20:08.629300 master-0 kubenswrapper[31456]: I0312 21:20:08.629217 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-k5lj7" event={"ID":"9b092033-2beb-4fb5-a77d-6d962a0aaa4f","Type":"ContainerStarted","Data":"32fc5b0f8b1643a3cb483fcdc8eb0089aa5e391b0ee840fead1fce5a9c70106c"}
Mar 12 21:20:08.673729 master-0 kubenswrapper[31456]: I0312 21:20:08.673571 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-796d4cfff4-k5lj7" podStartSLOduration=2.390349892 podStartE2EDuration="5.67354325s" podCreationTimestamp="2026-03-12 21:20:03 +0000 UTC" firstStartedPulling="2026-03-12 21:20:04.67467441 +0000 UTC m=+665.749279738" lastFinishedPulling="2026-03-12 21:20:07.957867768 +0000 UTC m=+669.032473096" observedRunningTime="2026-03-12 21:20:08.668900087 +0000 UTC m=+669.743505415" watchObservedRunningTime="2026-03-12 21:20:08.67354325 +0000 UTC m=+669.748148578"
Mar 12 21:20:13.207558 master-0 kubenswrapper[31456]: I0312 21:20:13.207438 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-j6b5k"]
Mar 12 21:20:13.208374 master-0 kubenswrapper[31456]: I0312 21:20:13.208349 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-j6b5k"
Mar 12 21:20:13.235188 master-0 kubenswrapper[31456]: I0312 21:20:13.235145 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-j6b5k"]
Mar 12 21:20:13.314783 master-0 kubenswrapper[31456]: I0312 21:20:13.314495 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flz7d\" (UniqueName: \"kubernetes.io/projected/878f39e9-04dc-445c-9f18-49dc4d155e1b-kube-api-access-flz7d\") pod \"cert-manager-545d4d4674-j6b5k\" (UID: \"878f39e9-04dc-445c-9f18-49dc4d155e1b\") " pod="cert-manager/cert-manager-545d4d4674-j6b5k"
Mar 12 21:20:13.314783 master-0 kubenswrapper[31456]: I0312 21:20:13.314594 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/878f39e9-04dc-445c-9f18-49dc4d155e1b-bound-sa-token\") pod \"cert-manager-545d4d4674-j6b5k\" (UID: \"878f39e9-04dc-445c-9f18-49dc4d155e1b\") " pod="cert-manager/cert-manager-545d4d4674-j6b5k"
Mar 12 21:20:13.419827 master-0 kubenswrapper[31456]: I0312 21:20:13.419247 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flz7d\" (UniqueName: \"kubernetes.io/projected/878f39e9-04dc-445c-9f18-49dc4d155e1b-kube-api-access-flz7d\") pod \"cert-manager-545d4d4674-j6b5k\" (UID: \"878f39e9-04dc-445c-9f18-49dc4d155e1b\") " pod="cert-manager/cert-manager-545d4d4674-j6b5k"
Mar 12 21:20:13.419827 master-0 kubenswrapper[31456]: I0312 21:20:13.419340 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/878f39e9-04dc-445c-9f18-49dc4d155e1b-bound-sa-token\") pod \"cert-manager-545d4d4674-j6b5k\" (UID: \"878f39e9-04dc-445c-9f18-49dc4d155e1b\") " pod="cert-manager/cert-manager-545d4d4674-j6b5k"
Mar 12 21:20:13.463684 master-0 kubenswrapper[31456]: I0312 21:20:13.460750 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flz7d\" (UniqueName: \"kubernetes.io/projected/878f39e9-04dc-445c-9f18-49dc4d155e1b-kube-api-access-flz7d\") pod \"cert-manager-545d4d4674-j6b5k\" (UID: \"878f39e9-04dc-445c-9f18-49dc4d155e1b\") " pod="cert-manager/cert-manager-545d4d4674-j6b5k"
Mar 12 21:20:13.478507 master-0 kubenswrapper[31456]: I0312 21:20:13.478408 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/878f39e9-04dc-445c-9f18-49dc4d155e1b-bound-sa-token\") pod \"cert-manager-545d4d4674-j6b5k\" (UID: \"878f39e9-04dc-445c-9f18-49dc4d155e1b\") " pod="cert-manager/cert-manager-545d4d4674-j6b5k"
Mar 12 21:20:13.604437 master-0 kubenswrapper[31456]: I0312 21:20:13.604141 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-j6b5k"
Mar 12 21:20:17.143672 master-0 kubenswrapper[31456]: I0312 21:20:17.143623 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-j6b5k"]
Mar 12 21:20:17.146026 master-0 kubenswrapper[31456]: W0312 21:20:17.145974 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod878f39e9_04dc_445c_9f18_49dc4d155e1b.slice/crio-9b09337f14d519c522e5c69f4cb84412b7ee57efbc17a4b8aacb4cd986db3072 WatchSource:0}: Error finding container 9b09337f14d519c522e5c69f4cb84412b7ee57efbc17a4b8aacb4cd986db3072: Status 404 returned error can't find the container with id 9b09337f14d519c522e5c69f4cb84412b7ee57efbc17a4b8aacb4cd986db3072
Mar 12 21:20:17.745114 master-0 kubenswrapper[31456]: I0312 21:20:17.744998 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-68977845b8-swmpq" event={"ID":"37bc2160-a562-4bab-b2dc-fc206fd60e04","Type":"ContainerStarted","Data":"fa013b28e419df9c01bb5bf2993be769d68531c9336840ad20430eea6b475444"}
Mar 12 21:20:17.747997 master-0 kubenswrapper[31456]: I0312 21:20:17.747915 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-w59gc" event={"ID":"4ae89e71-9f7b-48b6-8e21-5e7f46739ce0","Type":"ContainerStarted","Data":"35165f4ba5a9ea4e5745f5f1cb2dfdaec021a51f62fddda0d68c6f399c35664f"}
Mar 12 21:20:17.748203 master-0 kubenswrapper[31456]: I0312 21:20:17.748157 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-w59gc"
Mar 12 21:20:17.750448 master-0 kubenswrapper[31456]: I0312 21:20:17.750365 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-56948584f5-fq6pt" event={"ID":"a084b9d0-6032-4597-9969-7ae4b74b616e","Type":"ContainerStarted","Data":"3bf292389b8c5efd55ebba42e17455f1d7ffa23be50d1dce697e0ac6f17f8901"}
Mar 12 21:20:17.751085 master-0 kubenswrapper[31456]: I0312 21:20:17.751006 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-56948584f5-fq6pt"
Mar 12 21:20:17.752710 master-0 kubenswrapper[31456]: I0312 21:20:17.752653 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-j6b5k" event={"ID":"878f39e9-04dc-445c-9f18-49dc4d155e1b","Type":"ContainerStarted","Data":"80421e27bca6b790517179c82e5af9cd4a6627bf87a757bbad258b103ee622cc"}
Mar 12 21:20:17.752710 master-0 kubenswrapper[31456]: I0312 21:20:17.752709 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-j6b5k" event={"ID":"878f39e9-04dc-445c-9f18-49dc4d155e1b","Type":"ContainerStarted","Data":"9b09337f14d519c522e5c69f4cb84412b7ee57efbc17a4b8aacb4cd986db3072"}
Mar 12 21:20:17.754661 master-0 kubenswrapper[31456]: I0312 21:20:17.754617 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-w7gf4" event={"ID":"ef503d91-7432-4131-a9bd-c888c85aa76e","Type":"ContainerStarted","Data":"aff71c3884514c1a30a5d89d6ae5f0a3cb8aa9837fc070415af6c2be281aaade"}
Mar 12 21:20:17.788668 master-0 kubenswrapper[31456]: I0312 21:20:17.788537 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-68977845b8-swmpq" podStartSLOduration=2.557551408 podStartE2EDuration="12.788503425s" podCreationTimestamp="2026-03-12 21:20:05 +0000 UTC" firstStartedPulling="2026-03-12 21:20:06.523192844 +0000 UTC m=+667.597798172" lastFinishedPulling="2026-03-12 21:20:16.754144841 +0000 UTC m=+677.828750189" observedRunningTime="2026-03-12 21:20:17.777569459 +0000 UTC m=+678.852174857" watchObservedRunningTime="2026-03-12 21:20:17.788503425 +0000 UTC m=+678.863108793"
Mar 12 21:20:17.824169 master-0 kubenswrapper[31456]: I0312 21:20:17.824027 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-w7gf4" podStartSLOduration=1.988172184 podStartE2EDuration="13.824000515s" podCreationTimestamp="2026-03-12 21:20:04 +0000 UTC" firstStartedPulling="2026-03-12 21:20:04.87588433 +0000 UTC m=+665.950489658" lastFinishedPulling="2026-03-12 21:20:16.711712661 +0000 UTC m=+677.786317989" observedRunningTime="2026-03-12 21:20:17.806965802 +0000 UTC m=+678.881571130" watchObservedRunningTime="2026-03-12 21:20:17.824000515 +0000 UTC m=+678.898605873"
Mar 12 21:20:17.862948 master-0 kubenswrapper[31456]: I0312 21:20:17.862749 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-j6b5k" podStartSLOduration=4.862724794 podStartE2EDuration="4.862724794s" podCreationTimestamp="2026-03-12 21:20:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:20:17.828425612 +0000 UTC m=+678.903030980" watchObservedRunningTime="2026-03-12 21:20:17.862724794 +0000 UTC m=+678.937330142"
Mar 12 21:20:17.900099 master-0 kubenswrapper[31456]: I0312 21:20:17.899013 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-56948584f5-fq6pt" podStartSLOduration=2.961147112 podStartE2EDuration="13.898992413s" podCreationTimestamp="2026-03-12 21:20:04 +0000 UTC" firstStartedPulling="2026-03-12 21:20:05.772882347 +0000 UTC m=+666.847487675" lastFinishedPulling="2026-03-12 21:20:16.710727648 +0000 UTC m=+677.785332976" observedRunningTime="2026-03-12 21:20:17.884069072 +0000 UTC m=+678.958674410" watchObservedRunningTime="2026-03-12 21:20:17.898992413 +0000 UTC m=+678.973597741"
Mar 12 21:20:17.950306 master-0 kubenswrapper[31456]: I0312 21:20:17.949522 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-w59gc" podStartSLOduration=2.449440286 podStartE2EDuration="16.949503618s" podCreationTimestamp="2026-03-12 21:20:01 +0000 UTC" firstStartedPulling="2026-03-12 21:20:02.174251363 +0000 UTC m=+663.248856681" lastFinishedPulling="2026-03-12 21:20:16.674314685 +0000 UTC m=+677.748920013" observedRunningTime="2026-03-12 21:20:17.948356531 +0000 UTC m=+679.022961859" watchObservedRunningTime="2026-03-12 21:20:17.949503618 +0000 UTC m=+679.024108946"
Mar 12 21:20:18.335847 master-0 kubenswrapper[31456]: I0312 21:20:18.335766 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-v4l9k"]
Mar 12 21:20:18.336914 master-0 kubenswrapper[31456]: I0312 21:20:18.336887 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v4l9k"
Mar 12 21:20:18.339565 master-0 kubenswrapper[31456]: I0312 21:20:18.339516 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Mar 12 21:20:18.339929 master-0 kubenswrapper[31456]: I0312 21:20:18.339902 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Mar 12 21:20:18.359068 master-0 kubenswrapper[31456]: I0312 21:20:18.359004 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-v4l9k"]
Mar 12 21:20:18.416836 master-0 kubenswrapper[31456]: I0312 21:20:18.415842 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6zhj\" (UniqueName: \"kubernetes.io/projected/1ce3b723-c22e-4fed-837f-c288dd1cdd5d-kube-api-access-m6zhj\") pod \"obo-prometheus-operator-68bc856cb9-v4l9k\" (UID: \"1ce3b723-c22e-4fed-837f-c288dd1cdd5d\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v4l9k"
Mar 12 21:20:18.517659 master-0 kubenswrapper[31456]: I0312 21:20:18.517592 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6zhj\" (UniqueName: \"kubernetes.io/projected/1ce3b723-c22e-4fed-837f-c288dd1cdd5d-kube-api-access-m6zhj\") pod \"obo-prometheus-operator-68bc856cb9-v4l9k\" (UID: \"1ce3b723-c22e-4fed-837f-c288dd1cdd5d\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v4l9k"
Mar 12 21:20:18.548358 master-0 kubenswrapper[31456]: I0312 21:20:18.548310 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6zhj\" (UniqueName: \"kubernetes.io/projected/1ce3b723-c22e-4fed-837f-c288dd1cdd5d-kube-api-access-m6zhj\") pod \"obo-prometheus-operator-68bc856cb9-v4l9k\" (UID: \"1ce3b723-c22e-4fed-837f-c288dd1cdd5d\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v4l9k"
Mar 12 21:20:18.553131 master-0 kubenswrapper[31456]: I0312 21:20:18.553089 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5549f5dcc9-pkcj7"]
Mar 12 21:20:18.554143 master-0 kubenswrapper[31456]: I0312 21:20:18.554124 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5549f5dcc9-pkcj7"
Mar 12 21:20:18.557356 master-0 kubenswrapper[31456]: I0312 21:20:18.557222 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Mar 12 21:20:18.562944 master-0 kubenswrapper[31456]: I0312 21:20:18.562801 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5549f5dcc9-jvvgf"]
Mar 12 21:20:18.563890 master-0 kubenswrapper[31456]: I0312 21:20:18.563858 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5549f5dcc9-jvvgf"
Mar 12 21:20:18.573621 master-0 kubenswrapper[31456]: I0312 21:20:18.573554 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5549f5dcc9-pkcj7"]
Mar 12 21:20:18.603594 master-0 kubenswrapper[31456]: I0312 21:20:18.603544 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5549f5dcc9-jvvgf"]
Mar 12 21:20:18.620047 master-0 kubenswrapper[31456]: I0312 21:20:18.620001 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2f9c03cb-0a6e-4605-8a53-695249ae7943-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5549f5dcc9-jvvgf\" (UID: \"2f9c03cb-0a6e-4605-8a53-695249ae7943\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5549f5dcc9-jvvgf"
Mar 12 21:20:18.620323 master-0 kubenswrapper[31456]: I0312 21:20:18.620307 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2f9c03cb-0a6e-4605-8a53-695249ae7943-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-5549f5dcc9-jvvgf\" (UID: \"2f9c03cb-0a6e-4605-8a53-695249ae7943\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5549f5dcc9-jvvgf" Mar 12 21:20:18.620417 master-0 kubenswrapper[31456]: I0312 21:20:18.620404 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/22de3a9a-836e-4f1c-91ca-aa81a2fbf1cb-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5549f5dcc9-pkcj7\" (UID: \"22de3a9a-836e-4f1c-91ca-aa81a2fbf1cb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5549f5dcc9-pkcj7" Mar 12 21:20:18.620498 master-0 kubenswrapper[31456]: I0312 21:20:18.620485 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/22de3a9a-836e-4f1c-91ca-aa81a2fbf1cb-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5549f5dcc9-pkcj7\" (UID: \"22de3a9a-836e-4f1c-91ca-aa81a2fbf1cb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5549f5dcc9-pkcj7" Mar 12 21:20:18.653610 master-0 kubenswrapper[31456]: I0312 21:20:18.653548 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v4l9k" Mar 12 21:20:18.723228 master-0 kubenswrapper[31456]: I0312 21:20:18.722097 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2f9c03cb-0a6e-4605-8a53-695249ae7943-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5549f5dcc9-jvvgf\" (UID: \"2f9c03cb-0a6e-4605-8a53-695249ae7943\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5549f5dcc9-jvvgf" Mar 12 21:20:18.723228 master-0 kubenswrapper[31456]: I0312 21:20:18.722158 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2f9c03cb-0a6e-4605-8a53-695249ae7943-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5549f5dcc9-jvvgf\" (UID: \"2f9c03cb-0a6e-4605-8a53-695249ae7943\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5549f5dcc9-jvvgf" Mar 12 21:20:18.723228 master-0 kubenswrapper[31456]: I0312 21:20:18.722348 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/22de3a9a-836e-4f1c-91ca-aa81a2fbf1cb-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5549f5dcc9-pkcj7\" (UID: \"22de3a9a-836e-4f1c-91ca-aa81a2fbf1cb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5549f5dcc9-pkcj7" Mar 12 21:20:18.723228 master-0 kubenswrapper[31456]: I0312 21:20:18.722383 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/22de3a9a-836e-4f1c-91ca-aa81a2fbf1cb-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5549f5dcc9-pkcj7\" (UID: \"22de3a9a-836e-4f1c-91ca-aa81a2fbf1cb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5549f5dcc9-pkcj7" Mar 12 21:20:18.726341 master-0 
kubenswrapper[31456]: I0312 21:20:18.726300 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2f9c03cb-0a6e-4605-8a53-695249ae7943-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5549f5dcc9-jvvgf\" (UID: \"2f9c03cb-0a6e-4605-8a53-695249ae7943\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5549f5dcc9-jvvgf" Mar 12 21:20:18.727059 master-0 kubenswrapper[31456]: I0312 21:20:18.727020 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2f9c03cb-0a6e-4605-8a53-695249ae7943-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5549f5dcc9-jvvgf\" (UID: \"2f9c03cb-0a6e-4605-8a53-695249ae7943\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5549f5dcc9-jvvgf" Mar 12 21:20:18.728355 master-0 kubenswrapper[31456]: I0312 21:20:18.728314 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/22de3a9a-836e-4f1c-91ca-aa81a2fbf1cb-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5549f5dcc9-pkcj7\" (UID: \"22de3a9a-836e-4f1c-91ca-aa81a2fbf1cb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5549f5dcc9-pkcj7" Mar 12 21:20:18.730748 master-0 kubenswrapper[31456]: I0312 21:20:18.728752 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/22de3a9a-836e-4f1c-91ca-aa81a2fbf1cb-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5549f5dcc9-pkcj7\" (UID: \"22de3a9a-836e-4f1c-91ca-aa81a2fbf1cb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5549f5dcc9-pkcj7" Mar 12 21:20:18.780836 master-0 kubenswrapper[31456]: I0312 21:20:18.777636 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-webhook-server-68977845b8-swmpq" Mar 12 21:20:18.793822 master-0 kubenswrapper[31456]: I0312 21:20:18.790644 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-w46fm"] Mar 12 21:20:18.793822 master-0 kubenswrapper[31456]: I0312 21:20:18.791690 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-w46fm" Mar 12 21:20:18.795620 master-0 kubenswrapper[31456]: I0312 21:20:18.795125 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Mar 12 21:20:18.809648 master-0 kubenswrapper[31456]: I0312 21:20:18.806759 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-w46fm"] Mar 12 21:20:18.829245 master-0 kubenswrapper[31456]: I0312 21:20:18.824779 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7eb04ca9-6603-41d1-b2b1-8858b953a30b-observability-operator-tls\") pod \"observability-operator-59bdc8b94-w46fm\" (UID: \"7eb04ca9-6603-41d1-b2b1-8858b953a30b\") " pod="openshift-operators/observability-operator-59bdc8b94-w46fm" Mar 12 21:20:18.829245 master-0 kubenswrapper[31456]: I0312 21:20:18.824888 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcnc5\" (UniqueName: \"kubernetes.io/projected/7eb04ca9-6603-41d1-b2b1-8858b953a30b-kube-api-access-fcnc5\") pod \"observability-operator-59bdc8b94-w46fm\" (UID: \"7eb04ca9-6603-41d1-b2b1-8858b953a30b\") " pod="openshift-operators/observability-operator-59bdc8b94-w46fm" Mar 12 21:20:18.888921 master-0 kubenswrapper[31456]: I0312 21:20:18.887453 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5549f5dcc9-pkcj7" Mar 12 21:20:18.914113 master-0 kubenswrapper[31456]: I0312 21:20:18.914059 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5549f5dcc9-jvvgf" Mar 12 21:20:18.941112 master-0 kubenswrapper[31456]: I0312 21:20:18.941053 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7eb04ca9-6603-41d1-b2b1-8858b953a30b-observability-operator-tls\") pod \"observability-operator-59bdc8b94-w46fm\" (UID: \"7eb04ca9-6603-41d1-b2b1-8858b953a30b\") " pod="openshift-operators/observability-operator-59bdc8b94-w46fm" Mar 12 21:20:18.941112 master-0 kubenswrapper[31456]: I0312 21:20:18.941129 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcnc5\" (UniqueName: \"kubernetes.io/projected/7eb04ca9-6603-41d1-b2b1-8858b953a30b-kube-api-access-fcnc5\") pod \"observability-operator-59bdc8b94-w46fm\" (UID: \"7eb04ca9-6603-41d1-b2b1-8858b953a30b\") " pod="openshift-operators/observability-operator-59bdc8b94-w46fm" Mar 12 21:20:18.949568 master-0 kubenswrapper[31456]: I0312 21:20:18.949517 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7eb04ca9-6603-41d1-b2b1-8858b953a30b-observability-operator-tls\") pod \"observability-operator-59bdc8b94-w46fm\" (UID: \"7eb04ca9-6603-41d1-b2b1-8858b953a30b\") " pod="openshift-operators/observability-operator-59bdc8b94-w46fm" Mar 12 21:20:18.968845 master-0 kubenswrapper[31456]: I0312 21:20:18.968786 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-9564f"] Mar 12 21:20:18.982856 master-0 kubenswrapper[31456]: I0312 21:20:18.969654 31456 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-9564f" Mar 12 21:20:18.982856 master-0 kubenswrapper[31456]: I0312 21:20:18.970180 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcnc5\" (UniqueName: \"kubernetes.io/projected/7eb04ca9-6603-41d1-b2b1-8858b953a30b-kube-api-access-fcnc5\") pod \"observability-operator-59bdc8b94-w46fm\" (UID: \"7eb04ca9-6603-41d1-b2b1-8858b953a30b\") " pod="openshift-operators/observability-operator-59bdc8b94-w46fm" Mar 12 21:20:19.017015 master-0 kubenswrapper[31456]: I0312 21:20:19.015675 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-9564f"] Mar 12 21:20:19.042939 master-0 kubenswrapper[31456]: I0312 21:20:19.042881 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nknqv\" (UniqueName: \"kubernetes.io/projected/0887fb99-34aa-4699-b395-b6b48298bd02-kube-api-access-nknqv\") pod \"perses-operator-5bf474d74f-9564f\" (UID: \"0887fb99-34aa-4699-b395-b6b48298bd02\") " pod="openshift-operators/perses-operator-5bf474d74f-9564f" Mar 12 21:20:19.043214 master-0 kubenswrapper[31456]: I0312 21:20:19.043033 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0887fb99-34aa-4699-b395-b6b48298bd02-openshift-service-ca\") pod \"perses-operator-5bf474d74f-9564f\" (UID: \"0887fb99-34aa-4699-b395-b6b48298bd02\") " pod="openshift-operators/perses-operator-5bf474d74f-9564f" Mar 12 21:20:19.145010 master-0 kubenswrapper[31456]: I0312 21:20:19.144100 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0887fb99-34aa-4699-b395-b6b48298bd02-openshift-service-ca\") pod \"perses-operator-5bf474d74f-9564f\" (UID: 
\"0887fb99-34aa-4699-b395-b6b48298bd02\") " pod="openshift-operators/perses-operator-5bf474d74f-9564f" Mar 12 21:20:19.145010 master-0 kubenswrapper[31456]: I0312 21:20:19.144193 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nknqv\" (UniqueName: \"kubernetes.io/projected/0887fb99-34aa-4699-b395-b6b48298bd02-kube-api-access-nknqv\") pod \"perses-operator-5bf474d74f-9564f\" (UID: \"0887fb99-34aa-4699-b395-b6b48298bd02\") " pod="openshift-operators/perses-operator-5bf474d74f-9564f" Mar 12 21:20:19.145542 master-0 kubenswrapper[31456]: I0312 21:20:19.145509 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0887fb99-34aa-4699-b395-b6b48298bd02-openshift-service-ca\") pod \"perses-operator-5bf474d74f-9564f\" (UID: \"0887fb99-34aa-4699-b395-b6b48298bd02\") " pod="openshift-operators/perses-operator-5bf474d74f-9564f" Mar 12 21:20:19.159102 master-0 kubenswrapper[31456]: I0312 21:20:19.158683 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-w46fm" Mar 12 21:20:19.202889 master-0 kubenswrapper[31456]: I0312 21:20:19.202763 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nknqv\" (UniqueName: \"kubernetes.io/projected/0887fb99-34aa-4699-b395-b6b48298bd02-kube-api-access-nknqv\") pod \"perses-operator-5bf474d74f-9564f\" (UID: \"0887fb99-34aa-4699-b395-b6b48298bd02\") " pod="openshift-operators/perses-operator-5bf474d74f-9564f" Mar 12 21:20:19.211385 master-0 kubenswrapper[31456]: W0312 21:20:19.211304 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ce3b723_c22e_4fed_837f_c288dd1cdd5d.slice/crio-f99b001b1ea43a56745a336c515129c736ced09b463170abf7c83d8d363d0970 WatchSource:0}: Error finding container f99b001b1ea43a56745a336c515129c736ced09b463170abf7c83d8d363d0970: Status 404 returned error can't find the container with id f99b001b1ea43a56745a336c515129c736ced09b463170abf7c83d8d363d0970 Mar 12 21:20:19.213945 master-0 kubenswrapper[31456]: I0312 21:20:19.213117 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-v4l9k"] Mar 12 21:20:19.316919 master-0 kubenswrapper[31456]: I0312 21:20:19.316525 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-9564f" Mar 12 21:20:19.390589 master-0 kubenswrapper[31456]: I0312 21:20:19.390512 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5549f5dcc9-pkcj7"] Mar 12 21:20:19.529173 master-0 kubenswrapper[31456]: I0312 21:20:19.529077 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5549f5dcc9-jvvgf"] Mar 12 21:20:19.754908 master-0 kubenswrapper[31456]: I0312 21:20:19.751118 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-w46fm"] Mar 12 21:20:19.825834 master-0 kubenswrapper[31456]: I0312 21:20:19.821822 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-w46fm" event={"ID":"7eb04ca9-6603-41d1-b2b1-8858b953a30b","Type":"ContainerStarted","Data":"c426bb75bbaaf2c71d048c8711e9136b355d887b5a5c348cf74e31c6b6925ef1"} Mar 12 21:20:19.841119 master-0 kubenswrapper[31456]: I0312 21:20:19.841046 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5549f5dcc9-pkcj7" event={"ID":"22de3a9a-836e-4f1c-91ca-aa81a2fbf1cb","Type":"ContainerStarted","Data":"3fcabcd06d48830d88d61e52c755f34af233d4ba085aef6467a80199215bc6ec"} Mar 12 21:20:19.855112 master-0 kubenswrapper[31456]: I0312 21:20:19.855036 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5549f5dcc9-jvvgf" event={"ID":"2f9c03cb-0a6e-4605-8a53-695249ae7943","Type":"ContainerStarted","Data":"c6cfffabef8e0586971c586880514c73157518e01e537724b37cc980836295de"} Mar 12 21:20:19.865071 master-0 kubenswrapper[31456]: I0312 21:20:19.863544 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v4l9k" event={"ID":"1ce3b723-c22e-4fed-837f-c288dd1cdd5d","Type":"ContainerStarted","Data":"f99b001b1ea43a56745a336c515129c736ced09b463170abf7c83d8d363d0970"} Mar 12 21:20:19.878628 master-0 kubenswrapper[31456]: I0312 21:20:19.878536 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-9564f"] Mar 12 21:20:20.911926 master-0 kubenswrapper[31456]: I0312 21:20:20.904925 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-9564f" event={"ID":"0887fb99-34aa-4699-b395-b6b48298bd02","Type":"ContainerStarted","Data":"ba72d0fff056def7de208b9b72557a73ef3b950a4ee95889b2b87e2d8c1050ed"} Mar 12 21:20:26.638902 master-0 kubenswrapper[31456]: I0312 21:20:26.638688 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-w59gc" Mar 12 21:20:28.000592 master-0 kubenswrapper[31456]: I0312 21:20:28.000262 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5549f5dcc9-jvvgf" event={"ID":"2f9c03cb-0a6e-4605-8a53-695249ae7943","Type":"ContainerStarted","Data":"99a8386f31dc5e698844e4ed129630cccb9a362061b31a5ff0ef41ce1012174e"} Mar 12 21:20:28.003041 master-0 kubenswrapper[31456]: I0312 21:20:28.002998 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v4l9k" event={"ID":"1ce3b723-c22e-4fed-837f-c288dd1cdd5d","Type":"ContainerStarted","Data":"182e32506f0712074949157f9fca0a811358a50d368446bdcdcc4c7c2ab2c3c8"} Mar 12 21:20:28.006753 master-0 kubenswrapper[31456]: I0312 21:20:28.006651 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-w46fm" 
event={"ID":"7eb04ca9-6603-41d1-b2b1-8858b953a30b","Type":"ContainerStarted","Data":"6ea3c142330baab6ffd63fd9076192a1cba996ab3896bb56dfd22f8440c844d1"} Mar 12 21:20:28.010827 master-0 kubenswrapper[31456]: I0312 21:20:28.007437 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-w46fm" Mar 12 21:20:28.010827 master-0 kubenswrapper[31456]: I0312 21:20:28.008359 31456 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-w46fm container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.128.0.126:8081/healthz\": dial tcp 10.128.0.126:8081: connect: connection refused" start-of-body= Mar 12 21:20:28.010827 master-0 kubenswrapper[31456]: I0312 21:20:28.008400 31456 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-w46fm" podUID="7eb04ca9-6603-41d1-b2b1-8858b953a30b" containerName="operator" probeResult="failure" output="Get \"http://10.128.0.126:8081/healthz\": dial tcp 10.128.0.126:8081: connect: connection refused" Mar 12 21:20:28.010827 master-0 kubenswrapper[31456]: I0312 21:20:28.009690 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5549f5dcc9-pkcj7" event={"ID":"22de3a9a-836e-4f1c-91ca-aa81a2fbf1cb","Type":"ContainerStarted","Data":"9746c6669debe275edc2ac0e7aba2d1b05b04049eb7ad4adf7a17077ec8394a9"} Mar 12 21:20:28.026832 master-0 kubenswrapper[31456]: I0312 21:20:28.024152 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-9564f" event={"ID":"0887fb99-34aa-4699-b395-b6b48298bd02","Type":"ContainerStarted","Data":"b9eb8a27a7aae68052dcf8f81ab59b0879bfee88f6d0a7db5abad5a63dd42009"} Mar 12 21:20:28.026832 master-0 kubenswrapper[31456]: I0312 21:20:28.025141 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operators/perses-operator-5bf474d74f-9564f" Mar 12 21:20:28.029827 master-0 kubenswrapper[31456]: I0312 21:20:28.026353 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5549f5dcc9-jvvgf" podStartSLOduration=2.04723592 podStartE2EDuration="10.026328068s" podCreationTimestamp="2026-03-12 21:20:18 +0000 UTC" firstStartedPulling="2026-03-12 21:20:19.531026731 +0000 UTC m=+680.605632059" lastFinishedPulling="2026-03-12 21:20:27.510118869 +0000 UTC m=+688.584724207" observedRunningTime="2026-03-12 21:20:28.021775307 +0000 UTC m=+689.096380665" watchObservedRunningTime="2026-03-12 21:20:28.026328068 +0000 UTC m=+689.100933436" Mar 12 21:20:28.065843 master-0 kubenswrapper[31456]: I0312 21:20:28.064349 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5549f5dcc9-pkcj7" podStartSLOduration=2.015835648 podStartE2EDuration="10.064329339s" podCreationTimestamp="2026-03-12 21:20:18 +0000 UTC" firstStartedPulling="2026-03-12 21:20:19.44026964 +0000 UTC m=+680.514874968" lastFinishedPulling="2026-03-12 21:20:27.488763331 +0000 UTC m=+688.563368659" observedRunningTime="2026-03-12 21:20:28.042037839 +0000 UTC m=+689.116643187" watchObservedRunningTime="2026-03-12 21:20:28.064329339 +0000 UTC m=+689.138934687" Mar 12 21:20:28.120778 master-0 kubenswrapper[31456]: I0312 21:20:28.120694 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-v4l9k" podStartSLOduration=1.843776556 podStartE2EDuration="10.120675165s" podCreationTimestamp="2026-03-12 21:20:18 +0000 UTC" firstStartedPulling="2026-03-12 21:20:19.213976883 +0000 UTC m=+680.288582211" lastFinishedPulling="2026-03-12 21:20:27.490875492 +0000 UTC m=+688.565480820" observedRunningTime="2026-03-12 21:20:28.089785906 +0000 UTC m=+689.164391234" 
watchObservedRunningTime="2026-03-12 21:20:28.120675165 +0000 UTC m=+689.195280493" Mar 12 21:20:28.148997 master-0 kubenswrapper[31456]: I0312 21:20:28.147671 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-w46fm" podStartSLOduration=2.388480705 podStartE2EDuration="10.147640559s" podCreationTimestamp="2026-03-12 21:20:18 +0000 UTC" firstStartedPulling="2026-03-12 21:20:19.79613326 +0000 UTC m=+680.870738588" lastFinishedPulling="2026-03-12 21:20:27.555293104 +0000 UTC m=+688.629898442" observedRunningTime="2026-03-12 21:20:28.130979005 +0000 UTC m=+689.205584343" watchObservedRunningTime="2026-03-12 21:20:28.147640559 +0000 UTC m=+689.222245887" Mar 12 21:20:29.035194 master-0 kubenswrapper[31456]: I0312 21:20:29.035113 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-w46fm" Mar 12 21:20:29.077200 master-0 kubenswrapper[31456]: I0312 21:20:29.077113 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-9564f" podStartSLOduration=3.455604954 podStartE2EDuration="11.077090509s" podCreationTimestamp="2026-03-12 21:20:18 +0000 UTC" firstStartedPulling="2026-03-12 21:20:19.889154356 +0000 UTC m=+680.963759684" lastFinishedPulling="2026-03-12 21:20:27.510639911 +0000 UTC m=+688.585245239" observedRunningTime="2026-03-12 21:20:28.156424762 +0000 UTC m=+689.231030090" watchObservedRunningTime="2026-03-12 21:20:29.077090509 +0000 UTC m=+690.151695837" Mar 12 21:20:35.967566 master-0 kubenswrapper[31456]: I0312 21:20:35.967511 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-68977845b8-swmpq" Mar 12 21:20:39.320790 master-0 kubenswrapper[31456]: I0312 21:20:39.320705 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operators/perses-operator-5bf474d74f-9564f" Mar 12 21:20:55.263187 master-0 kubenswrapper[31456]: I0312 21:20:55.263099 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-56948584f5-fq6pt" Mar 12 21:20:59.766838 master-0 kubenswrapper[31456]: I0312 21:20:59.764019 31456 scope.go:117] "RemoveContainer" containerID="0b060c904cf7244304798fca1e2e5fa54709b958c12481b7403d731a220633b8" Mar 12 21:21:02.410828 master-0 kubenswrapper[31456]: I0312 21:21:02.410296 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-h299r"] Mar 12 21:21:02.416819 master-0 kubenswrapper[31456]: I0312 21:21:02.412427 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-h299r" Mar 12 21:21:02.428832 master-0 kubenswrapper[31456]: I0312 21:21:02.423467 31456 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 12 21:21:02.435824 master-0 kubenswrapper[31456]: I0312 21:21:02.429467 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-h299r"] Mar 12 21:21:02.466827 master-0 kubenswrapper[31456]: I0312 21:21:02.461551 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-82tkh"] Mar 12 21:21:02.466827 master-0 kubenswrapper[31456]: I0312 21:21:02.466721 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-82tkh" Mar 12 21:21:02.487826 master-0 kubenswrapper[31456]: I0312 21:21:02.470121 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 12 21:21:02.487826 master-0 kubenswrapper[31456]: I0312 21:21:02.470328 31456 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 12 21:21:02.557828 master-0 kubenswrapper[31456]: I0312 21:21:02.548798 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-94r48"] Mar 12 21:21:02.557828 master-0 kubenswrapper[31456]: I0312 21:21:02.552043 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-94r48" Mar 12 21:21:02.557828 master-0 kubenswrapper[31456]: I0312 21:21:02.555367 31456 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 12 21:21:02.557828 master-0 kubenswrapper[31456]: I0312 21:21:02.556127 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 12 21:21:02.557828 master-0 kubenswrapper[31456]: I0312 21:21:02.556316 31456 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 12 21:21:02.563827 master-0 kubenswrapper[31456]: I0312 21:21:02.560994 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-7bb4cc7c98-vpwbb"] Mar 12 21:21:02.563827 master-0 kubenswrapper[31456]: I0312 21:21:02.563516 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-vpwbb" Mar 12 21:21:02.566179 master-0 kubenswrapper[31456]: I0312 21:21:02.566112 31456 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 12 21:21:02.572520 master-0 kubenswrapper[31456]: I0312 21:21:02.571160 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-vpwbb"] Mar 12 21:21:02.586981 master-0 kubenswrapper[31456]: I0312 21:21:02.582044 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d603b656-2e01-46dd-ac33-a148ec4f0bf3-metrics\") pod \"frr-k8s-82tkh\" (UID: \"d603b656-2e01-46dd-ac33-a148ec4f0bf3\") " pod="metallb-system/frr-k8s-82tkh" Mar 12 21:21:02.586981 master-0 kubenswrapper[31456]: I0312 21:21:02.582123 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d603b656-2e01-46dd-ac33-a148ec4f0bf3-frr-sockets\") pod \"frr-k8s-82tkh\" (UID: \"d603b656-2e01-46dd-ac33-a148ec4f0bf3\") " pod="metallb-system/frr-k8s-82tkh" Mar 12 21:21:02.586981 master-0 kubenswrapper[31456]: I0312 21:21:02.582269 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6shm\" (UniqueName: \"kubernetes.io/projected/d603b656-2e01-46dd-ac33-a148ec4f0bf3-kube-api-access-c6shm\") pod \"frr-k8s-82tkh\" (UID: \"d603b656-2e01-46dd-ac33-a148ec4f0bf3\") " pod="metallb-system/frr-k8s-82tkh" Mar 12 21:21:02.586981 master-0 kubenswrapper[31456]: I0312 21:21:02.582323 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/afd19539-2c72-4f92-b25c-a1502472b3c8-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-h299r\" (UID: \"afd19539-2c72-4f92-b25c-a1502472b3c8\") " 
pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-h299r" Mar 12 21:21:02.586981 master-0 kubenswrapper[31456]: I0312 21:21:02.582367 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6vkv\" (UniqueName: \"kubernetes.io/projected/afd19539-2c72-4f92-b25c-a1502472b3c8-kube-api-access-g6vkv\") pod \"frr-k8s-webhook-server-bcc4b6f68-h299r\" (UID: \"afd19539-2c72-4f92-b25c-a1502472b3c8\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-h299r" Mar 12 21:21:02.586981 master-0 kubenswrapper[31456]: I0312 21:21:02.582422 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d603b656-2e01-46dd-ac33-a148ec4f0bf3-reloader\") pod \"frr-k8s-82tkh\" (UID: \"d603b656-2e01-46dd-ac33-a148ec4f0bf3\") " pod="metallb-system/frr-k8s-82tkh" Mar 12 21:21:02.586981 master-0 kubenswrapper[31456]: I0312 21:21:02.582453 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d603b656-2e01-46dd-ac33-a148ec4f0bf3-frr-conf\") pod \"frr-k8s-82tkh\" (UID: \"d603b656-2e01-46dd-ac33-a148ec4f0bf3\") " pod="metallb-system/frr-k8s-82tkh" Mar 12 21:21:02.586981 master-0 kubenswrapper[31456]: I0312 21:21:02.582692 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d603b656-2e01-46dd-ac33-a148ec4f0bf3-metrics-certs\") pod \"frr-k8s-82tkh\" (UID: \"d603b656-2e01-46dd-ac33-a148ec4f0bf3\") " pod="metallb-system/frr-k8s-82tkh" Mar 12 21:21:02.586981 master-0 kubenswrapper[31456]: I0312 21:21:02.582714 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d603b656-2e01-46dd-ac33-a148ec4f0bf3-frr-startup\") pod \"frr-k8s-82tkh\" (UID: 
\"d603b656-2e01-46dd-ac33-a148ec4f0bf3\") " pod="metallb-system/frr-k8s-82tkh" Mar 12 21:21:02.684131 master-0 kubenswrapper[31456]: I0312 21:21:02.683994 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/56de1926-b04f-4c4f-b247-ca3f1c0303e3-metrics-certs\") pod \"controller-7bb4cc7c98-vpwbb\" (UID: \"56de1926-b04f-4c4f-b247-ca3f1c0303e3\") " pod="metallb-system/controller-7bb4cc7c98-vpwbb" Mar 12 21:21:02.684131 master-0 kubenswrapper[31456]: I0312 21:21:02.684101 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bf8a11b3-a328-47f4-8dc6-6b3dad8d256d-memberlist\") pod \"speaker-94r48\" (UID: \"bf8a11b3-a328-47f4-8dc6-6b3dad8d256d\") " pod="metallb-system/speaker-94r48" Mar 12 21:21:02.684131 master-0 kubenswrapper[31456]: I0312 21:21:02.684132 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d603b656-2e01-46dd-ac33-a148ec4f0bf3-metrics-certs\") pod \"frr-k8s-82tkh\" (UID: \"d603b656-2e01-46dd-ac33-a148ec4f0bf3\") " pod="metallb-system/frr-k8s-82tkh" Mar 12 21:21:02.684417 master-0 kubenswrapper[31456]: I0312 21:21:02.684151 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfhps\" (UniqueName: \"kubernetes.io/projected/bf8a11b3-a328-47f4-8dc6-6b3dad8d256d-kube-api-access-qfhps\") pod \"speaker-94r48\" (UID: \"bf8a11b3-a328-47f4-8dc6-6b3dad8d256d\") " pod="metallb-system/speaker-94r48" Mar 12 21:21:02.684417 master-0 kubenswrapper[31456]: I0312 21:21:02.684174 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d603b656-2e01-46dd-ac33-a148ec4f0bf3-frr-startup\") pod \"frr-k8s-82tkh\" (UID: 
\"d603b656-2e01-46dd-ac33-a148ec4f0bf3\") " pod="metallb-system/frr-k8s-82tkh" Mar 12 21:21:02.684417 master-0 kubenswrapper[31456]: I0312 21:21:02.684188 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/56de1926-b04f-4c4f-b247-ca3f1c0303e3-cert\") pod \"controller-7bb4cc7c98-vpwbb\" (UID: \"56de1926-b04f-4c4f-b247-ca3f1c0303e3\") " pod="metallb-system/controller-7bb4cc7c98-vpwbb" Mar 12 21:21:02.684741 master-0 kubenswrapper[31456]: I0312 21:21:02.684671 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d603b656-2e01-46dd-ac33-a148ec4f0bf3-metrics\") pod \"frr-k8s-82tkh\" (UID: \"d603b656-2e01-46dd-ac33-a148ec4f0bf3\") " pod="metallb-system/frr-k8s-82tkh" Mar 12 21:21:02.684870 master-0 kubenswrapper[31456]: I0312 21:21:02.684837 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d603b656-2e01-46dd-ac33-a148ec4f0bf3-frr-sockets\") pod \"frr-k8s-82tkh\" (UID: \"d603b656-2e01-46dd-ac33-a148ec4f0bf3\") " pod="metallb-system/frr-k8s-82tkh" Mar 12 21:21:02.685058 master-0 kubenswrapper[31456]: I0312 21:21:02.685030 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/bf8a11b3-a328-47f4-8dc6-6b3dad8d256d-metallb-excludel2\") pod \"speaker-94r48\" (UID: \"bf8a11b3-a328-47f4-8dc6-6b3dad8d256d\") " pod="metallb-system/speaker-94r48" Mar 12 21:21:02.685129 master-0 kubenswrapper[31456]: I0312 21:21:02.685088 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d603b656-2e01-46dd-ac33-a148ec4f0bf3-metrics\") pod \"frr-k8s-82tkh\" (UID: \"d603b656-2e01-46dd-ac33-a148ec4f0bf3\") " pod="metallb-system/frr-k8s-82tkh" Mar 12 21:21:02.685129 
master-0 kubenswrapper[31456]: I0312 21:21:02.685117 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6shm\" (UniqueName: \"kubernetes.io/projected/d603b656-2e01-46dd-ac33-a148ec4f0bf3-kube-api-access-c6shm\") pod \"frr-k8s-82tkh\" (UID: \"d603b656-2e01-46dd-ac33-a148ec4f0bf3\") " pod="metallb-system/frr-k8s-82tkh" Mar 12 21:21:02.685294 master-0 kubenswrapper[31456]: I0312 21:21:02.685268 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/afd19539-2c72-4f92-b25c-a1502472b3c8-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-h299r\" (UID: \"afd19539-2c72-4f92-b25c-a1502472b3c8\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-h299r" Mar 12 21:21:02.685342 master-0 kubenswrapper[31456]: I0312 21:21:02.685267 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d603b656-2e01-46dd-ac33-a148ec4f0bf3-frr-sockets\") pod \"frr-k8s-82tkh\" (UID: \"d603b656-2e01-46dd-ac33-a148ec4f0bf3\") " pod="metallb-system/frr-k8s-82tkh" Mar 12 21:21:02.685376 master-0 kubenswrapper[31456]: I0312 21:21:02.685324 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf8a11b3-a328-47f4-8dc6-6b3dad8d256d-metrics-certs\") pod \"speaker-94r48\" (UID: \"bf8a11b3-a328-47f4-8dc6-6b3dad8d256d\") " pod="metallb-system/speaker-94r48" Mar 12 21:21:02.685429 master-0 kubenswrapper[31456]: I0312 21:21:02.685404 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6vkv\" (UniqueName: \"kubernetes.io/projected/afd19539-2c72-4f92-b25c-a1502472b3c8-kube-api-access-g6vkv\") pod \"frr-k8s-webhook-server-bcc4b6f68-h299r\" (UID: \"afd19539-2c72-4f92-b25c-a1502472b3c8\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-h299r" Mar 12 21:21:02.685469 
master-0 kubenswrapper[31456]: I0312 21:21:02.685449 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmwnr\" (UniqueName: \"kubernetes.io/projected/56de1926-b04f-4c4f-b247-ca3f1c0303e3-kube-api-access-cmwnr\") pod \"controller-7bb4cc7c98-vpwbb\" (UID: \"56de1926-b04f-4c4f-b247-ca3f1c0303e3\") " pod="metallb-system/controller-7bb4cc7c98-vpwbb" Mar 12 21:21:02.685502 master-0 kubenswrapper[31456]: I0312 21:21:02.685470 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d603b656-2e01-46dd-ac33-a148ec4f0bf3-reloader\") pod \"frr-k8s-82tkh\" (UID: \"d603b656-2e01-46dd-ac33-a148ec4f0bf3\") " pod="metallb-system/frr-k8s-82tkh" Mar 12 21:21:02.685541 master-0 kubenswrapper[31456]: I0312 21:21:02.685514 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d603b656-2e01-46dd-ac33-a148ec4f0bf3-frr-startup\") pod \"frr-k8s-82tkh\" (UID: \"d603b656-2e01-46dd-ac33-a148ec4f0bf3\") " pod="metallb-system/frr-k8s-82tkh" Mar 12 21:21:02.685579 master-0 kubenswrapper[31456]: I0312 21:21:02.685560 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d603b656-2e01-46dd-ac33-a148ec4f0bf3-frr-conf\") pod \"frr-k8s-82tkh\" (UID: \"d603b656-2e01-46dd-ac33-a148ec4f0bf3\") " pod="metallb-system/frr-k8s-82tkh" Mar 12 21:21:02.685743 master-0 kubenswrapper[31456]: I0312 21:21:02.685716 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d603b656-2e01-46dd-ac33-a148ec4f0bf3-reloader\") pod \"frr-k8s-82tkh\" (UID: \"d603b656-2e01-46dd-ac33-a148ec4f0bf3\") " pod="metallb-system/frr-k8s-82tkh" Mar 12 21:21:02.688153 master-0 kubenswrapper[31456]: I0312 21:21:02.688101 31456 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d603b656-2e01-46dd-ac33-a148ec4f0bf3-frr-conf\") pod \"frr-k8s-82tkh\" (UID: \"d603b656-2e01-46dd-ac33-a148ec4f0bf3\") " pod="metallb-system/frr-k8s-82tkh" Mar 12 21:21:02.688676 master-0 kubenswrapper[31456]: I0312 21:21:02.688650 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d603b656-2e01-46dd-ac33-a148ec4f0bf3-metrics-certs\") pod \"frr-k8s-82tkh\" (UID: \"d603b656-2e01-46dd-ac33-a148ec4f0bf3\") " pod="metallb-system/frr-k8s-82tkh" Mar 12 21:21:02.689995 master-0 kubenswrapper[31456]: I0312 21:21:02.689958 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/afd19539-2c72-4f92-b25c-a1502472b3c8-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-h299r\" (UID: \"afd19539-2c72-4f92-b25c-a1502472b3c8\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-h299r" Mar 12 21:21:02.701254 master-0 kubenswrapper[31456]: I0312 21:21:02.701219 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6shm\" (UniqueName: \"kubernetes.io/projected/d603b656-2e01-46dd-ac33-a148ec4f0bf3-kube-api-access-c6shm\") pod \"frr-k8s-82tkh\" (UID: \"d603b656-2e01-46dd-ac33-a148ec4f0bf3\") " pod="metallb-system/frr-k8s-82tkh" Mar 12 21:21:02.705528 master-0 kubenswrapper[31456]: I0312 21:21:02.705480 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6vkv\" (UniqueName: \"kubernetes.io/projected/afd19539-2c72-4f92-b25c-a1502472b3c8-kube-api-access-g6vkv\") pod \"frr-k8s-webhook-server-bcc4b6f68-h299r\" (UID: \"afd19539-2c72-4f92-b25c-a1502472b3c8\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-h299r" Mar 12 21:21:02.741263 master-0 kubenswrapper[31456]: I0312 21:21:02.741202 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-h299r" Mar 12 21:21:02.787393 master-0 kubenswrapper[31456]: I0312 21:21:02.786656 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmwnr\" (UniqueName: \"kubernetes.io/projected/56de1926-b04f-4c4f-b247-ca3f1c0303e3-kube-api-access-cmwnr\") pod \"controller-7bb4cc7c98-vpwbb\" (UID: \"56de1926-b04f-4c4f-b247-ca3f1c0303e3\") " pod="metallb-system/controller-7bb4cc7c98-vpwbb" Mar 12 21:21:02.787393 master-0 kubenswrapper[31456]: I0312 21:21:02.786733 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/56de1926-b04f-4c4f-b247-ca3f1c0303e3-metrics-certs\") pod \"controller-7bb4cc7c98-vpwbb\" (UID: \"56de1926-b04f-4c4f-b247-ca3f1c0303e3\") " pod="metallb-system/controller-7bb4cc7c98-vpwbb" Mar 12 21:21:02.787393 master-0 kubenswrapper[31456]: I0312 21:21:02.786792 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bf8a11b3-a328-47f4-8dc6-6b3dad8d256d-memberlist\") pod \"speaker-94r48\" (UID: \"bf8a11b3-a328-47f4-8dc6-6b3dad8d256d\") " pod="metallb-system/speaker-94r48" Mar 12 21:21:02.787393 master-0 kubenswrapper[31456]: I0312 21:21:02.786824 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfhps\" (UniqueName: \"kubernetes.io/projected/bf8a11b3-a328-47f4-8dc6-6b3dad8d256d-kube-api-access-qfhps\") pod \"speaker-94r48\" (UID: \"bf8a11b3-a328-47f4-8dc6-6b3dad8d256d\") " pod="metallb-system/speaker-94r48" Mar 12 21:21:02.787393 master-0 kubenswrapper[31456]: I0312 21:21:02.786840 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/56de1926-b04f-4c4f-b247-ca3f1c0303e3-cert\") pod \"controller-7bb4cc7c98-vpwbb\" (UID: \"56de1926-b04f-4c4f-b247-ca3f1c0303e3\") " 
pod="metallb-system/controller-7bb4cc7c98-vpwbb" Mar 12 21:21:02.787393 master-0 kubenswrapper[31456]: I0312 21:21:02.786871 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/bf8a11b3-a328-47f4-8dc6-6b3dad8d256d-metallb-excludel2\") pod \"speaker-94r48\" (UID: \"bf8a11b3-a328-47f4-8dc6-6b3dad8d256d\") " pod="metallb-system/speaker-94r48" Mar 12 21:21:02.787393 master-0 kubenswrapper[31456]: I0312 21:21:02.786893 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf8a11b3-a328-47f4-8dc6-6b3dad8d256d-metrics-certs\") pod \"speaker-94r48\" (UID: \"bf8a11b3-a328-47f4-8dc6-6b3dad8d256d\") " pod="metallb-system/speaker-94r48" Mar 12 21:21:02.788001 master-0 kubenswrapper[31456]: I0312 21:21:02.787979 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/bf8a11b3-a328-47f4-8dc6-6b3dad8d256d-metallb-excludel2\") pod \"speaker-94r48\" (UID: \"bf8a11b3-a328-47f4-8dc6-6b3dad8d256d\") " pod="metallb-system/speaker-94r48" Mar 12 21:21:02.788429 master-0 kubenswrapper[31456]: E0312 21:21:02.788391 31456 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 12 21:21:02.788504 master-0 kubenswrapper[31456]: E0312 21:21:02.788437 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf8a11b3-a328-47f4-8dc6-6b3dad8d256d-memberlist podName:bf8a11b3-a328-47f4-8dc6-6b3dad8d256d nodeName:}" failed. No retries permitted until 2026-03-12 21:21:03.288424008 +0000 UTC m=+724.363029336 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/bf8a11b3-a328-47f4-8dc6-6b3dad8d256d-memberlist") pod "speaker-94r48" (UID: "bf8a11b3-a328-47f4-8dc6-6b3dad8d256d") : secret "metallb-memberlist" not found Mar 12 21:21:02.794737 master-0 kubenswrapper[31456]: I0312 21:21:02.792561 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/56de1926-b04f-4c4f-b247-ca3f1c0303e3-metrics-certs\") pod \"controller-7bb4cc7c98-vpwbb\" (UID: \"56de1926-b04f-4c4f-b247-ca3f1c0303e3\") " pod="metallb-system/controller-7bb4cc7c98-vpwbb" Mar 12 21:21:02.794737 master-0 kubenswrapper[31456]: I0312 21:21:02.793261 31456 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 12 21:21:02.794737 master-0 kubenswrapper[31456]: I0312 21:21:02.793335 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf8a11b3-a328-47f4-8dc6-6b3dad8d256d-metrics-certs\") pod \"speaker-94r48\" (UID: \"bf8a11b3-a328-47f4-8dc6-6b3dad8d256d\") " pod="metallb-system/speaker-94r48" Mar 12 21:21:02.800413 master-0 kubenswrapper[31456]: I0312 21:21:02.800383 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/56de1926-b04f-4c4f-b247-ca3f1c0303e3-cert\") pod \"controller-7bb4cc7c98-vpwbb\" (UID: \"56de1926-b04f-4c4f-b247-ca3f1c0303e3\") " pod="metallb-system/controller-7bb4cc7c98-vpwbb" Mar 12 21:21:02.802535 master-0 kubenswrapper[31456]: I0312 21:21:02.802489 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmwnr\" (UniqueName: \"kubernetes.io/projected/56de1926-b04f-4c4f-b247-ca3f1c0303e3-kube-api-access-cmwnr\") pod \"controller-7bb4cc7c98-vpwbb\" (UID: \"56de1926-b04f-4c4f-b247-ca3f1c0303e3\") " pod="metallb-system/controller-7bb4cc7c98-vpwbb" Mar 12 21:21:02.805836 
master-0 kubenswrapper[31456]: I0312 21:21:02.805026 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfhps\" (UniqueName: \"kubernetes.io/projected/bf8a11b3-a328-47f4-8dc6-6b3dad8d256d-kube-api-access-qfhps\") pod \"speaker-94r48\" (UID: \"bf8a11b3-a328-47f4-8dc6-6b3dad8d256d\") " pod="metallb-system/speaker-94r48" Mar 12 21:21:02.851431 master-0 kubenswrapper[31456]: I0312 21:21:02.851379 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-82tkh" Mar 12 21:21:02.930752 master-0 kubenswrapper[31456]: I0312 21:21:02.930699 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-vpwbb" Mar 12 21:21:02.973423 master-0 kubenswrapper[31456]: I0312 21:21:02.973383 31456 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 21:21:03.164863 master-0 kubenswrapper[31456]: I0312 21:21:03.164775 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-h299r"] Mar 12 21:21:03.307172 master-0 kubenswrapper[31456]: I0312 21:21:03.307026 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bf8a11b3-a328-47f4-8dc6-6b3dad8d256d-memberlist\") pod \"speaker-94r48\" (UID: \"bf8a11b3-a328-47f4-8dc6-6b3dad8d256d\") " pod="metallb-system/speaker-94r48" Mar 12 21:21:03.307172 master-0 kubenswrapper[31456]: E0312 21:21:03.307154 31456 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 12 21:21:03.307424 master-0 kubenswrapper[31456]: E0312 21:21:03.307220 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf8a11b3-a328-47f4-8dc6-6b3dad8d256d-memberlist podName:bf8a11b3-a328-47f4-8dc6-6b3dad8d256d nodeName:}" failed. 
No retries permitted until 2026-03-12 21:21:04.307194109 +0000 UTC m=+725.381799437 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/bf8a11b3-a328-47f4-8dc6-6b3dad8d256d-memberlist") pod "speaker-94r48" (UID: "bf8a11b3-a328-47f4-8dc6-6b3dad8d256d") : secret "metallb-memberlist" not found Mar 12 21:21:03.388114 master-0 kubenswrapper[31456]: I0312 21:21:03.388046 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-82tkh" event={"ID":"d603b656-2e01-46dd-ac33-a148ec4f0bf3","Type":"ContainerStarted","Data":"1fcdd177c1d5e19770d326b7a6bf64a81aaefec3c0bf7eeb101f99fb96220661"} Mar 12 21:21:03.390475 master-0 kubenswrapper[31456]: I0312 21:21:03.390442 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-h299r" event={"ID":"afd19539-2c72-4f92-b25c-a1502472b3c8","Type":"ContainerStarted","Data":"bb243ad205c4b9cf3672ba09a189079a9a72b994164ca4a018a64fff1883243d"} Mar 12 21:21:03.427050 master-0 kubenswrapper[31456]: I0312 21:21:03.426987 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-vpwbb"] Mar 12 21:21:03.430548 master-0 kubenswrapper[31456]: W0312 21:21:03.430475 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56de1926_b04f_4c4f_b247_ca3f1c0303e3.slice/crio-c2a8b8aef03e40aeb6e1610b8a7052faea1ab5ae9e7f3111cbdceccba086839d WatchSource:0}: Error finding container c2a8b8aef03e40aeb6e1610b8a7052faea1ab5ae9e7f3111cbdceccba086839d: Status 404 returned error can't find the container with id c2a8b8aef03e40aeb6e1610b8a7052faea1ab5ae9e7f3111cbdceccba086839d Mar 12 21:21:04.324397 master-0 kubenswrapper[31456]: I0312 21:21:04.324335 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/bf8a11b3-a328-47f4-8dc6-6b3dad8d256d-memberlist\") pod \"speaker-94r48\" (UID: \"bf8a11b3-a328-47f4-8dc6-6b3dad8d256d\") " pod="metallb-system/speaker-94r48" Mar 12 21:21:04.327801 master-0 kubenswrapper[31456]: I0312 21:21:04.327759 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bf8a11b3-a328-47f4-8dc6-6b3dad8d256d-memberlist\") pod \"speaker-94r48\" (UID: \"bf8a11b3-a328-47f4-8dc6-6b3dad8d256d\") " pod="metallb-system/speaker-94r48" Mar 12 21:21:04.399611 master-0 kubenswrapper[31456]: I0312 21:21:04.399562 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-vpwbb" event={"ID":"56de1926-b04f-4c4f-b247-ca3f1c0303e3","Type":"ContainerStarted","Data":"72538e2c66ca3fde01beed2f9cfc266c3e1f8da95828d27098596e984837d163"} Mar 12 21:21:04.399611 master-0 kubenswrapper[31456]: I0312 21:21:04.399611 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-vpwbb" event={"ID":"56de1926-b04f-4c4f-b247-ca3f1c0303e3","Type":"ContainerStarted","Data":"c2a8b8aef03e40aeb6e1610b8a7052faea1ab5ae9e7f3111cbdceccba086839d"} Mar 12 21:21:04.421821 master-0 kubenswrapper[31456]: I0312 21:21:04.421755 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-94r48" Mar 12 21:21:04.445536 master-0 kubenswrapper[31456]: W0312 21:21:04.445474 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf8a11b3_a328_47f4_8dc6_6b3dad8d256d.slice/crio-814cdc2ac932cc09cb1a59d57cab5d2c52613c979d1674e55aa6ebd82017457d WatchSource:0}: Error finding container 814cdc2ac932cc09cb1a59d57cab5d2c52613c979d1674e55aa6ebd82017457d: Status 404 returned error can't find the container with id 814cdc2ac932cc09cb1a59d57cab5d2c52613c979d1674e55aa6ebd82017457d Mar 12 21:21:04.524103 master-0 kubenswrapper[31456]: I0312 21:21:04.524033 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-qkxb5"] Mar 12 21:21:04.525722 master-0 kubenswrapper[31456]: I0312 21:21:04.525681 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-qkxb5" Mar 12 21:21:04.529414 master-0 kubenswrapper[31456]: I0312 21:21:04.529384 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Mar 12 21:21:04.539774 master-0 kubenswrapper[31456]: I0312 21:21:04.539709 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-r62tq"] Mar 12 21:21:04.542205 master-0 kubenswrapper[31456]: I0312 21:21:04.542172 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-r62tq" Mar 12 21:21:04.552147 master-0 kubenswrapper[31456]: I0312 21:21:04.552096 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-qkxb5"] Mar 12 21:21:04.558241 master-0 kubenswrapper[31456]: I0312 21:21:04.558202 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-r62tq"] Mar 12 21:21:04.637423 master-0 kubenswrapper[31456]: I0312 21:21:04.635908 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-4srzm"] Mar 12 21:21:04.640320 master-0 kubenswrapper[31456]: I0312 21:21:04.639780 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-4srzm" Mar 12 21:21:04.645869 master-0 kubenswrapper[31456]: I0312 21:21:04.645531 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vjx8\" (UniqueName: \"kubernetes.io/projected/92921db3-3f77-4a0a-bbe4-9d0cd9307179-kube-api-access-7vjx8\") pod \"nmstate-webhook-5f558f5558-qkxb5\" (UID: \"92921db3-3f77-4a0a-bbe4-9d0cd9307179\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-qkxb5" Mar 12 21:21:04.645869 master-0 kubenswrapper[31456]: I0312 21:21:04.645678 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/92921db3-3f77-4a0a-bbe4-9d0cd9307179-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-qkxb5\" (UID: \"92921db3-3f77-4a0a-bbe4-9d0cd9307179\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-qkxb5" Mar 12 21:21:04.645869 master-0 kubenswrapper[31456]: I0312 21:21:04.645767 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwwkt\" (UniqueName: 
\"kubernetes.io/projected/34b80dbe-9eae-4059-8281-ea9e07b27d9a-kube-api-access-pwwkt\") pod \"nmstate-metrics-9b8c8685d-r62tq\" (UID: \"34b80dbe-9eae-4059-8281-ea9e07b27d9a\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-r62tq" Mar 12 21:21:04.748838 master-0 kubenswrapper[31456]: I0312 21:21:04.744411 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-t9hmr"] Mar 12 21:21:04.748838 master-0 kubenswrapper[31456]: I0312 21:21:04.745606 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-t9hmr" Mar 12 21:21:04.758922 master-0 kubenswrapper[31456]: I0312 21:21:04.755488 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Mar 12 21:21:04.758922 master-0 kubenswrapper[31456]: I0312 21:21:04.756244 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Mar 12 21:21:04.758922 master-0 kubenswrapper[31456]: I0312 21:21:04.757252 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vjx8\" (UniqueName: \"kubernetes.io/projected/92921db3-3f77-4a0a-bbe4-9d0cd9307179-kube-api-access-7vjx8\") pod \"nmstate-webhook-5f558f5558-qkxb5\" (UID: \"92921db3-3f77-4a0a-bbe4-9d0cd9307179\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-qkxb5" Mar 12 21:21:04.758922 master-0 kubenswrapper[31456]: I0312 21:21:04.757313 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk797\" (UniqueName: \"kubernetes.io/projected/87669fe7-f0c0-486c-816f-28d8198804ea-kube-api-access-wk797\") pod \"nmstate-handler-4srzm\" (UID: \"87669fe7-f0c0-486c-816f-28d8198804ea\") " pod="openshift-nmstate/nmstate-handler-4srzm" Mar 12 21:21:04.758922 master-0 kubenswrapper[31456]: I0312 21:21:04.757340 31456 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4g8s\" (UniqueName: \"kubernetes.io/projected/b620aec4-7b8c-4070-9ca5-035d00cec8f2-kube-api-access-x4g8s\") pod \"nmstate-console-plugin-86f58fcf4-t9hmr\" (UID: \"b620aec4-7b8c-4070-9ca5-035d00cec8f2\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-t9hmr" Mar 12 21:21:04.758922 master-0 kubenswrapper[31456]: I0312 21:21:04.757386 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/92921db3-3f77-4a0a-bbe4-9d0cd9307179-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-qkxb5\" (UID: \"92921db3-3f77-4a0a-bbe4-9d0cd9307179\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-qkxb5" Mar 12 21:21:04.758922 master-0 kubenswrapper[31456]: I0312 21:21:04.757429 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwwkt\" (UniqueName: \"kubernetes.io/projected/34b80dbe-9eae-4059-8281-ea9e07b27d9a-kube-api-access-pwwkt\") pod \"nmstate-metrics-9b8c8685d-r62tq\" (UID: \"34b80dbe-9eae-4059-8281-ea9e07b27d9a\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-r62tq" Mar 12 21:21:04.758922 master-0 kubenswrapper[31456]: I0312 21:21:04.757465 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b620aec4-7b8c-4070-9ca5-035d00cec8f2-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-t9hmr\" (UID: \"b620aec4-7b8c-4070-9ca5-035d00cec8f2\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-t9hmr" Mar 12 21:21:04.758922 master-0 kubenswrapper[31456]: I0312 21:21:04.757494 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/87669fe7-f0c0-486c-816f-28d8198804ea-dbus-socket\") pod \"nmstate-handler-4srzm\" (UID: 
\"87669fe7-f0c0-486c-816f-28d8198804ea\") " pod="openshift-nmstate/nmstate-handler-4srzm" Mar 12 21:21:04.758922 master-0 kubenswrapper[31456]: I0312 21:21:04.757528 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/87669fe7-f0c0-486c-816f-28d8198804ea-nmstate-lock\") pod \"nmstate-handler-4srzm\" (UID: \"87669fe7-f0c0-486c-816f-28d8198804ea\") " pod="openshift-nmstate/nmstate-handler-4srzm" Mar 12 21:21:04.758922 master-0 kubenswrapper[31456]: I0312 21:21:04.757564 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b620aec4-7b8c-4070-9ca5-035d00cec8f2-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-t9hmr\" (UID: \"b620aec4-7b8c-4070-9ca5-035d00cec8f2\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-t9hmr" Mar 12 21:21:04.758922 master-0 kubenswrapper[31456]: I0312 21:21:04.757603 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/87669fe7-f0c0-486c-816f-28d8198804ea-ovs-socket\") pod \"nmstate-handler-4srzm\" (UID: \"87669fe7-f0c0-486c-816f-28d8198804ea\") " pod="openshift-nmstate/nmstate-handler-4srzm" Mar 12 21:21:04.758922 master-0 kubenswrapper[31456]: E0312 21:21:04.758500 31456 secret.go:189] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Mar 12 21:21:04.758922 master-0 kubenswrapper[31456]: E0312 21:21:04.758565 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92921db3-3f77-4a0a-bbe4-9d0cd9307179-tls-key-pair podName:92921db3-3f77-4a0a-bbe4-9d0cd9307179 nodeName:}" failed. No retries permitted until 2026-03-12 21:21:05.258546604 +0000 UTC m=+726.333151932 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/92921db3-3f77-4a0a-bbe4-9d0cd9307179-tls-key-pair") pod "nmstate-webhook-5f558f5558-qkxb5" (UID: "92921db3-3f77-4a0a-bbe4-9d0cd9307179") : secret "openshift-nmstate-webhook" not found Mar 12 21:21:04.774605 master-0 kubenswrapper[31456]: I0312 21:21:04.764252 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-t9hmr"] Mar 12 21:21:04.790491 master-0 kubenswrapper[31456]: I0312 21:21:04.790441 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vjx8\" (UniqueName: \"kubernetes.io/projected/92921db3-3f77-4a0a-bbe4-9d0cd9307179-kube-api-access-7vjx8\") pod \"nmstate-webhook-5f558f5558-qkxb5\" (UID: \"92921db3-3f77-4a0a-bbe4-9d0cd9307179\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-qkxb5" Mar 12 21:21:04.791211 master-0 kubenswrapper[31456]: I0312 21:21:04.791170 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwwkt\" (UniqueName: \"kubernetes.io/projected/34b80dbe-9eae-4059-8281-ea9e07b27d9a-kube-api-access-pwwkt\") pod \"nmstate-metrics-9b8c8685d-r62tq\" (UID: \"34b80dbe-9eae-4059-8281-ea9e07b27d9a\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-r62tq" Mar 12 21:21:04.859092 master-0 kubenswrapper[31456]: I0312 21:21:04.859017 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk797\" (UniqueName: \"kubernetes.io/projected/87669fe7-f0c0-486c-816f-28d8198804ea-kube-api-access-wk797\") pod \"nmstate-handler-4srzm\" (UID: \"87669fe7-f0c0-486c-816f-28d8198804ea\") " pod="openshift-nmstate/nmstate-handler-4srzm" Mar 12 21:21:04.859092 master-0 kubenswrapper[31456]: I0312 21:21:04.859083 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4g8s\" (UniqueName: 
\"kubernetes.io/projected/b620aec4-7b8c-4070-9ca5-035d00cec8f2-kube-api-access-x4g8s\") pod \"nmstate-console-plugin-86f58fcf4-t9hmr\" (UID: \"b620aec4-7b8c-4070-9ca5-035d00cec8f2\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-t9hmr" Mar 12 21:21:04.859627 master-0 kubenswrapper[31456]: I0312 21:21:04.859551 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b620aec4-7b8c-4070-9ca5-035d00cec8f2-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-t9hmr\" (UID: \"b620aec4-7b8c-4070-9ca5-035d00cec8f2\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-t9hmr" Mar 12 21:21:04.859627 master-0 kubenswrapper[31456]: I0312 21:21:04.859605 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/87669fe7-f0c0-486c-816f-28d8198804ea-dbus-socket\") pod \"nmstate-handler-4srzm\" (UID: \"87669fe7-f0c0-486c-816f-28d8198804ea\") " pod="openshift-nmstate/nmstate-handler-4srzm" Mar 12 21:21:04.859730 master-0 kubenswrapper[31456]: I0312 21:21:04.859662 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/87669fe7-f0c0-486c-816f-28d8198804ea-nmstate-lock\") pod \"nmstate-handler-4srzm\" (UID: \"87669fe7-f0c0-486c-816f-28d8198804ea\") " pod="openshift-nmstate/nmstate-handler-4srzm" Mar 12 21:21:04.859730 master-0 kubenswrapper[31456]: I0312 21:21:04.859720 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b620aec4-7b8c-4070-9ca5-035d00cec8f2-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-t9hmr\" (UID: \"b620aec4-7b8c-4070-9ca5-035d00cec8f2\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-t9hmr" Mar 12 21:21:04.859836 master-0 kubenswrapper[31456]: I0312 21:21:04.859794 31456 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/87669fe7-f0c0-486c-816f-28d8198804ea-ovs-socket\") pod \"nmstate-handler-4srzm\" (UID: \"87669fe7-f0c0-486c-816f-28d8198804ea\") " pod="openshift-nmstate/nmstate-handler-4srzm" Mar 12 21:21:04.860076 master-0 kubenswrapper[31456]: I0312 21:21:04.860056 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/87669fe7-f0c0-486c-816f-28d8198804ea-ovs-socket\") pod \"nmstate-handler-4srzm\" (UID: \"87669fe7-f0c0-486c-816f-28d8198804ea\") " pod="openshift-nmstate/nmstate-handler-4srzm" Mar 12 21:21:04.860901 master-0 kubenswrapper[31456]: E0312 21:21:04.860259 31456 secret.go:189] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Mar 12 21:21:04.860901 master-0 kubenswrapper[31456]: E0312 21:21:04.860305 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b620aec4-7b8c-4070-9ca5-035d00cec8f2-plugin-serving-cert podName:b620aec4-7b8c-4070-9ca5-035d00cec8f2 nodeName:}" failed. No retries permitted until 2026-03-12 21:21:05.360292542 +0000 UTC m=+726.434897870 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/b620aec4-7b8c-4070-9ca5-035d00cec8f2-plugin-serving-cert") pod "nmstate-console-plugin-86f58fcf4-t9hmr" (UID: "b620aec4-7b8c-4070-9ca5-035d00cec8f2") : secret "plugin-serving-cert" not found Mar 12 21:21:04.860901 master-0 kubenswrapper[31456]: I0312 21:21:04.860449 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/87669fe7-f0c0-486c-816f-28d8198804ea-dbus-socket\") pod \"nmstate-handler-4srzm\" (UID: \"87669fe7-f0c0-486c-816f-28d8198804ea\") " pod="openshift-nmstate/nmstate-handler-4srzm" Mar 12 21:21:04.860901 master-0 kubenswrapper[31456]: I0312 21:21:04.860477 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/87669fe7-f0c0-486c-816f-28d8198804ea-nmstate-lock\") pod \"nmstate-handler-4srzm\" (UID: \"87669fe7-f0c0-486c-816f-28d8198804ea\") " pod="openshift-nmstate/nmstate-handler-4srzm" Mar 12 21:21:04.861301 master-0 kubenswrapper[31456]: I0312 21:21:04.861280 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b620aec4-7b8c-4070-9ca5-035d00cec8f2-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-t9hmr\" (UID: \"b620aec4-7b8c-4070-9ca5-035d00cec8f2\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-t9hmr" Mar 12 21:21:04.890008 master-0 kubenswrapper[31456]: I0312 21:21:04.889833 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4g8s\" (UniqueName: \"kubernetes.io/projected/b620aec4-7b8c-4070-9ca5-035d00cec8f2-kube-api-access-x4g8s\") pod \"nmstate-console-plugin-86f58fcf4-t9hmr\" (UID: \"b620aec4-7b8c-4070-9ca5-035d00cec8f2\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-t9hmr" Mar 12 21:21:04.903510 master-0 kubenswrapper[31456]: I0312 21:21:04.903451 31456 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk797\" (UniqueName: \"kubernetes.io/projected/87669fe7-f0c0-486c-816f-28d8198804ea-kube-api-access-wk797\") pod \"nmstate-handler-4srzm\" (UID: \"87669fe7-f0c0-486c-816f-28d8198804ea\") " pod="openshift-nmstate/nmstate-handler-4srzm" Mar 12 21:21:04.940153 master-0 kubenswrapper[31456]: I0312 21:21:04.938538 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-r62tq" Mar 12 21:21:04.968305 master-0 kubenswrapper[31456]: I0312 21:21:04.967562 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-4srzm" Mar 12 21:21:04.984521 master-0 kubenswrapper[31456]: I0312 21:21:04.984385 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-565bf495bc-9f7qp"] Mar 12 21:21:04.988884 master-0 kubenswrapper[31456]: I0312 21:21:04.988844 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-565bf495bc-9f7qp" Mar 12 21:21:05.021838 master-0 kubenswrapper[31456]: I0312 21:21:05.020016 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-565bf495bc-9f7qp"] Mar 12 21:21:05.086401 master-0 kubenswrapper[31456]: I0312 21:21:05.086226 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ffb8f0eb-9109-4b04-9f45-faec0b039fa0-console-config\") pod \"console-565bf495bc-9f7qp\" (UID: \"ffb8f0eb-9109-4b04-9f45-faec0b039fa0\") " pod="openshift-console/console-565bf495bc-9f7qp" Mar 12 21:21:05.086401 master-0 kubenswrapper[31456]: I0312 21:21:05.086386 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ffb8f0eb-9109-4b04-9f45-faec0b039fa0-console-serving-cert\") pod \"console-565bf495bc-9f7qp\" (UID: \"ffb8f0eb-9109-4b04-9f45-faec0b039fa0\") " pod="openshift-console/console-565bf495bc-9f7qp" Mar 12 21:21:05.086401 master-0 kubenswrapper[31456]: I0312 21:21:05.086418 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffb8f0eb-9109-4b04-9f45-faec0b039fa0-trusted-ca-bundle\") pod \"console-565bf495bc-9f7qp\" (UID: \"ffb8f0eb-9109-4b04-9f45-faec0b039fa0\") " pod="openshift-console/console-565bf495bc-9f7qp" Mar 12 21:21:05.086724 master-0 kubenswrapper[31456]: I0312 21:21:05.086466 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ffb8f0eb-9109-4b04-9f45-faec0b039fa0-service-ca\") pod \"console-565bf495bc-9f7qp\" (UID: \"ffb8f0eb-9109-4b04-9f45-faec0b039fa0\") " pod="openshift-console/console-565bf495bc-9f7qp" Mar 12 21:21:05.086724 master-0 
kubenswrapper[31456]: I0312 21:21:05.086682 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ffb8f0eb-9109-4b04-9f45-faec0b039fa0-console-oauth-config\") pod \"console-565bf495bc-9f7qp\" (UID: \"ffb8f0eb-9109-4b04-9f45-faec0b039fa0\") " pod="openshift-console/console-565bf495bc-9f7qp" Mar 12 21:21:05.087925 master-0 kubenswrapper[31456]: I0312 21:21:05.086844 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ffb8f0eb-9109-4b04-9f45-faec0b039fa0-oauth-serving-cert\") pod \"console-565bf495bc-9f7qp\" (UID: \"ffb8f0eb-9109-4b04-9f45-faec0b039fa0\") " pod="openshift-console/console-565bf495bc-9f7qp" Mar 12 21:21:05.087925 master-0 kubenswrapper[31456]: I0312 21:21:05.086886 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xhvw\" (UniqueName: \"kubernetes.io/projected/ffb8f0eb-9109-4b04-9f45-faec0b039fa0-kube-api-access-7xhvw\") pod \"console-565bf495bc-9f7qp\" (UID: \"ffb8f0eb-9109-4b04-9f45-faec0b039fa0\") " pod="openshift-console/console-565bf495bc-9f7qp" Mar 12 21:21:05.190970 master-0 kubenswrapper[31456]: I0312 21:21:05.188868 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ffb8f0eb-9109-4b04-9f45-faec0b039fa0-console-serving-cert\") pod \"console-565bf495bc-9f7qp\" (UID: \"ffb8f0eb-9109-4b04-9f45-faec0b039fa0\") " pod="openshift-console/console-565bf495bc-9f7qp" Mar 12 21:21:05.190970 master-0 kubenswrapper[31456]: I0312 21:21:05.188928 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffb8f0eb-9109-4b04-9f45-faec0b039fa0-trusted-ca-bundle\") pod \"console-565bf495bc-9f7qp\" (UID: 
\"ffb8f0eb-9109-4b04-9f45-faec0b039fa0\") " pod="openshift-console/console-565bf495bc-9f7qp" Mar 12 21:21:05.190970 master-0 kubenswrapper[31456]: I0312 21:21:05.189139 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ffb8f0eb-9109-4b04-9f45-faec0b039fa0-service-ca\") pod \"console-565bf495bc-9f7qp\" (UID: \"ffb8f0eb-9109-4b04-9f45-faec0b039fa0\") " pod="openshift-console/console-565bf495bc-9f7qp" Mar 12 21:21:05.190970 master-0 kubenswrapper[31456]: I0312 21:21:05.189268 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ffb8f0eb-9109-4b04-9f45-faec0b039fa0-console-oauth-config\") pod \"console-565bf495bc-9f7qp\" (UID: \"ffb8f0eb-9109-4b04-9f45-faec0b039fa0\") " pod="openshift-console/console-565bf495bc-9f7qp" Mar 12 21:21:05.190970 master-0 kubenswrapper[31456]: I0312 21:21:05.189325 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ffb8f0eb-9109-4b04-9f45-faec0b039fa0-oauth-serving-cert\") pod \"console-565bf495bc-9f7qp\" (UID: \"ffb8f0eb-9109-4b04-9f45-faec0b039fa0\") " pod="openshift-console/console-565bf495bc-9f7qp" Mar 12 21:21:05.190970 master-0 kubenswrapper[31456]: I0312 21:21:05.189354 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xhvw\" (UniqueName: \"kubernetes.io/projected/ffb8f0eb-9109-4b04-9f45-faec0b039fa0-kube-api-access-7xhvw\") pod \"console-565bf495bc-9f7qp\" (UID: \"ffb8f0eb-9109-4b04-9f45-faec0b039fa0\") " pod="openshift-console/console-565bf495bc-9f7qp" Mar 12 21:21:05.190970 master-0 kubenswrapper[31456]: I0312 21:21:05.189414 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ffb8f0eb-9109-4b04-9f45-faec0b039fa0-console-config\") 
pod \"console-565bf495bc-9f7qp\" (UID: \"ffb8f0eb-9109-4b04-9f45-faec0b039fa0\") " pod="openshift-console/console-565bf495bc-9f7qp" Mar 12 21:21:05.190970 master-0 kubenswrapper[31456]: I0312 21:21:05.190119 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ffb8f0eb-9109-4b04-9f45-faec0b039fa0-service-ca\") pod \"console-565bf495bc-9f7qp\" (UID: \"ffb8f0eb-9109-4b04-9f45-faec0b039fa0\") " pod="openshift-console/console-565bf495bc-9f7qp" Mar 12 21:21:05.190970 master-0 kubenswrapper[31456]: I0312 21:21:05.190379 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ffb8f0eb-9109-4b04-9f45-faec0b039fa0-console-config\") pod \"console-565bf495bc-9f7qp\" (UID: \"ffb8f0eb-9109-4b04-9f45-faec0b039fa0\") " pod="openshift-console/console-565bf495bc-9f7qp" Mar 12 21:21:05.190970 master-0 kubenswrapper[31456]: I0312 21:21:05.190723 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffb8f0eb-9109-4b04-9f45-faec0b039fa0-trusted-ca-bundle\") pod \"console-565bf495bc-9f7qp\" (UID: \"ffb8f0eb-9109-4b04-9f45-faec0b039fa0\") " pod="openshift-console/console-565bf495bc-9f7qp" Mar 12 21:21:05.190970 master-0 kubenswrapper[31456]: I0312 21:21:05.190738 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ffb8f0eb-9109-4b04-9f45-faec0b039fa0-oauth-serving-cert\") pod \"console-565bf495bc-9f7qp\" (UID: \"ffb8f0eb-9109-4b04-9f45-faec0b039fa0\") " pod="openshift-console/console-565bf495bc-9f7qp" Mar 12 21:21:05.199961 master-0 kubenswrapper[31456]: I0312 21:21:05.196929 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ffb8f0eb-9109-4b04-9f45-faec0b039fa0-console-oauth-config\") pod 
\"console-565bf495bc-9f7qp\" (UID: \"ffb8f0eb-9109-4b04-9f45-faec0b039fa0\") " pod="openshift-console/console-565bf495bc-9f7qp" Mar 12 21:21:05.209789 master-0 kubenswrapper[31456]: I0312 21:21:05.209704 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ffb8f0eb-9109-4b04-9f45-faec0b039fa0-console-serving-cert\") pod \"console-565bf495bc-9f7qp\" (UID: \"ffb8f0eb-9109-4b04-9f45-faec0b039fa0\") " pod="openshift-console/console-565bf495bc-9f7qp" Mar 12 21:21:05.214192 master-0 kubenswrapper[31456]: I0312 21:21:05.214155 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xhvw\" (UniqueName: \"kubernetes.io/projected/ffb8f0eb-9109-4b04-9f45-faec0b039fa0-kube-api-access-7xhvw\") pod \"console-565bf495bc-9f7qp\" (UID: \"ffb8f0eb-9109-4b04-9f45-faec0b039fa0\") " pod="openshift-console/console-565bf495bc-9f7qp" Mar 12 21:21:05.297988 master-0 kubenswrapper[31456]: I0312 21:21:05.297931 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/92921db3-3f77-4a0a-bbe4-9d0cd9307179-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-qkxb5\" (UID: \"92921db3-3f77-4a0a-bbe4-9d0cd9307179\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-qkxb5" Mar 12 21:21:05.302405 master-0 kubenswrapper[31456]: I0312 21:21:05.302177 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/92921db3-3f77-4a0a-bbe4-9d0cd9307179-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-qkxb5\" (UID: \"92921db3-3f77-4a0a-bbe4-9d0cd9307179\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-qkxb5" Mar 12 21:21:05.345851 master-0 kubenswrapper[31456]: I0312 21:21:05.345719 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-565bf495bc-9f7qp" Mar 12 21:21:05.400038 master-0 kubenswrapper[31456]: I0312 21:21:05.399986 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b620aec4-7b8c-4070-9ca5-035d00cec8f2-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-t9hmr\" (UID: \"b620aec4-7b8c-4070-9ca5-035d00cec8f2\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-t9hmr" Mar 12 21:21:05.411400 master-0 kubenswrapper[31456]: I0312 21:21:05.411253 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b620aec4-7b8c-4070-9ca5-035d00cec8f2-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-t9hmr\" (UID: \"b620aec4-7b8c-4070-9ca5-035d00cec8f2\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-t9hmr" Mar 12 21:21:05.411661 master-0 kubenswrapper[31456]: I0312 21:21:05.411582 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-4srzm" event={"ID":"87669fe7-f0c0-486c-816f-28d8198804ea","Type":"ContainerStarted","Data":"52baf557540c681f97cc8af1cedfe207abc684c12b87068a3b14b39d31de7ba5"} Mar 12 21:21:05.419920 master-0 kubenswrapper[31456]: I0312 21:21:05.419867 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-94r48" event={"ID":"bf8a11b3-a328-47f4-8dc6-6b3dad8d256d","Type":"ContainerStarted","Data":"41f4285c60bbd9239303a0869d8b194c60997a870a2ca0e713caa6f8fb5b3ad5"} Mar 12 21:21:05.419920 master-0 kubenswrapper[31456]: I0312 21:21:05.419914 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-94r48" event={"ID":"bf8a11b3-a328-47f4-8dc6-6b3dad8d256d","Type":"ContainerStarted","Data":"814cdc2ac932cc09cb1a59d57cab5d2c52613c979d1674e55aa6ebd82017457d"} Mar 12 21:21:05.494959 master-0 kubenswrapper[31456]: I0312 21:21:05.494900 31456 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-r62tq"] Mar 12 21:21:05.523307 master-0 kubenswrapper[31456]: I0312 21:21:05.522358 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-qkxb5" Mar 12 21:21:05.692442 master-0 kubenswrapper[31456]: I0312 21:21:05.691521 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-t9hmr" Mar 12 21:21:05.847117 master-0 kubenswrapper[31456]: W0312 21:21:05.847061 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffb8f0eb_9109_4b04_9f45_faec0b039fa0.slice/crio-cd67cc8614d7b12fd32bd8555550a2a0a92c5818058d0e42f24d518334a57c7b WatchSource:0}: Error finding container cd67cc8614d7b12fd32bd8555550a2a0a92c5818058d0e42f24d518334a57c7b: Status 404 returned error can't find the container with id cd67cc8614d7b12fd32bd8555550a2a0a92c5818058d0e42f24d518334a57c7b Mar 12 21:21:05.868309 master-0 kubenswrapper[31456]: I0312 21:21:05.868076 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-565bf495bc-9f7qp"] Mar 12 21:21:05.974093 master-0 kubenswrapper[31456]: I0312 21:21:05.973114 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-qkxb5"] Mar 12 21:21:05.976742 master-0 kubenswrapper[31456]: W0312 21:21:05.976673 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92921db3_3f77_4a0a_bbe4_9d0cd9307179.slice/crio-da239446fd2640620bf90e1968df7b49d69a1597a277c80497c1fafdb1bb67e8 WatchSource:0}: Error finding container da239446fd2640620bf90e1968df7b49d69a1597a277c80497c1fafdb1bb67e8: Status 404 returned error can't find the container with id 
da239446fd2640620bf90e1968df7b49d69a1597a277c80497c1fafdb1bb67e8 Mar 12 21:21:06.146133 master-0 kubenswrapper[31456]: W0312 21:21:06.146068 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb620aec4_7b8c_4070_9ca5_035d00cec8f2.slice/crio-355a6eb0e4dd433e50127e3a778dffd6ba8d1233a311052b71959ccd9aeff23d WatchSource:0}: Error finding container 355a6eb0e4dd433e50127e3a778dffd6ba8d1233a311052b71959ccd9aeff23d: Status 404 returned error can't find the container with id 355a6eb0e4dd433e50127e3a778dffd6ba8d1233a311052b71959ccd9aeff23d Mar 12 21:21:06.151430 master-0 kubenswrapper[31456]: I0312 21:21:06.151321 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-t9hmr"] Mar 12 21:21:06.435861 master-0 kubenswrapper[31456]: I0312 21:21:06.435713 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-565bf495bc-9f7qp" event={"ID":"ffb8f0eb-9109-4b04-9f45-faec0b039fa0","Type":"ContainerStarted","Data":"ba3442a30720daf6f4fbf0650e9a8a70dc7c7064d105dfa848c3fe047dd063eb"} Mar 12 21:21:06.435861 master-0 kubenswrapper[31456]: I0312 21:21:06.435831 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-565bf495bc-9f7qp" event={"ID":"ffb8f0eb-9109-4b04-9f45-faec0b039fa0","Type":"ContainerStarted","Data":"cd67cc8614d7b12fd32bd8555550a2a0a92c5818058d0e42f24d518334a57c7b"} Mar 12 21:21:06.440635 master-0 kubenswrapper[31456]: I0312 21:21:06.440612 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-qkxb5" event={"ID":"92921db3-3f77-4a0a-bbe4-9d0cd9307179","Type":"ContainerStarted","Data":"da239446fd2640620bf90e1968df7b49d69a1597a277c80497c1fafdb1bb67e8"} Mar 12 21:21:06.449167 master-0 kubenswrapper[31456]: I0312 21:21:06.444662 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-t9hmr" event={"ID":"b620aec4-7b8c-4070-9ca5-035d00cec8f2","Type":"ContainerStarted","Data":"355a6eb0e4dd433e50127e3a778dffd6ba8d1233a311052b71959ccd9aeff23d"} Mar 12 21:21:06.449167 master-0 kubenswrapper[31456]: I0312 21:21:06.448487 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-r62tq" event={"ID":"34b80dbe-9eae-4059-8281-ea9e07b27d9a","Type":"ContainerStarted","Data":"82ae88c948a8bdb737e3d5339e2a9c562cf11184a87b87141a39c952307eee26"} Mar 12 21:21:06.492753 master-0 kubenswrapper[31456]: I0312 21:21:06.492669 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-565bf495bc-9f7qp" podStartSLOduration=2.492647398 podStartE2EDuration="2.492647398s" podCreationTimestamp="2026-03-12 21:21:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:21:06.48697856 +0000 UTC m=+727.561583928" watchObservedRunningTime="2026-03-12 21:21:06.492647398 +0000 UTC m=+727.567252736" Mar 12 21:21:07.474972 master-0 kubenswrapper[31456]: I0312 21:21:07.474897 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-vpwbb" event={"ID":"56de1926-b04f-4c4f-b247-ca3f1c0303e3","Type":"ContainerStarted","Data":"dde1ac83a5c1bf935d94c2e752f11760d1a5a2085a03d65d48b3a2afe17fbdd0"} Mar 12 21:21:07.476039 master-0 kubenswrapper[31456]: I0312 21:21:07.476002 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-vpwbb" Mar 12 21:21:07.483972 master-0 kubenswrapper[31456]: I0312 21:21:07.483874 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-94r48" event={"ID":"bf8a11b3-a328-47f4-8dc6-6b3dad8d256d","Type":"ContainerStarted","Data":"f06f907d644d4c8afdcff18773685a054ccd5e3c96f4734fe28bfdd7f8ae86bb"} Mar 12 
21:21:07.484133 master-0 kubenswrapper[31456]: I0312 21:21:07.484091 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-94r48" Mar 12 21:21:07.533768 master-0 kubenswrapper[31456]: I0312 21:21:07.533671 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-94r48" podStartSLOduration=3.361708722 podStartE2EDuration="5.533652412s" podCreationTimestamp="2026-03-12 21:21:02 +0000 UTC" firstStartedPulling="2026-03-12 21:21:04.842010559 +0000 UTC m=+725.916615887" lastFinishedPulling="2026-03-12 21:21:07.013954249 +0000 UTC m=+728.088559577" observedRunningTime="2026-03-12 21:21:07.52287067 +0000 UTC m=+728.597476018" watchObservedRunningTime="2026-03-12 21:21:07.533652412 +0000 UTC m=+728.608257740" Mar 12 21:21:07.534867 master-0 kubenswrapper[31456]: I0312 21:21:07.534838 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-7bb4cc7c98-vpwbb" podStartSLOduration=2.104706769 podStartE2EDuration="5.534829651s" podCreationTimestamp="2026-03-12 21:21:02 +0000 UTC" firstStartedPulling="2026-03-12 21:21:03.581835639 +0000 UTC m=+724.656440967" lastFinishedPulling="2026-03-12 21:21:07.011958521 +0000 UTC m=+728.086563849" observedRunningTime="2026-03-12 21:21:07.504497365 +0000 UTC m=+728.579102693" watchObservedRunningTime="2026-03-12 21:21:07.534829651 +0000 UTC m=+728.609434999" Mar 12 21:21:13.556895 master-0 kubenswrapper[31456]: I0312 21:21:13.556783 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-h299r" event={"ID":"afd19539-2c72-4f92-b25c-a1502472b3c8","Type":"ContainerStarted","Data":"45eaf837f5069a0bac9f43cf66cab6fffe8e3f9a0823bc0742564361203d2f51"} Mar 12 21:21:13.557728 master-0 kubenswrapper[31456]: I0312 21:21:13.557049 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-h299r" Mar 12 
21:21:13.559997 master-0 kubenswrapper[31456]: I0312 21:21:13.559929 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-t9hmr" event={"ID":"b620aec4-7b8c-4070-9ca5-035d00cec8f2","Type":"ContainerStarted","Data":"b8885eb9a00382169fdacc7236d21365e227eceafda02be48d6fd5b60d7cb3f4"} Mar 12 21:21:13.563589 master-0 kubenswrapper[31456]: I0312 21:21:13.563503 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-r62tq" event={"ID":"34b80dbe-9eae-4059-8281-ea9e07b27d9a","Type":"ContainerStarted","Data":"2b495d51733d5b7c01e96257e0630cd6e2f79fb358588285997b1b1313fb9db2"} Mar 12 21:21:13.563589 master-0 kubenswrapper[31456]: I0312 21:21:13.563567 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-r62tq" event={"ID":"34b80dbe-9eae-4059-8281-ea9e07b27d9a","Type":"ContainerStarted","Data":"71ebdbef035a0e0a850ec6c9616b3a977c636044e08abd7de20da627bebd5e1f"} Mar 12 21:21:13.565584 master-0 kubenswrapper[31456]: I0312 21:21:13.565549 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-4srzm" event={"ID":"87669fe7-f0c0-486c-816f-28d8198804ea","Type":"ContainerStarted","Data":"05bb84937606bd0b0159149b1c61ad2c116e9811fd5866ba20b639e077ced165"} Mar 12 21:21:13.565961 master-0 kubenswrapper[31456]: I0312 21:21:13.565928 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-4srzm" Mar 12 21:21:13.567949 master-0 kubenswrapper[31456]: I0312 21:21:13.567757 31456 generic.go:334] "Generic (PLEG): container finished" podID="d603b656-2e01-46dd-ac33-a148ec4f0bf3" containerID="a95f6823129d14fae340318a47e3409fd47035b29181e02b1f456ae04d77f3df" exitCode=0 Mar 12 21:21:13.567949 master-0 kubenswrapper[31456]: I0312 21:21:13.567859 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-82tkh" 
event={"ID":"d603b656-2e01-46dd-ac33-a148ec4f0bf3","Type":"ContainerDied","Data":"a95f6823129d14fae340318a47e3409fd47035b29181e02b1f456ae04d77f3df"}
Mar 12 21:21:13.572164 master-0 kubenswrapper[31456]: I0312 21:21:13.571088 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-qkxb5" event={"ID":"92921db3-3f77-4a0a-bbe4-9d0cd9307179","Type":"ContainerStarted","Data":"73aaba676aaaedcc09445d3fd043fe7a15b00b79ea101e233b28e51684b73c45"}
Mar 12 21:21:13.572164 master-0 kubenswrapper[31456]: I0312 21:21:13.571387 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f558f5558-qkxb5"
Mar 12 21:21:13.629837 master-0 kubenswrapper[31456]: I0312 21:21:13.626674 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-4srzm" podStartSLOduration=2.153815681 podStartE2EDuration="9.626652611s" podCreationTimestamp="2026-03-12 21:21:04 +0000 UTC" firstStartedPulling="2026-03-12 21:21:05.023298715 +0000 UTC m=+726.097904043" lastFinishedPulling="2026-03-12 21:21:12.496135645 +0000 UTC m=+733.570740973" observedRunningTime="2026-03-12 21:21:13.622265275 +0000 UTC m=+734.696870633" watchObservedRunningTime="2026-03-12 21:21:13.626652611 +0000 UTC m=+734.701257939"
Mar 12 21:21:13.632692 master-0 kubenswrapper[31456]: I0312 21:21:13.632599 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-h299r" podStartSLOduration=2.351018702 podStartE2EDuration="11.632581905s" podCreationTimestamp="2026-03-12 21:21:02 +0000 UTC" firstStartedPulling="2026-03-12 21:21:03.167937661 +0000 UTC m=+724.242542999" lastFinishedPulling="2026-03-12 21:21:12.449500864 +0000 UTC m=+733.524106202" observedRunningTime="2026-03-12 21:21:13.592330249 +0000 UTC m=+734.666935577" watchObservedRunningTime="2026-03-12 21:21:13.632581905 +0000 UTC m=+734.707187223"
Mar 12 21:21:13.671535 master-0 kubenswrapper[31456]: I0312 21:21:13.668478 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-t9hmr" podStartSLOduration=3.365015132 podStartE2EDuration="9.668455315s" podCreationTimestamp="2026-03-12 21:21:04 +0000 UTC" firstStartedPulling="2026-03-12 21:21:06.151291619 +0000 UTC m=+727.225896947" lastFinishedPulling="2026-03-12 21:21:12.454731762 +0000 UTC m=+733.529337130" observedRunningTime="2026-03-12 21:21:13.647587209 +0000 UTC m=+734.722192567" watchObservedRunningTime="2026-03-12 21:21:13.668455315 +0000 UTC m=+734.743060653"
Mar 12 21:21:13.684108 master-0 kubenswrapper[31456]: I0312 21:21:13.684027 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-r62tq" podStartSLOduration=2.737842073 podStartE2EDuration="9.684008912s" podCreationTimestamp="2026-03-12 21:21:04 +0000 UTC" firstStartedPulling="2026-03-12 21:21:05.505051847 +0000 UTC m=+726.579657175" lastFinishedPulling="2026-03-12 21:21:12.451218686 +0000 UTC m=+733.525824014" observedRunningTime="2026-03-12 21:21:13.677948716 +0000 UTC m=+734.752554054" watchObservedRunningTime="2026-03-12 21:21:13.684008912 +0000 UTC m=+734.758614250"
Mar 12 21:21:13.743590 master-0 kubenswrapper[31456]: I0312 21:21:13.741497 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f558f5558-qkxb5" podStartSLOduration=3.269870535 podStartE2EDuration="9.741477046s" podCreationTimestamp="2026-03-12 21:21:04 +0000 UTC" firstStartedPulling="2026-03-12 21:21:05.981593394 +0000 UTC m=+727.056198722" lastFinishedPulling="2026-03-12 21:21:12.453199905 +0000 UTC m=+733.527805233" observedRunningTime="2026-03-12 21:21:13.740866681 +0000 UTC m=+734.815472049" watchObservedRunningTime="2026-03-12 21:21:13.741477046 +0000 UTC m=+734.816082394"
Mar 12 21:21:14.426511 master-0 kubenswrapper[31456]: I0312 21:21:14.426409 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-94r48"
Mar 12 21:21:14.584211 master-0 kubenswrapper[31456]: I0312 21:21:14.584154 31456 generic.go:334] "Generic (PLEG): container finished" podID="d603b656-2e01-46dd-ac33-a148ec4f0bf3" containerID="d8d3f836b92bc02cab45fcbb298f8990bc53cf0d5d23bebe5d49487c3d65a106" exitCode=0
Mar 12 21:21:14.585706 master-0 kubenswrapper[31456]: I0312 21:21:14.585671 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-82tkh" event={"ID":"d603b656-2e01-46dd-ac33-a148ec4f0bf3","Type":"ContainerDied","Data":"d8d3f836b92bc02cab45fcbb298f8990bc53cf0d5d23bebe5d49487c3d65a106"}
Mar 12 21:21:15.346803 master-0 kubenswrapper[31456]: I0312 21:21:15.346712 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-565bf495bc-9f7qp"
Mar 12 21:21:15.346803 master-0 kubenswrapper[31456]: I0312 21:21:15.346832 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-565bf495bc-9f7qp"
Mar 12 21:21:15.353108 master-0 kubenswrapper[31456]: I0312 21:21:15.353022 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-565bf495bc-9f7qp"
Mar 12 21:21:15.600857 master-0 kubenswrapper[31456]: I0312 21:21:15.600635 31456 generic.go:334] "Generic (PLEG): container finished" podID="d603b656-2e01-46dd-ac33-a148ec4f0bf3" containerID="a9345e7cd7f3c9dabb33d6c283ad47e2364dfaecf6890309287f53aab7c7d3aa" exitCode=0
Mar 12 21:21:15.600857 master-0 kubenswrapper[31456]: I0312 21:21:15.600835 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-82tkh" event={"ID":"d603b656-2e01-46dd-ac33-a148ec4f0bf3","Type":"ContainerDied","Data":"a9345e7cd7f3c9dabb33d6c283ad47e2364dfaecf6890309287f53aab7c7d3aa"}
Mar 12 21:21:15.608646 master-0 kubenswrapper[31456]: I0312 21:21:15.608556 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-565bf495bc-9f7qp"
Mar 12 21:21:15.743741 master-0 kubenswrapper[31456]: I0312 21:21:15.742406 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-8c575f57b-cfn7b"]
Mar 12 21:21:16.619262 master-0 kubenswrapper[31456]: I0312 21:21:16.619169 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-82tkh" event={"ID":"d603b656-2e01-46dd-ac33-a148ec4f0bf3","Type":"ContainerStarted","Data":"391aa1f156a5393175fd2cec72e26f77859e766be71068a265971392fdf3ec96"}
Mar 12 21:21:16.619262 master-0 kubenswrapper[31456]: I0312 21:21:16.619258 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-82tkh" event={"ID":"d603b656-2e01-46dd-ac33-a148ec4f0bf3","Type":"ContainerStarted","Data":"9d7020e93667c8e8aa31fbc4c94e5f77f0dd72fbc6660f8b75dc2bd4d73e72b8"}
Mar 12 21:21:16.619957 master-0 kubenswrapper[31456]: I0312 21:21:16.619276 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-82tkh" event={"ID":"d603b656-2e01-46dd-ac33-a148ec4f0bf3","Type":"ContainerStarted","Data":"91cba088bdd74a9cfef9f4bef173dbd6c07d3dfcbbcd1c0f251f4a5ebfcdce30"}
Mar 12 21:21:16.619957 master-0 kubenswrapper[31456]: I0312 21:21:16.619313 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-82tkh" event={"ID":"d603b656-2e01-46dd-ac33-a148ec4f0bf3","Type":"ContainerStarted","Data":"25f01067b0202f54dc77da00f8d2aa5c54edfe11dc9a424b6463598475146427"}
Mar 12 21:21:16.619957 master-0 kubenswrapper[31456]: I0312 21:21:16.619325 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-82tkh" event={"ID":"d603b656-2e01-46dd-ac33-a148ec4f0bf3","Type":"ContainerStarted","Data":"7c15452f93bbde5cc84952333008a16dc46abee688149a589175525b7f444827"}
Mar 12 21:21:17.639902 master-0 kubenswrapper[31456]: I0312 21:21:17.639775 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-82tkh" event={"ID":"d603b656-2e01-46dd-ac33-a148ec4f0bf3","Type":"ContainerStarted","Data":"95381d5a2762f2ac6c4af699586515b0db0956bea6aa0cab3734dd14d00b124f"}
Mar 12 21:21:17.640715 master-0 kubenswrapper[31456]: I0312 21:21:17.640155 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-82tkh"
Mar 12 21:21:17.691927 master-0 kubenswrapper[31456]: I0312 21:21:17.691795 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-82tkh" podStartSLOduration=6.21393089 podStartE2EDuration="15.691775693s" podCreationTimestamp="2026-03-12 21:21:02 +0000 UTC" firstStartedPulling="2026-03-12 21:21:02.973324202 +0000 UTC m=+724.047929530" lastFinishedPulling="2026-03-12 21:21:12.451168995 +0000 UTC m=+733.525774333" observedRunningTime="2026-03-12 21:21:17.684788833 +0000 UTC m=+738.759394171" watchObservedRunningTime="2026-03-12 21:21:17.691775693 +0000 UTC m=+738.766381031"
Mar 12 21:21:17.852261 master-0 kubenswrapper[31456]: I0312 21:21:17.852126 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-82tkh"
Mar 12 21:21:17.918010 master-0 kubenswrapper[31456]: I0312 21:21:17.917868 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-82tkh"
Mar 12 21:21:20.000302 master-0 kubenswrapper[31456]: I0312 21:21:20.000236 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-4srzm"
Mar 12 21:21:22.748499 master-0 kubenswrapper[31456]: I0312 21:21:22.748411 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-h299r"
Mar 12 21:21:22.937312 master-0 kubenswrapper[31456]: I0312 21:21:22.937230 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-vpwbb"
Mar 12 21:21:25.532742 master-0 kubenswrapper[31456]: I0312 21:21:25.532644 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-qkxb5"
Mar 12 21:21:30.297885 master-0 kubenswrapper[31456]: I0312 21:21:30.296985 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/vg-manager-5zkkl"]
Mar 12 21:21:30.298871 master-0 kubenswrapper[31456]: I0312 21:21:30.298823 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:30.301760 master-0 kubenswrapper[31456]: I0312 21:21:30.301713 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"vg-manager-metrics-cert"
Mar 12 21:21:30.344852 master-0 kubenswrapper[31456]: I0312 21:21:30.340735 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-5zkkl"]
Mar 12 21:21:30.361846 master-0 kubenswrapper[31456]: I0312 21:21:30.360247 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/03958baa-1e6d-451f-a021-035961bedaf7-run-udev\") pod \"vg-manager-5zkkl\" (UID: \"03958baa-1e6d-451f-a021-035961bedaf7\") " pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:30.361846 master-0 kubenswrapper[31456]: I0312 21:21:30.360300 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/03958baa-1e6d-451f-a021-035961bedaf7-device-dir\") pod \"vg-manager-5zkkl\" (UID: \"03958baa-1e6d-451f-a021-035961bedaf7\") " pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:30.361846 master-0 kubenswrapper[31456]: I0312 21:21:30.360335 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/03958baa-1e6d-451f-a021-035961bedaf7-registration-dir\") pod \"vg-manager-5zkkl\" (UID: \"03958baa-1e6d-451f-a021-035961bedaf7\") " pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:30.361846 master-0 kubenswrapper[31456]: I0312 21:21:30.360351 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/03958baa-1e6d-451f-a021-035961bedaf7-node-plugin-dir\") pod \"vg-manager-5zkkl\" (UID: \"03958baa-1e6d-451f-a021-035961bedaf7\") " pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:30.361846 master-0 kubenswrapper[31456]: I0312 21:21:30.360367 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/03958baa-1e6d-451f-a021-035961bedaf7-metrics-cert\") pod \"vg-manager-5zkkl\" (UID: \"03958baa-1e6d-451f-a021-035961bedaf7\") " pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:30.361846 master-0 kubenswrapper[31456]: I0312 21:21:30.360384 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/03958baa-1e6d-451f-a021-035961bedaf7-lvmd-config\") pod \"vg-manager-5zkkl\" (UID: \"03958baa-1e6d-451f-a021-035961bedaf7\") " pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:30.361846 master-0 kubenswrapper[31456]: I0312 21:21:30.360399 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/03958baa-1e6d-451f-a021-035961bedaf7-file-lock-dir\") pod \"vg-manager-5zkkl\" (UID: \"03958baa-1e6d-451f-a021-035961bedaf7\") " pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:30.361846 master-0 kubenswrapper[31456]: I0312 21:21:30.360424 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/03958baa-1e6d-451f-a021-035961bedaf7-csi-plugin-dir\") pod \"vg-manager-5zkkl\" (UID: \"03958baa-1e6d-451f-a021-035961bedaf7\") " pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:30.361846 master-0 kubenswrapper[31456]: I0312 21:21:30.360458 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/03958baa-1e6d-451f-a021-035961bedaf7-sys\") pod \"vg-manager-5zkkl\" (UID: \"03958baa-1e6d-451f-a021-035961bedaf7\") " pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:30.361846 master-0 kubenswrapper[31456]: I0312 21:21:30.360491 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/03958baa-1e6d-451f-a021-035961bedaf7-pod-volumes-dir\") pod \"vg-manager-5zkkl\" (UID: \"03958baa-1e6d-451f-a021-035961bedaf7\") " pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:30.361846 master-0 kubenswrapper[31456]: I0312 21:21:30.360677 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwlbq\" (UniqueName: \"kubernetes.io/projected/03958baa-1e6d-451f-a021-035961bedaf7-kube-api-access-xwlbq\") pod \"vg-manager-5zkkl\" (UID: \"03958baa-1e6d-451f-a021-035961bedaf7\") " pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:30.469863 master-0 kubenswrapper[31456]: I0312 21:21:30.468340 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/03958baa-1e6d-451f-a021-035961bedaf7-sys\") pod \"vg-manager-5zkkl\" (UID: \"03958baa-1e6d-451f-a021-035961bedaf7\") " pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:30.469863 master-0 kubenswrapper[31456]: I0312 21:21:30.468430 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/03958baa-1e6d-451f-a021-035961bedaf7-pod-volumes-dir\") pod \"vg-manager-5zkkl\" (UID: \"03958baa-1e6d-451f-a021-035961bedaf7\") " pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:30.469863 master-0 kubenswrapper[31456]: I0312 21:21:30.468499 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwlbq\" (UniqueName: \"kubernetes.io/projected/03958baa-1e6d-451f-a021-035961bedaf7-kube-api-access-xwlbq\") pod \"vg-manager-5zkkl\" (UID: \"03958baa-1e6d-451f-a021-035961bedaf7\") " pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:30.469863 master-0 kubenswrapper[31456]: I0312 21:21:30.468524 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/03958baa-1e6d-451f-a021-035961bedaf7-run-udev\") pod \"vg-manager-5zkkl\" (UID: \"03958baa-1e6d-451f-a021-035961bedaf7\") " pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:30.469863 master-0 kubenswrapper[31456]: I0312 21:21:30.468543 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/03958baa-1e6d-451f-a021-035961bedaf7-device-dir\") pod \"vg-manager-5zkkl\" (UID: \"03958baa-1e6d-451f-a021-035961bedaf7\") " pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:30.469863 master-0 kubenswrapper[31456]: I0312 21:21:30.468576 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/03958baa-1e6d-451f-a021-035961bedaf7-registration-dir\") pod \"vg-manager-5zkkl\" (UID: \"03958baa-1e6d-451f-a021-035961bedaf7\") " pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:30.469863 master-0 kubenswrapper[31456]: I0312 21:21:30.468591 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/03958baa-1e6d-451f-a021-035961bedaf7-node-plugin-dir\") pod \"vg-manager-5zkkl\" (UID: \"03958baa-1e6d-451f-a021-035961bedaf7\") " pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:30.469863 master-0 kubenswrapper[31456]: I0312 21:21:30.468607 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/03958baa-1e6d-451f-a021-035961bedaf7-metrics-cert\") pod \"vg-manager-5zkkl\" (UID: \"03958baa-1e6d-451f-a021-035961bedaf7\") " pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:30.469863 master-0 kubenswrapper[31456]: I0312 21:21:30.468625 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/03958baa-1e6d-451f-a021-035961bedaf7-lvmd-config\") pod \"vg-manager-5zkkl\" (UID: \"03958baa-1e6d-451f-a021-035961bedaf7\") " pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:30.469863 master-0 kubenswrapper[31456]: I0312 21:21:30.468642 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/03958baa-1e6d-451f-a021-035961bedaf7-file-lock-dir\") pod \"vg-manager-5zkkl\" (UID: \"03958baa-1e6d-451f-a021-035961bedaf7\") " pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:30.469863 master-0 kubenswrapper[31456]: I0312 21:21:30.468677 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/03958baa-1e6d-451f-a021-035961bedaf7-csi-plugin-dir\") pod \"vg-manager-5zkkl\" (UID: \"03958baa-1e6d-451f-a021-035961bedaf7\") " pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:30.469863 master-0 kubenswrapper[31456]: I0312 21:21:30.468955 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/03958baa-1e6d-451f-a021-035961bedaf7-csi-plugin-dir\") pod \"vg-manager-5zkkl\" (UID: \"03958baa-1e6d-451f-a021-035961bedaf7\") " pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:30.469863 master-0 kubenswrapper[31456]: I0312 21:21:30.468998 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/03958baa-1e6d-451f-a021-035961bedaf7-sys\") pod \"vg-manager-5zkkl\" (UID: \"03958baa-1e6d-451f-a021-035961bedaf7\") " pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:30.469863 master-0 kubenswrapper[31456]: I0312 21:21:30.469034 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/03958baa-1e6d-451f-a021-035961bedaf7-pod-volumes-dir\") pod \"vg-manager-5zkkl\" (UID: \"03958baa-1e6d-451f-a021-035961bedaf7\") " pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:30.469863 master-0 kubenswrapper[31456]: I0312 21:21:30.469321 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/03958baa-1e6d-451f-a021-035961bedaf7-run-udev\") pod \"vg-manager-5zkkl\" (UID: \"03958baa-1e6d-451f-a021-035961bedaf7\") " pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:30.469863 master-0 kubenswrapper[31456]: I0312 21:21:30.469382 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/03958baa-1e6d-451f-a021-035961bedaf7-device-dir\") pod \"vg-manager-5zkkl\" (UID: \"03958baa-1e6d-451f-a021-035961bedaf7\") " pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:30.469863 master-0 kubenswrapper[31456]: I0312 21:21:30.469420 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/03958baa-1e6d-451f-a021-035961bedaf7-registration-dir\") pod \"vg-manager-5zkkl\" (UID: \"03958baa-1e6d-451f-a021-035961bedaf7\") " pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:30.469863 master-0 kubenswrapper[31456]: I0312 21:21:30.469530 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/03958baa-1e6d-451f-a021-035961bedaf7-node-plugin-dir\") pod \"vg-manager-5zkkl\" (UID: \"03958baa-1e6d-451f-a021-035961bedaf7\") " pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:30.471091 master-0 kubenswrapper[31456]: I0312 21:21:30.470956 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/03958baa-1e6d-451f-a021-035961bedaf7-lvmd-config\") pod \"vg-manager-5zkkl\" (UID: \"03958baa-1e6d-451f-a021-035961bedaf7\") " pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:30.474831 master-0 kubenswrapper[31456]: I0312 21:21:30.471217 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/03958baa-1e6d-451f-a021-035961bedaf7-file-lock-dir\") pod \"vg-manager-5zkkl\" (UID: \"03958baa-1e6d-451f-a021-035961bedaf7\") " pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:30.476170 master-0 kubenswrapper[31456]: I0312 21:21:30.476139 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/03958baa-1e6d-451f-a021-035961bedaf7-metrics-cert\") pod \"vg-manager-5zkkl\" (UID: \"03958baa-1e6d-451f-a021-035961bedaf7\") " pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:30.505686 master-0 kubenswrapper[31456]: I0312 21:21:30.503670 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwlbq\" (UniqueName: \"kubernetes.io/projected/03958baa-1e6d-451f-a021-035961bedaf7-kube-api-access-xwlbq\") pod \"vg-manager-5zkkl\" (UID: \"03958baa-1e6d-451f-a021-035961bedaf7\") " pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:30.641301 master-0 kubenswrapper[31456]: I0312 21:21:30.641240 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:31.114501 master-0 kubenswrapper[31456]: I0312 21:21:31.112760 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-5zkkl"]
Mar 12 21:21:31.120888 master-0 kubenswrapper[31456]: W0312 21:21:31.120834 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03958baa_1e6d_451f_a021_035961bedaf7.slice/crio-f3926edee4e0c8da6eff2b2b1ff755da8260a15a4bdec1dfd76e33263a5d0d7b WatchSource:0}: Error finding container f3926edee4e0c8da6eff2b2b1ff755da8260a15a4bdec1dfd76e33263a5d0d7b: Status 404 returned error can't find the container with id f3926edee4e0c8da6eff2b2b1ff755da8260a15a4bdec1dfd76e33263a5d0d7b
Mar 12 21:21:31.797168 master-0 kubenswrapper[31456]: I0312 21:21:31.797108 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-5zkkl" event={"ID":"03958baa-1e6d-451f-a021-035961bedaf7","Type":"ContainerStarted","Data":"d20860909e3684522b32a9bbef722916be3a0f5187018eace36b5db6c836e291"}
Mar 12 21:21:31.797168 master-0 kubenswrapper[31456]: I0312 21:21:31.797161 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-5zkkl" event={"ID":"03958baa-1e6d-451f-a021-035961bedaf7","Type":"ContainerStarted","Data":"f3926edee4e0c8da6eff2b2b1ff755da8260a15a4bdec1dfd76e33263a5d0d7b"}
Mar 12 21:21:31.832352 master-0 kubenswrapper[31456]: I0312 21:21:31.832260 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/vg-manager-5zkkl" podStartSLOduration=1.832237479 podStartE2EDuration="1.832237479s" podCreationTimestamp="2026-03-12 21:21:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:21:31.824239564 +0000 UTC m=+752.898844932" watchObservedRunningTime="2026-03-12 21:21:31.832237479 +0000 UTC m=+752.906842817"
Mar 12 21:21:32.873013 master-0 kubenswrapper[31456]: I0312 21:21:32.872649 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-82tkh"
Mar 12 21:21:33.820835 master-0 kubenswrapper[31456]: I0312 21:21:33.820733 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-5zkkl_03958baa-1e6d-451f-a021-035961bedaf7/vg-manager/0.log"
Mar 12 21:21:33.821119 master-0 kubenswrapper[31456]: I0312 21:21:33.820871 31456 generic.go:334] "Generic (PLEG): container finished" podID="03958baa-1e6d-451f-a021-035961bedaf7" containerID="d20860909e3684522b32a9bbef722916be3a0f5187018eace36b5db6c836e291" exitCode=1
Mar 12 21:21:33.821119 master-0 kubenswrapper[31456]: I0312 21:21:33.820919 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-5zkkl" event={"ID":"03958baa-1e6d-451f-a021-035961bedaf7","Type":"ContainerDied","Data":"d20860909e3684522b32a9bbef722916be3a0f5187018eace36b5db6c836e291"}
Mar 12 21:21:33.821834 master-0 kubenswrapper[31456]: I0312 21:21:33.821688 31456 scope.go:117] "RemoveContainer" containerID="d20860909e3684522b32a9bbef722916be3a0f5187018eace36b5db6c836e291"
Mar 12 21:21:34.192915 master-0 kubenswrapper[31456]: I0312 21:21:34.192620 31456 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock"
Mar 12 21:21:34.706123 master-0 kubenswrapper[31456]: I0312 21:21:34.705931 31456 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock","Timestamp":"2026-03-12T21:21:34.192649529Z","Handler":null,"Name":""}
Mar 12 21:21:34.720712 master-0 kubenswrapper[31456]: I0312 21:21:34.720609 31456 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: topolvm.io endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock versions: 1.0.0
Mar 12 21:21:34.720712 master-0 kubenswrapper[31456]: I0312 21:21:34.720694 31456 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: topolvm.io at endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock
Mar 12 21:21:34.836888 master-0 kubenswrapper[31456]: I0312 21:21:34.835658 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-5zkkl_03958baa-1e6d-451f-a021-035961bedaf7/vg-manager/0.log"
Mar 12 21:21:34.836888 master-0 kubenswrapper[31456]: I0312 21:21:34.835738 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-5zkkl" event={"ID":"03958baa-1e6d-451f-a021-035961bedaf7","Type":"ContainerStarted","Data":"23235bab26f20b143cf588ff8a8e3683cd7ee52a767e80ddf0e7dc9d58af2f38"}
Mar 12 21:21:37.221156 master-0 kubenswrapper[31456]: I0312 21:21:37.221067 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-qr8kc"]
Mar 12 21:21:37.222162 master-0 kubenswrapper[31456]: I0312 21:21:37.222138 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qr8kc"
Mar 12 21:21:37.223739 master-0 kubenswrapper[31456]: I0312 21:21:37.223694 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Mar 12 21:21:37.224008 master-0 kubenswrapper[31456]: I0312 21:21:37.223986 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Mar 12 21:21:37.246149 master-0 kubenswrapper[31456]: I0312 21:21:37.246100 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qr8kc"]
Mar 12 21:21:37.409315 master-0 kubenswrapper[31456]: I0312 21:21:37.409238 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75r5l\" (UniqueName: \"kubernetes.io/projected/1c0a8bd8-c7bd-44d9-a164-31e8a9eabffb-kube-api-access-75r5l\") pod \"openstack-operator-index-qr8kc\" (UID: \"1c0a8bd8-c7bd-44d9-a164-31e8a9eabffb\") " pod="openstack-operators/openstack-operator-index-qr8kc"
Mar 12 21:21:37.513617 master-0 kubenswrapper[31456]: I0312 21:21:37.513481 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75r5l\" (UniqueName: \"kubernetes.io/projected/1c0a8bd8-c7bd-44d9-a164-31e8a9eabffb-kube-api-access-75r5l\") pod \"openstack-operator-index-qr8kc\" (UID: \"1c0a8bd8-c7bd-44d9-a164-31e8a9eabffb\") " pod="openstack-operators/openstack-operator-index-qr8kc"
Mar 12 21:21:37.532778 master-0 kubenswrapper[31456]: I0312 21:21:37.532711 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75r5l\" (UniqueName: \"kubernetes.io/projected/1c0a8bd8-c7bd-44d9-a164-31e8a9eabffb-kube-api-access-75r5l\") pod \"openstack-operator-index-qr8kc\" (UID: \"1c0a8bd8-c7bd-44d9-a164-31e8a9eabffb\") " pod="openstack-operators/openstack-operator-index-qr8kc"
Mar 12 21:21:37.547219 master-0 kubenswrapper[31456]: I0312 21:21:37.547158 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qr8kc"
Mar 12 21:21:38.194770 master-0 kubenswrapper[31456]: I0312 21:21:38.191006 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qr8kc"]
Mar 12 21:21:38.198823 master-0 kubenswrapper[31456]: W0312 21:21:38.198663 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c0a8bd8_c7bd_44d9_a164_31e8a9eabffb.slice/crio-779c60836d814f0e191fe8815ad60fec6dfd4a2cc8c6f1fdc228ea3b0d66a8b1 WatchSource:0}: Error finding container 779c60836d814f0e191fe8815ad60fec6dfd4a2cc8c6f1fdc228ea3b0d66a8b1: Status 404 returned error can't find the container with id 779c60836d814f0e191fe8815ad60fec6dfd4a2cc8c6f1fdc228ea3b0d66a8b1
Mar 12 21:21:38.894266 master-0 kubenswrapper[31456]: I0312 21:21:38.894191 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qr8kc" event={"ID":"1c0a8bd8-c7bd-44d9-a164-31e8a9eabffb","Type":"ContainerStarted","Data":"779c60836d814f0e191fe8815ad60fec6dfd4a2cc8c6f1fdc228ea3b0d66a8b1"}
Mar 12 21:21:39.903594 master-0 kubenswrapper[31456]: I0312 21:21:39.903535 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qr8kc" event={"ID":"1c0a8bd8-c7bd-44d9-a164-31e8a9eabffb","Type":"ContainerStarted","Data":"154fbe23ca3f43dbefdee3de3f6125f9159289579da9b511f313738037d17368"}
Mar 12 21:21:39.933580 master-0 kubenswrapper[31456]: I0312 21:21:39.933450 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-qr8kc" podStartSLOduration=1.8910797700000002 podStartE2EDuration="2.933421598s" podCreationTimestamp="2026-03-12 21:21:37 +0000 UTC" firstStartedPulling="2026-03-12 21:21:38.203669529 +0000 UTC m=+759.278274857" lastFinishedPulling="2026-03-12 21:21:39.246011347 +0000 UTC m=+760.320616685" observedRunningTime="2026-03-12 21:21:39.923477526 +0000 UTC m=+760.998082894" watchObservedRunningTime="2026-03-12 21:21:39.933421598 +0000 UTC m=+761.008026946"
Mar 12 21:21:40.642847 master-0 kubenswrapper[31456]: I0312 21:21:40.642138 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:40.646831 master-0 kubenswrapper[31456]: I0312 21:21:40.645979 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:40.828455 master-0 kubenswrapper[31456]: I0312 21:21:40.828270 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-8c575f57b-cfn7b" podUID="e78ecfdd-d8f5-4164-8300-05df372d0c8c" containerName="console" containerID="cri-o://f41e4f3ad033a8fd782c8f2bdd66cfe8536a942f5749c9997d2152240f996f69" gracePeriod=15
Mar 12 21:21:40.911903 master-0 kubenswrapper[31456]: I0312 21:21:40.911698 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:40.913569 master-0 kubenswrapper[31456]: I0312 21:21:40.913516 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/vg-manager-5zkkl"
Mar 12 21:21:41.336005 master-0 kubenswrapper[31456]: I0312 21:21:41.335942 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-8c575f57b-cfn7b_e78ecfdd-d8f5-4164-8300-05df372d0c8c/console/0.log"
Mar 12 21:21:41.336263 master-0 kubenswrapper[31456]: I0312 21:21:41.336043 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-8c575f57b-cfn7b"
Mar 12 21:21:41.524436 master-0 kubenswrapper[31456]: I0312 21:21:41.524352 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e78ecfdd-d8f5-4164-8300-05df372d0c8c-console-oauth-config\") pod \"e78ecfdd-d8f5-4164-8300-05df372d0c8c\" (UID: \"e78ecfdd-d8f5-4164-8300-05df372d0c8c\") "
Mar 12 21:21:41.524436 master-0 kubenswrapper[31456]: I0312 21:21:41.524438 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpfst\" (UniqueName: \"kubernetes.io/projected/e78ecfdd-d8f5-4164-8300-05df372d0c8c-kube-api-access-cpfst\") pod \"e78ecfdd-d8f5-4164-8300-05df372d0c8c\" (UID: \"e78ecfdd-d8f5-4164-8300-05df372d0c8c\") "
Mar 12 21:21:41.524860 master-0 kubenswrapper[31456]: I0312 21:21:41.524502 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e78ecfdd-d8f5-4164-8300-05df372d0c8c-console-config\") pod \"e78ecfdd-d8f5-4164-8300-05df372d0c8c\" (UID: \"e78ecfdd-d8f5-4164-8300-05df372d0c8c\") "
Mar 12 21:21:41.524860 master-0 kubenswrapper[31456]: I0312 21:21:41.524557 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e78ecfdd-d8f5-4164-8300-05df372d0c8c-trusted-ca-bundle\") pod \"e78ecfdd-d8f5-4164-8300-05df372d0c8c\" (UID: \"e78ecfdd-d8f5-4164-8300-05df372d0c8c\") "
Mar 12 21:21:41.524860 master-0 kubenswrapper[31456]: I0312 21:21:41.524587 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e78ecfdd-d8f5-4164-8300-05df372d0c8c-oauth-serving-cert\") pod \"e78ecfdd-d8f5-4164-8300-05df372d0c8c\" (UID: \"e78ecfdd-d8f5-4164-8300-05df372d0c8c\") "
Mar 12 21:21:41.525156 master-0 kubenswrapper[31456]: I0312 21:21:41.525057 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e78ecfdd-d8f5-4164-8300-05df372d0c8c-console-config" (OuterVolumeSpecName: "console-config") pod "e78ecfdd-d8f5-4164-8300-05df372d0c8c" (UID: "e78ecfdd-d8f5-4164-8300-05df372d0c8c"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:21:41.525263 master-0 kubenswrapper[31456]: I0312 21:21:41.525224 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e78ecfdd-d8f5-4164-8300-05df372d0c8c-console-serving-cert\") pod \"e78ecfdd-d8f5-4164-8300-05df372d0c8c\" (UID: \"e78ecfdd-d8f5-4164-8300-05df372d0c8c\") "
Mar 12 21:21:41.525335 master-0 kubenswrapper[31456]: I0312 21:21:41.525295 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e78ecfdd-d8f5-4164-8300-05df372d0c8c-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "e78ecfdd-d8f5-4164-8300-05df372d0c8c" (UID: "e78ecfdd-d8f5-4164-8300-05df372d0c8c"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:21:41.525335 master-0 kubenswrapper[31456]: I0312 21:21:41.525325 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e78ecfdd-d8f5-4164-8300-05df372d0c8c-service-ca\") pod \"e78ecfdd-d8f5-4164-8300-05df372d0c8c\" (UID: \"e78ecfdd-d8f5-4164-8300-05df372d0c8c\") "
Mar 12 21:21:41.525958 master-0 kubenswrapper[31456]: I0312 21:21:41.525301 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e78ecfdd-d8f5-4164-8300-05df372d0c8c-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "e78ecfdd-d8f5-4164-8300-05df372d0c8c" (UID: "e78ecfdd-d8f5-4164-8300-05df372d0c8c"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:21:41.525958 master-0 kubenswrapper[31456]: I0312 21:21:41.525923 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e78ecfdd-d8f5-4164-8300-05df372d0c8c-service-ca" (OuterVolumeSpecName: "service-ca") pod "e78ecfdd-d8f5-4164-8300-05df372d0c8c" (UID: "e78ecfdd-d8f5-4164-8300-05df372d0c8c"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:21:41.526147 master-0 kubenswrapper[31456]: I0312 21:21:41.526108 31456 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e78ecfdd-d8f5-4164-8300-05df372d0c8c-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 21:21:41.526147 master-0 kubenswrapper[31456]: I0312 21:21:41.526122 31456 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e78ecfdd-d8f5-4164-8300-05df372d0c8c-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 12 21:21:41.526147 master-0 kubenswrapper[31456]: I0312 21:21:41.526131 31456 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e78ecfdd-d8f5-4164-8300-05df372d0c8c-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 12 21:21:41.526147 master-0 kubenswrapper[31456]: I0312 21:21:41.526140 31456 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e78ecfdd-d8f5-4164-8300-05df372d0c8c-console-config\") on node \"master-0\" DevicePath \"\""
Mar 12 21:21:41.527552 master-0 kubenswrapper[31456]: I0312 21:21:41.527487 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e78ecfdd-d8f5-4164-8300-05df372d0c8c-kube-api-access-cpfst" (OuterVolumeSpecName: "kube-api-access-cpfst") pod
"e78ecfdd-d8f5-4164-8300-05df372d0c8c" (UID: "e78ecfdd-d8f5-4164-8300-05df372d0c8c"). InnerVolumeSpecName "kube-api-access-cpfst". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:21:41.528851 master-0 kubenswrapper[31456]: I0312 21:21:41.528784 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e78ecfdd-d8f5-4164-8300-05df372d0c8c-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "e78ecfdd-d8f5-4164-8300-05df372d0c8c" (UID: "e78ecfdd-d8f5-4164-8300-05df372d0c8c"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:21:41.530524 master-0 kubenswrapper[31456]: I0312 21:21:41.530467 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e78ecfdd-d8f5-4164-8300-05df372d0c8c-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "e78ecfdd-d8f5-4164-8300-05df372d0c8c" (UID: "e78ecfdd-d8f5-4164-8300-05df372d0c8c"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:21:41.628162 master-0 kubenswrapper[31456]: I0312 21:21:41.628073 31456 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e78ecfdd-d8f5-4164-8300-05df372d0c8c-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 12 21:21:41.628162 master-0 kubenswrapper[31456]: I0312 21:21:41.628114 31456 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e78ecfdd-d8f5-4164-8300-05df372d0c8c-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 12 21:21:41.628162 master-0 kubenswrapper[31456]: I0312 21:21:41.628125 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpfst\" (UniqueName: \"kubernetes.io/projected/e78ecfdd-d8f5-4164-8300-05df372d0c8c-kube-api-access-cpfst\") on node \"master-0\" DevicePath \"\"" Mar 12 21:21:41.935643 master-0 kubenswrapper[31456]: I0312 21:21:41.935564 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-8c575f57b-cfn7b_e78ecfdd-d8f5-4164-8300-05df372d0c8c/console/0.log" Mar 12 21:21:41.936658 master-0 kubenswrapper[31456]: I0312 21:21:41.935662 31456 generic.go:334] "Generic (PLEG): container finished" podID="e78ecfdd-d8f5-4164-8300-05df372d0c8c" containerID="f41e4f3ad033a8fd782c8f2bdd66cfe8536a942f5749c9997d2152240f996f69" exitCode=2 Mar 12 21:21:41.936753 master-0 kubenswrapper[31456]: I0312 21:21:41.936646 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-8c575f57b-cfn7b" Mar 12 21:21:41.937440 master-0 kubenswrapper[31456]: I0312 21:21:41.937382 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8c575f57b-cfn7b" event={"ID":"e78ecfdd-d8f5-4164-8300-05df372d0c8c","Type":"ContainerDied","Data":"f41e4f3ad033a8fd782c8f2bdd66cfe8536a942f5749c9997d2152240f996f69"} Mar 12 21:21:41.937770 master-0 kubenswrapper[31456]: I0312 21:21:41.937443 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8c575f57b-cfn7b" event={"ID":"e78ecfdd-d8f5-4164-8300-05df372d0c8c","Type":"ContainerDied","Data":"52dff62e330ed7c65cfc4102f1d2afdcf62202c011b570428629a6c3e938b8f5"} Mar 12 21:21:41.937770 master-0 kubenswrapper[31456]: I0312 21:21:41.937504 31456 scope.go:117] "RemoveContainer" containerID="f41e4f3ad033a8fd782c8f2bdd66cfe8536a942f5749c9997d2152240f996f69" Mar 12 21:21:41.971950 master-0 kubenswrapper[31456]: I0312 21:21:41.971891 31456 scope.go:117] "RemoveContainer" containerID="f41e4f3ad033a8fd782c8f2bdd66cfe8536a942f5749c9997d2152240f996f69" Mar 12 21:21:41.972707 master-0 kubenswrapper[31456]: E0312 21:21:41.972623 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f41e4f3ad033a8fd782c8f2bdd66cfe8536a942f5749c9997d2152240f996f69\": container with ID starting with f41e4f3ad033a8fd782c8f2bdd66cfe8536a942f5749c9997d2152240f996f69 not found: ID does not exist" containerID="f41e4f3ad033a8fd782c8f2bdd66cfe8536a942f5749c9997d2152240f996f69" Mar 12 21:21:41.972707 master-0 kubenswrapper[31456]: I0312 21:21:41.972686 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f41e4f3ad033a8fd782c8f2bdd66cfe8536a942f5749c9997d2152240f996f69"} err="failed to get container status \"f41e4f3ad033a8fd782c8f2bdd66cfe8536a942f5749c9997d2152240f996f69\": rpc error: code = NotFound desc = could not find 
container \"f41e4f3ad033a8fd782c8f2bdd66cfe8536a942f5749c9997d2152240f996f69\": container with ID starting with f41e4f3ad033a8fd782c8f2bdd66cfe8536a942f5749c9997d2152240f996f69 not found: ID does not exist" Mar 12 21:21:41.997425 master-0 kubenswrapper[31456]: I0312 21:21:41.997307 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-8c575f57b-cfn7b"] Mar 12 21:21:42.005793 master-0 kubenswrapper[31456]: I0312 21:21:42.005734 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-8c575f57b-cfn7b"] Mar 12 21:21:43.191492 master-0 kubenswrapper[31456]: I0312 21:21:43.191402 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e78ecfdd-d8f5-4164-8300-05df372d0c8c" path="/var/lib/kubelet/pods/e78ecfdd-d8f5-4164-8300-05df372d0c8c/volumes" Mar 12 21:21:47.547676 master-0 kubenswrapper[31456]: I0312 21:21:47.547578 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-qr8kc" Mar 12 21:21:47.548798 master-0 kubenswrapper[31456]: I0312 21:21:47.548707 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-qr8kc" Mar 12 21:21:47.601147 master-0 kubenswrapper[31456]: I0312 21:21:47.601042 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-qr8kc" Mar 12 21:21:48.050303 master-0 kubenswrapper[31456]: I0312 21:21:48.050232 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-qr8kc" Mar 12 21:21:49.619083 master-0 kubenswrapper[31456]: I0312 21:21:49.619017 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477g58bb"] Mar 12 21:21:49.619860 master-0 kubenswrapper[31456]: E0312 21:21:49.619450 31456 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="e78ecfdd-d8f5-4164-8300-05df372d0c8c" containerName="console" Mar 12 21:21:49.619860 master-0 kubenswrapper[31456]: I0312 21:21:49.619466 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="e78ecfdd-d8f5-4164-8300-05df372d0c8c" containerName="console" Mar 12 21:21:49.619860 master-0 kubenswrapper[31456]: I0312 21:21:49.619691 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="e78ecfdd-d8f5-4164-8300-05df372d0c8c" containerName="console" Mar 12 21:21:49.621165 master-0 kubenswrapper[31456]: I0312 21:21:49.621131 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477g58bb" Mar 12 21:21:49.670290 master-0 kubenswrapper[31456]: I0312 21:21:49.652077 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477g58bb"] Mar 12 21:21:49.720491 master-0 kubenswrapper[31456]: I0312 21:21:49.720425 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/553e6c58-52f2-4bfe-8146-322d2a7b00af-bundle\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477g58bb\" (UID: \"553e6c58-52f2-4bfe-8146-322d2a7b00af\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477g58bb" Mar 12 21:21:49.720718 master-0 kubenswrapper[31456]: I0312 21:21:49.720523 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/553e6c58-52f2-4bfe-8146-322d2a7b00af-util\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477g58bb\" (UID: \"553e6c58-52f2-4bfe-8146-322d2a7b00af\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477g58bb" Mar 12 21:21:49.720718 master-0 
kubenswrapper[31456]: I0312 21:21:49.720652 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs76r\" (UniqueName: \"kubernetes.io/projected/553e6c58-52f2-4bfe-8146-322d2a7b00af-kube-api-access-xs76r\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477g58bb\" (UID: \"553e6c58-52f2-4bfe-8146-322d2a7b00af\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477g58bb" Mar 12 21:21:49.822831 master-0 kubenswrapper[31456]: I0312 21:21:49.822742 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs76r\" (UniqueName: \"kubernetes.io/projected/553e6c58-52f2-4bfe-8146-322d2a7b00af-kube-api-access-xs76r\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477g58bb\" (UID: \"553e6c58-52f2-4bfe-8146-322d2a7b00af\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477g58bb" Mar 12 21:21:49.823067 master-0 kubenswrapper[31456]: I0312 21:21:49.823025 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/553e6c58-52f2-4bfe-8146-322d2a7b00af-bundle\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477g58bb\" (UID: \"553e6c58-52f2-4bfe-8146-322d2a7b00af\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477g58bb" Mar 12 21:21:49.823153 master-0 kubenswrapper[31456]: I0312 21:21:49.823124 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/553e6c58-52f2-4bfe-8146-322d2a7b00af-util\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477g58bb\" (UID: \"553e6c58-52f2-4bfe-8146-322d2a7b00af\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477g58bb" Mar 12 21:21:49.823781 master-0 kubenswrapper[31456]: I0312 21:21:49.823740 31456 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/553e6c58-52f2-4bfe-8146-322d2a7b00af-bundle\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477g58bb\" (UID: \"553e6c58-52f2-4bfe-8146-322d2a7b00af\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477g58bb" Mar 12 21:21:49.824030 master-0 kubenswrapper[31456]: I0312 21:21:49.823973 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/553e6c58-52f2-4bfe-8146-322d2a7b00af-util\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477g58bb\" (UID: \"553e6c58-52f2-4bfe-8146-322d2a7b00af\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477g58bb" Mar 12 21:21:49.848915 master-0 kubenswrapper[31456]: I0312 21:21:49.846864 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs76r\" (UniqueName: \"kubernetes.io/projected/553e6c58-52f2-4bfe-8146-322d2a7b00af-kube-api-access-xs76r\") pod \"f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477g58bb\" (UID: \"553e6c58-52f2-4bfe-8146-322d2a7b00af\") " pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477g58bb" Mar 12 21:21:49.974037 master-0 kubenswrapper[31456]: I0312 21:21:49.973796 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477g58bb" Mar 12 21:21:50.319754 master-0 kubenswrapper[31456]: I0312 21:21:50.319674 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477g58bb"] Mar 12 21:21:51.048970 master-0 kubenswrapper[31456]: I0312 21:21:51.048786 31456 generic.go:334] "Generic (PLEG): container finished" podID="553e6c58-52f2-4bfe-8146-322d2a7b00af" containerID="96aa57b90fdc184c1f687f3a74e5300f9b1e0d703bb4ab30ab151c3f982d4c59" exitCode=0 Mar 12 21:21:51.049866 master-0 kubenswrapper[31456]: I0312 21:21:51.048951 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477g58bb" event={"ID":"553e6c58-52f2-4bfe-8146-322d2a7b00af","Type":"ContainerDied","Data":"96aa57b90fdc184c1f687f3a74e5300f9b1e0d703bb4ab30ab151c3f982d4c59"} Mar 12 21:21:51.049866 master-0 kubenswrapper[31456]: I0312 21:21:51.049033 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477g58bb" event={"ID":"553e6c58-52f2-4bfe-8146-322d2a7b00af","Type":"ContainerStarted","Data":"1f958a7863727e75dc5d8b454d639f19c6ccee9c3b09d27d48dc66479a2fe3c8"} Mar 12 21:21:53.080444 master-0 kubenswrapper[31456]: I0312 21:21:53.080348 31456 generic.go:334] "Generic (PLEG): container finished" podID="553e6c58-52f2-4bfe-8146-322d2a7b00af" containerID="c6601f5ae854e21187b70723c3b62ea07a4aa201e7d75149f324692993fb74db" exitCode=0 Mar 12 21:21:53.080444 master-0 kubenswrapper[31456]: I0312 21:21:53.080405 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477g58bb" event={"ID":"553e6c58-52f2-4bfe-8146-322d2a7b00af","Type":"ContainerDied","Data":"c6601f5ae854e21187b70723c3b62ea07a4aa201e7d75149f324692993fb74db"} Mar 
12 21:21:54.093911 master-0 kubenswrapper[31456]: I0312 21:21:54.093836 31456 generic.go:334] "Generic (PLEG): container finished" podID="553e6c58-52f2-4bfe-8146-322d2a7b00af" containerID="c800e0d9e4d152511b22edbc9cd2ab059ac06e45f4458e692e0412cdf6f1a6a7" exitCode=0 Mar 12 21:21:54.094409 master-0 kubenswrapper[31456]: I0312 21:21:54.093898 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477g58bb" event={"ID":"553e6c58-52f2-4bfe-8146-322d2a7b00af","Type":"ContainerDied","Data":"c800e0d9e4d152511b22edbc9cd2ab059ac06e45f4458e692e0412cdf6f1a6a7"} Mar 12 21:21:55.497632 master-0 kubenswrapper[31456]: I0312 21:21:55.497564 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477g58bb" Mar 12 21:21:55.538181 master-0 kubenswrapper[31456]: I0312 21:21:55.538076 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/553e6c58-52f2-4bfe-8146-322d2a7b00af-util\") pod \"553e6c58-52f2-4bfe-8146-322d2a7b00af\" (UID: \"553e6c58-52f2-4bfe-8146-322d2a7b00af\") " Mar 12 21:21:55.538444 master-0 kubenswrapper[31456]: I0312 21:21:55.538278 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/553e6c58-52f2-4bfe-8146-322d2a7b00af-bundle\") pod \"553e6c58-52f2-4bfe-8146-322d2a7b00af\" (UID: \"553e6c58-52f2-4bfe-8146-322d2a7b00af\") " Mar 12 21:21:55.538444 master-0 kubenswrapper[31456]: I0312 21:21:55.538401 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xs76r\" (UniqueName: \"kubernetes.io/projected/553e6c58-52f2-4bfe-8146-322d2a7b00af-kube-api-access-xs76r\") pod \"553e6c58-52f2-4bfe-8146-322d2a7b00af\" (UID: \"553e6c58-52f2-4bfe-8146-322d2a7b00af\") " Mar 12 21:21:55.542974 master-0 
kubenswrapper[31456]: I0312 21:21:55.542714 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/553e6c58-52f2-4bfe-8146-322d2a7b00af-bundle" (OuterVolumeSpecName: "bundle") pod "553e6c58-52f2-4bfe-8146-322d2a7b00af" (UID: "553e6c58-52f2-4bfe-8146-322d2a7b00af"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 21:21:55.546280 master-0 kubenswrapper[31456]: I0312 21:21:55.546147 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/553e6c58-52f2-4bfe-8146-322d2a7b00af-kube-api-access-xs76r" (OuterVolumeSpecName: "kube-api-access-xs76r") pod "553e6c58-52f2-4bfe-8146-322d2a7b00af" (UID: "553e6c58-52f2-4bfe-8146-322d2a7b00af"). InnerVolumeSpecName "kube-api-access-xs76r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:21:55.555140 master-0 kubenswrapper[31456]: I0312 21:21:55.555030 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/553e6c58-52f2-4bfe-8146-322d2a7b00af-util" (OuterVolumeSpecName: "util") pod "553e6c58-52f2-4bfe-8146-322d2a7b00af" (UID: "553e6c58-52f2-4bfe-8146-322d2a7b00af"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 21:21:55.641342 master-0 kubenswrapper[31456]: I0312 21:21:55.641269 31456 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/553e6c58-52f2-4bfe-8146-322d2a7b00af-util\") on node \"master-0\" DevicePath \"\"" Mar 12 21:21:55.641342 master-0 kubenswrapper[31456]: I0312 21:21:55.641324 31456 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/553e6c58-52f2-4bfe-8146-322d2a7b00af-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:21:55.641342 master-0 kubenswrapper[31456]: I0312 21:21:55.641339 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xs76r\" (UniqueName: \"kubernetes.io/projected/553e6c58-52f2-4bfe-8146-322d2a7b00af-kube-api-access-xs76r\") on node \"master-0\" DevicePath \"\"" Mar 12 21:21:56.147825 master-0 kubenswrapper[31456]: I0312 21:21:56.147690 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477g58bb" event={"ID":"553e6c58-52f2-4bfe-8146-322d2a7b00af","Type":"ContainerDied","Data":"1f958a7863727e75dc5d8b454d639f19c6ccee9c3b09d27d48dc66479a2fe3c8"} Mar 12 21:21:56.148174 master-0 kubenswrapper[31456]: I0312 21:21:56.147782 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f958a7863727e75dc5d8b454d639f19c6ccee9c3b09d27d48dc66479a2fe3c8" Mar 12 21:21:56.148174 master-0 kubenswrapper[31456]: I0312 21:21:56.147875 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f9f18d30af743f52483ac2b056c423e2f043de5970b22bfcfee7015477g58bb" Mar 12 21:22:02.391700 master-0 kubenswrapper[31456]: I0312 21:22:02.391621 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-65b9994cf8-2wrpn"] Mar 12 21:22:02.392536 master-0 kubenswrapper[31456]: E0312 21:22:02.392099 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="553e6c58-52f2-4bfe-8146-322d2a7b00af" containerName="extract" Mar 12 21:22:02.392536 master-0 kubenswrapper[31456]: I0312 21:22:02.392118 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="553e6c58-52f2-4bfe-8146-322d2a7b00af" containerName="extract" Mar 12 21:22:02.392536 master-0 kubenswrapper[31456]: E0312 21:22:02.392149 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="553e6c58-52f2-4bfe-8146-322d2a7b00af" containerName="pull" Mar 12 21:22:02.392536 master-0 kubenswrapper[31456]: I0312 21:22:02.392157 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="553e6c58-52f2-4bfe-8146-322d2a7b00af" containerName="pull" Mar 12 21:22:02.392536 master-0 kubenswrapper[31456]: E0312 21:22:02.392171 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="553e6c58-52f2-4bfe-8146-322d2a7b00af" containerName="util" Mar 12 21:22:02.392536 master-0 kubenswrapper[31456]: I0312 21:22:02.392181 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="553e6c58-52f2-4bfe-8146-322d2a7b00af" containerName="util" Mar 12 21:22:02.392536 master-0 kubenswrapper[31456]: I0312 21:22:02.392380 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="553e6c58-52f2-4bfe-8146-322d2a7b00af" containerName="extract" Mar 12 21:22:02.393135 master-0 kubenswrapper[31456]: I0312 21:22:02.393103 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-2wrpn" Mar 12 21:22:02.429724 master-0 kubenswrapper[31456]: I0312 21:22:02.429658 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-65b9994cf8-2wrpn"] Mar 12 21:22:02.479897 master-0 kubenswrapper[31456]: I0312 21:22:02.479804 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l8ms\" (UniqueName: \"kubernetes.io/projected/53186a18-5956-4a12-95f4-3ef164196c99-kube-api-access-5l8ms\") pod \"openstack-operator-controller-init-65b9994cf8-2wrpn\" (UID: \"53186a18-5956-4a12-95f4-3ef164196c99\") " pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-2wrpn" Mar 12 21:22:02.581789 master-0 kubenswrapper[31456]: I0312 21:22:02.581718 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l8ms\" (UniqueName: \"kubernetes.io/projected/53186a18-5956-4a12-95f4-3ef164196c99-kube-api-access-5l8ms\") pod \"openstack-operator-controller-init-65b9994cf8-2wrpn\" (UID: \"53186a18-5956-4a12-95f4-3ef164196c99\") " pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-2wrpn" Mar 12 21:22:02.601373 master-0 kubenswrapper[31456]: I0312 21:22:02.601317 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l8ms\" (UniqueName: \"kubernetes.io/projected/53186a18-5956-4a12-95f4-3ef164196c99-kube-api-access-5l8ms\") pod \"openstack-operator-controller-init-65b9994cf8-2wrpn\" (UID: \"53186a18-5956-4a12-95f4-3ef164196c99\") " pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-2wrpn" Mar 12 21:22:02.709238 master-0 kubenswrapper[31456]: I0312 21:22:02.709128 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-2wrpn" Mar 12 21:22:03.224348 master-0 kubenswrapper[31456]: I0312 21:22:03.224302 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-65b9994cf8-2wrpn"] Mar 12 21:22:03.238913 master-0 kubenswrapper[31456]: I0312 21:22:03.238793 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-2wrpn" event={"ID":"53186a18-5956-4a12-95f4-3ef164196c99","Type":"ContainerStarted","Data":"d9cdd0f1a2ca33f100cbaf94ef7dddc7033118e08bead914c579dbf150009e1a"} Mar 12 21:22:09.359957 master-0 kubenswrapper[31456]: I0312 21:22:09.359867 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-2wrpn" event={"ID":"53186a18-5956-4a12-95f4-3ef164196c99","Type":"ContainerStarted","Data":"d2ca201a22ad2bc6b4e1bbb9ce38b63cc9f87de9e865743358e04b5a423b1137"} Mar 12 21:22:09.360489 master-0 kubenswrapper[31456]: I0312 21:22:09.360082 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-2wrpn" Mar 12 21:22:09.401175 master-0 kubenswrapper[31456]: I0312 21:22:09.401090 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-2wrpn" podStartSLOduration=2.336767815 podStartE2EDuration="7.401074287s" podCreationTimestamp="2026-03-12 21:22:02 +0000 UTC" firstStartedPulling="2026-03-12 21:22:03.217258496 +0000 UTC m=+784.291863824" lastFinishedPulling="2026-03-12 21:22:08.281564978 +0000 UTC m=+789.356170296" observedRunningTime="2026-03-12 21:22:09.394801765 +0000 UTC m=+790.469407103" watchObservedRunningTime="2026-03-12 21:22:09.401074287 +0000 UTC m=+790.475679615" Mar 12 21:22:22.713953 master-0 kubenswrapper[31456]: I0312 
21:22:22.713798 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-65b9994cf8-2wrpn"
Mar 12 21:22:44.318052 master-0 kubenswrapper[31456]: I0312 21:22:44.317968 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-4q2jn"]
Mar 12 21:22:44.319141 master-0 kubenswrapper[31456]: I0312 21:22:44.319106 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-4q2jn"
Mar 12 21:22:44.325682 master-0 kubenswrapper[31456]: I0312 21:22:44.325630 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-d2l7t"]
Mar 12 21:22:44.326892 master-0 kubenswrapper[31456]: I0312 21:22:44.326767 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-d2l7t"
Mar 12 21:22:44.351683 master-0 kubenswrapper[31456]: I0312 21:22:44.343933 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-4q2jn"]
Mar 12 21:22:44.368399 master-0 kubenswrapper[31456]: I0312 21:22:44.368353 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-d2l7t"]
Mar 12 21:22:44.384736 master-0 kubenswrapper[31456]: I0312 21:22:44.377008 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-l57sv"]
Mar 12 21:22:44.384736 master-0 kubenswrapper[31456]: I0312 21:22:44.379624 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fmvt\" (UniqueName: \"kubernetes.io/projected/db5b5da6-7eaa-4e23-ad31-7e977fd52810-kube-api-access-4fmvt\") pod \"barbican-operator-controller-manager-677bd678f7-4q2jn\" (UID: \"db5b5da6-7eaa-4e23-ad31-7e977fd52810\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-4q2jn"
Mar 12 21:22:44.384736 master-0 kubenswrapper[31456]: I0312 21:22:44.379701 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz569\" (UniqueName: \"kubernetes.io/projected/072c42ef-c704-4430-ae96-ba686e7a9e48-kube-api-access-nz569\") pod \"cinder-operator-controller-manager-984cd4dcf-d2l7t\" (UID: \"072c42ef-c704-4430-ae96-ba686e7a9e48\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-d2l7t"
Mar 12 21:22:44.384736 master-0 kubenswrapper[31456]: I0312 21:22:44.380236 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-l57sv"
Mar 12 21:22:44.393920 master-0 kubenswrapper[31456]: I0312 21:22:44.393770 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-l57sv"]
Mar 12 21:22:44.423349 master-0 kubenswrapper[31456]: I0312 21:22:44.415920 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-9qk2q"]
Mar 12 21:22:44.423349 master-0 kubenswrapper[31456]: I0312 21:22:44.417220 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-9qk2q"
Mar 12 21:22:44.430609 master-0 kubenswrapper[31456]: I0312 21:22:44.428388 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-9qk2q"]
Mar 12 21:22:44.458121 master-0 kubenswrapper[31456]: I0312 21:22:44.457484 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-dqp55"]
Mar 12 21:22:44.458121 master-0 kubenswrapper[31456]: I0312 21:22:44.458658 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-dqp55"
Mar 12 21:22:44.479770 master-0 kubenswrapper[31456]: I0312 21:22:44.474237 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-xc2vn"]
Mar 12 21:22:44.479770 master-0 kubenswrapper[31456]: I0312 21:22:44.475406 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-xc2vn"
Mar 12 21:22:44.482018 master-0 kubenswrapper[31456]: I0312 21:22:44.481982 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fmvt\" (UniqueName: \"kubernetes.io/projected/db5b5da6-7eaa-4e23-ad31-7e977fd52810-kube-api-access-4fmvt\") pod \"barbican-operator-controller-manager-677bd678f7-4q2jn\" (UID: \"db5b5da6-7eaa-4e23-ad31-7e977fd52810\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-4q2jn"
Mar 12 21:22:44.482153 master-0 kubenswrapper[31456]: I0312 21:22:44.482136 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5brwx\" (UniqueName: \"kubernetes.io/projected/6d01e27b-a1a1-4afb-a75b-4f7063e5c0d3-kube-api-access-5brwx\") pod \"glance-operator-controller-manager-5964f64c48-9qk2q\" (UID: \"6d01e27b-a1a1-4afb-a75b-4f7063e5c0d3\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-9qk2q"
Mar 12 21:22:44.482272 master-0 kubenswrapper[31456]: I0312 21:22:44.482257 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz569\" (UniqueName: \"kubernetes.io/projected/072c42ef-c704-4430-ae96-ba686e7a9e48-kube-api-access-nz569\") pod \"cinder-operator-controller-manager-984cd4dcf-d2l7t\" (UID: \"072c42ef-c704-4430-ae96-ba686e7a9e48\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-d2l7t"
Mar 12 21:22:44.482387 master-0 kubenswrapper[31456]: I0312 21:22:44.482373 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq9b2\" (UniqueName: \"kubernetes.io/projected/b981b7c7-773d-4c60-a591-3e6fbb6fdacd-kube-api-access-dq9b2\") pod \"designate-operator-controller-manager-66d56f6ff4-l57sv\" (UID: \"b981b7c7-773d-4c60-a591-3e6fbb6fdacd\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-l57sv"
Mar 12 21:22:44.482485 master-0 kubenswrapper[31456]: I0312 21:22:44.482473 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5tqx\" (UniqueName: \"kubernetes.io/projected/f083454e-bbf9-4d06-b277-0303cfe15c31-kube-api-access-s5tqx\") pod \"heat-operator-controller-manager-77b6666d85-dqp55\" (UID: \"f083454e-bbf9-4d06-b277-0303cfe15c31\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-dqp55"
Mar 12 21:22:44.487758 master-0 kubenswrapper[31456]: I0312 21:22:44.487501 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-dqp55"]
Mar 12 21:22:44.506329 master-0 kubenswrapper[31456]: I0312 21:22:44.499792 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-xc2vn"]
Mar 12 21:22:44.526182 master-0 kubenswrapper[31456]: I0312 21:22:44.516967 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz569\" (UniqueName: \"kubernetes.io/projected/072c42ef-c704-4430-ae96-ba686e7a9e48-kube-api-access-nz569\") pod \"cinder-operator-controller-manager-984cd4dcf-d2l7t\" (UID: \"072c42ef-c704-4430-ae96-ba686e7a9e48\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-d2l7t"
Mar 12 21:22:44.526182 master-0 kubenswrapper[31456]: I0312 21:22:44.521279 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-b8c8d7cc8-d9mhv"]
Mar 12 21:22:44.526182 master-0 kubenswrapper[31456]: I0312 21:22:44.523442 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-d9mhv"
Mar 12 21:22:44.526870 master-0 kubenswrapper[31456]: I0312 21:22:44.526826 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Mar 12 21:22:44.534479 master-0 kubenswrapper[31456]: I0312 21:22:44.534417 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-b8c8d7cc8-d9mhv"]
Mar 12 21:22:44.534652 master-0 kubenswrapper[31456]: I0312 21:22:44.534615 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fmvt\" (UniqueName: \"kubernetes.io/projected/db5b5da6-7eaa-4e23-ad31-7e977fd52810-kube-api-access-4fmvt\") pod \"barbican-operator-controller-manager-677bd678f7-4q2jn\" (UID: \"db5b5da6-7eaa-4e23-ad31-7e977fd52810\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-4q2jn"
Mar 12 21:22:44.558974 master-0 kubenswrapper[31456]: I0312 21:22:44.558905 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-29sh2"]
Mar 12 21:22:44.560117 master-0 kubenswrapper[31456]: I0312 21:22:44.560094 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-29sh2"
Mar 12 21:22:44.578501 master-0 kubenswrapper[31456]: I0312 21:22:44.574540 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-29sh2"]
Mar 12 21:22:44.596837 master-0 kubenswrapper[31456]: I0312 21:22:44.586292 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99rd5\" (UniqueName: \"kubernetes.io/projected/392ee3fe-88fb-47f2-834a-115559661320-kube-api-access-99rd5\") pod \"infra-operator-controller-manager-b8c8d7cc8-d9mhv\" (UID: \"392ee3fe-88fb-47f2-834a-115559661320\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-d9mhv"
Mar 12 21:22:44.596837 master-0 kubenswrapper[31456]: I0312 21:22:44.586374 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nqrk\" (UniqueName: \"kubernetes.io/projected/fee2ceac-7ca8-416a-a8aa-e80cc6b37755-kube-api-access-7nqrk\") pod \"horizon-operator-controller-manager-6d9d6b584d-xc2vn\" (UID: \"fee2ceac-7ca8-416a-a8aa-e80cc6b37755\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-xc2vn"
Mar 12 21:22:44.596837 master-0 kubenswrapper[31456]: I0312 21:22:44.586421 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d79w\" (UniqueName: \"kubernetes.io/projected/82f17f4c-c741-4cc8-8b68-c26ca155288d-kube-api-access-9d79w\") pod \"ironic-operator-controller-manager-6bbb499bbc-29sh2\" (UID: \"82f17f4c-c741-4cc8-8b68-c26ca155288d\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-29sh2"
Mar 12 21:22:44.596837 master-0 kubenswrapper[31456]: I0312 21:22:44.586443 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/392ee3fe-88fb-47f2-834a-115559661320-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-d9mhv\" (UID: \"392ee3fe-88fb-47f2-834a-115559661320\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-d9mhv"
Mar 12 21:22:44.596837 master-0 kubenswrapper[31456]: I0312 21:22:44.586475 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5brwx\" (UniqueName: \"kubernetes.io/projected/6d01e27b-a1a1-4afb-a75b-4f7063e5c0d3-kube-api-access-5brwx\") pod \"glance-operator-controller-manager-5964f64c48-9qk2q\" (UID: \"6d01e27b-a1a1-4afb-a75b-4f7063e5c0d3\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-9qk2q"
Mar 12 21:22:44.596837 master-0 kubenswrapper[31456]: I0312 21:22:44.586552 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq9b2\" (UniqueName: \"kubernetes.io/projected/b981b7c7-773d-4c60-a591-3e6fbb6fdacd-kube-api-access-dq9b2\") pod \"designate-operator-controller-manager-66d56f6ff4-l57sv\" (UID: \"b981b7c7-773d-4c60-a591-3e6fbb6fdacd\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-l57sv"
Mar 12 21:22:44.596837 master-0 kubenswrapper[31456]: I0312 21:22:44.586572 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5tqx\" (UniqueName: \"kubernetes.io/projected/f083454e-bbf9-4d06-b277-0303cfe15c31-kube-api-access-s5tqx\") pod \"heat-operator-controller-manager-77b6666d85-dqp55\" (UID: \"f083454e-bbf9-4d06-b277-0303cfe15c31\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-dqp55"
Mar 12 21:22:44.614378 master-0 kubenswrapper[31456]: I0312 21:22:44.612471 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-mp579"]
Mar 12 21:22:44.623474 master-0 kubenswrapper[31456]: I0312 21:22:44.619402 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-mp579"
Mar 12 21:22:44.635310 master-0 kubenswrapper[31456]: I0312 21:22:44.631595 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq9b2\" (UniqueName: \"kubernetes.io/projected/b981b7c7-773d-4c60-a591-3e6fbb6fdacd-kube-api-access-dq9b2\") pod \"designate-operator-controller-manager-66d56f6ff4-l57sv\" (UID: \"b981b7c7-773d-4c60-a591-3e6fbb6fdacd\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-l57sv"
Mar 12 21:22:44.641568 master-0 kubenswrapper[31456]: I0312 21:22:44.641410 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5tqx\" (UniqueName: \"kubernetes.io/projected/f083454e-bbf9-4d06-b277-0303cfe15c31-kube-api-access-s5tqx\") pod \"heat-operator-controller-manager-77b6666d85-dqp55\" (UID: \"f083454e-bbf9-4d06-b277-0303cfe15c31\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-dqp55"
Mar 12 21:22:44.641935 master-0 kubenswrapper[31456]: I0312 21:22:44.641893 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5brwx\" (UniqueName: \"kubernetes.io/projected/6d01e27b-a1a1-4afb-a75b-4f7063e5c0d3-kube-api-access-5brwx\") pod \"glance-operator-controller-manager-5964f64c48-9qk2q\" (UID: \"6d01e27b-a1a1-4afb-a75b-4f7063e5c0d3\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-9qk2q"
Mar 12 21:22:44.647186 master-0 kubenswrapper[31456]: I0312 21:22:44.646944 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-mp579"]
Mar 12 21:22:44.665227 master-0 kubenswrapper[31456]: I0312 21:22:44.665147 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-4q2jn"
Mar 12 21:22:44.682029 master-0 kubenswrapper[31456]: I0312 21:22:44.680187 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-d2l7t"
Mar 12 21:22:44.694492 master-0 kubenswrapper[31456]: I0312 21:22:44.693533 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nqrk\" (UniqueName: \"kubernetes.io/projected/fee2ceac-7ca8-416a-a8aa-e80cc6b37755-kube-api-access-7nqrk\") pod \"horizon-operator-controller-manager-6d9d6b584d-xc2vn\" (UID: \"fee2ceac-7ca8-416a-a8aa-e80cc6b37755\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-xc2vn"
Mar 12 21:22:44.694492 master-0 kubenswrapper[31456]: I0312 21:22:44.693601 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d79w\" (UniqueName: \"kubernetes.io/projected/82f17f4c-c741-4cc8-8b68-c26ca155288d-kube-api-access-9d79w\") pod \"ironic-operator-controller-manager-6bbb499bbc-29sh2\" (UID: \"82f17f4c-c741-4cc8-8b68-c26ca155288d\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-29sh2"
Mar 12 21:22:44.694492 master-0 kubenswrapper[31456]: I0312 21:22:44.693628 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/392ee3fe-88fb-47f2-834a-115559661320-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-d9mhv\" (UID: \"392ee3fe-88fb-47f2-834a-115559661320\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-d9mhv"
Mar 12 21:22:44.694492 master-0 kubenswrapper[31456]: I0312 21:22:44.693721 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrjs7\" (UniqueName: \"kubernetes.io/projected/c0d101d2-bf56-4410-8499-987107f3bc9f-kube-api-access-zrjs7\") pod \"keystone-operator-controller-manager-684f77d66d-mp579\" (UID: \"c0d101d2-bf56-4410-8499-987107f3bc9f\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-mp579"
Mar 12 21:22:44.694492 master-0 kubenswrapper[31456]: I0312 21:22:44.693769 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99rd5\" (UniqueName: \"kubernetes.io/projected/392ee3fe-88fb-47f2-834a-115559661320-kube-api-access-99rd5\") pod \"infra-operator-controller-manager-b8c8d7cc8-d9mhv\" (UID: \"392ee3fe-88fb-47f2-834a-115559661320\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-d9mhv"
Mar 12 21:22:44.694492 master-0 kubenswrapper[31456]: E0312 21:22:44.694057 31456 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Mar 12 21:22:44.694492 master-0 kubenswrapper[31456]: E0312 21:22:44.694142 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/392ee3fe-88fb-47f2-834a-115559661320-cert podName:392ee3fe-88fb-47f2-834a-115559661320 nodeName:}" failed. No retries permitted until 2026-03-12 21:22:45.194121535 +0000 UTC m=+826.268726863 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/392ee3fe-88fb-47f2-834a-115559661320-cert") pod "infra-operator-controller-manager-b8c8d7cc8-d9mhv" (UID: "392ee3fe-88fb-47f2-834a-115559661320") : secret "infra-operator-webhook-server-cert" not found
Mar 12 21:22:44.726564 master-0 kubenswrapper[31456]: I0312 21:22:44.722754 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99rd5\" (UniqueName: \"kubernetes.io/projected/392ee3fe-88fb-47f2-834a-115559661320-kube-api-access-99rd5\") pod \"infra-operator-controller-manager-b8c8d7cc8-d9mhv\" (UID: \"392ee3fe-88fb-47f2-834a-115559661320\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-d9mhv"
Mar 12 21:22:44.727477 master-0 kubenswrapper[31456]: I0312 21:22:44.727213 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d79w\" (UniqueName: \"kubernetes.io/projected/82f17f4c-c741-4cc8-8b68-c26ca155288d-kube-api-access-9d79w\") pod \"ironic-operator-controller-manager-6bbb499bbc-29sh2\" (UID: \"82f17f4c-c741-4cc8-8b68-c26ca155288d\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-29sh2"
Mar 12 21:22:44.727855 master-0 kubenswrapper[31456]: I0312 21:22:44.727720 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-dqp55"
Mar 12 21:22:44.737857 master-0 kubenswrapper[31456]: I0312 21:22:44.734002 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nqrk\" (UniqueName: \"kubernetes.io/projected/fee2ceac-7ca8-416a-a8aa-e80cc6b37755-kube-api-access-7nqrk\") pod \"horizon-operator-controller-manager-6d9d6b584d-xc2vn\" (UID: \"fee2ceac-7ca8-416a-a8aa-e80cc6b37755\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-xc2vn"
Mar 12 21:22:44.739482 master-0 kubenswrapper[31456]: I0312 21:22:44.739405 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-qn4x5"]
Mar 12 21:22:44.745745 master-0 kubenswrapper[31456]: I0312 21:22:44.745688 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-qn4x5"
Mar 12 21:22:44.761536 master-0 kubenswrapper[31456]: I0312 21:22:44.749696 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-xc2vn"
Mar 12 21:22:44.761536 master-0 kubenswrapper[31456]: I0312 21:22:44.758402 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-l57sv"
Mar 12 21:22:44.791940 master-0 kubenswrapper[31456]: I0312 21:22:44.791897 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-29sh2"
Mar 12 21:22:44.804841 master-0 kubenswrapper[31456]: I0312 21:22:44.797533 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48t5k\" (UniqueName: \"kubernetes.io/projected/171f4970-bb03-4ac6-86b1-47cf6639cccd-kube-api-access-48t5k\") pod \"manila-operator-controller-manager-68f45f9d9f-qn4x5\" (UID: \"171f4970-bb03-4ac6-86b1-47cf6639cccd\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-qn4x5"
Mar 12 21:22:44.804841 master-0 kubenswrapper[31456]: I0312 21:22:44.799425 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrjs7\" (UniqueName: \"kubernetes.io/projected/c0d101d2-bf56-4410-8499-987107f3bc9f-kube-api-access-zrjs7\") pod \"keystone-operator-controller-manager-684f77d66d-mp579\" (UID: \"c0d101d2-bf56-4410-8499-987107f3bc9f\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-mp579"
Mar 12 21:22:44.835118 master-0 kubenswrapper[31456]: I0312 21:22:44.833172 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-sjbws"]
Mar 12 21:22:44.835118 master-0 kubenswrapper[31456]: I0312 21:22:44.834532 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-sjbws"
Mar 12 21:22:44.841471 master-0 kubenswrapper[31456]: I0312 21:22:44.841428 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrjs7\" (UniqueName: \"kubernetes.io/projected/c0d101d2-bf56-4410-8499-987107f3bc9f-kube-api-access-zrjs7\") pod \"keystone-operator-controller-manager-684f77d66d-mp579\" (UID: \"c0d101d2-bf56-4410-8499-987107f3bc9f\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-mp579"
Mar 12 21:22:44.879138 master-0 kubenswrapper[31456]: I0312 21:22:44.876373 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-qn4x5"]
Mar 12 21:22:44.903139 master-0 kubenswrapper[31456]: I0312 21:22:44.901691 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8q9f\" (UniqueName: \"kubernetes.io/projected/be6f68c8-9c6c-4fa3-b1b5-2205851ae9d4-kube-api-access-n8q9f\") pod \"mariadb-operator-controller-manager-658d4cdd5-sjbws\" (UID: \"be6f68c8-9c6c-4fa3-b1b5-2205851ae9d4\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-sjbws"
Mar 12 21:22:44.903139 master-0 kubenswrapper[31456]: I0312 21:22:44.901929 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48t5k\" (UniqueName: \"kubernetes.io/projected/171f4970-bb03-4ac6-86b1-47cf6639cccd-kube-api-access-48t5k\") pod \"manila-operator-controller-manager-68f45f9d9f-qn4x5\" (UID: \"171f4970-bb03-4ac6-86b1-47cf6639cccd\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-qn4x5"
Mar 12 21:22:44.924854 master-0 kubenswrapper[31456]: I0312 21:22:44.924328 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-9qk2q"
Mar 12 21:22:44.938626 master-0 kubenswrapper[31456]: I0312 21:22:44.934012 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-sjbws"]
Mar 12 21:22:44.941794 master-0 kubenswrapper[31456]: I0312 21:22:44.941624 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48t5k\" (UniqueName: \"kubernetes.io/projected/171f4970-bb03-4ac6-86b1-47cf6639cccd-kube-api-access-48t5k\") pod \"manila-operator-controller-manager-68f45f9d9f-qn4x5\" (UID: \"171f4970-bb03-4ac6-86b1-47cf6639cccd\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-qn4x5"
Mar 12 21:22:45.043107 master-0 kubenswrapper[31456]: I0312 21:22:45.042863 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8q9f\" (UniqueName: \"kubernetes.io/projected/be6f68c8-9c6c-4fa3-b1b5-2205851ae9d4-kube-api-access-n8q9f\") pod \"mariadb-operator-controller-manager-658d4cdd5-sjbws\" (UID: \"be6f68c8-9c6c-4fa3-b1b5-2205851ae9d4\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-sjbws"
Mar 12 21:22:45.069076 master-0 kubenswrapper[31456]: I0312 21:22:45.067837 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-l9rhw"]
Mar 12 21:22:45.073320 master-0 kubenswrapper[31456]: I0312 21:22:45.071498 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-l9rhw"
Mar 12 21:22:45.095370 master-0 kubenswrapper[31456]: I0312 21:22:45.093696 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-l9rhw"]
Mar 12 21:22:45.102745 master-0 kubenswrapper[31456]: I0312 21:22:45.099920 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8q9f\" (UniqueName: \"kubernetes.io/projected/be6f68c8-9c6c-4fa3-b1b5-2205851ae9d4-kube-api-access-n8q9f\") pod \"mariadb-operator-controller-manager-658d4cdd5-sjbws\" (UID: \"be6f68c8-9c6c-4fa3-b1b5-2205851ae9d4\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-sjbws"
Mar 12 21:22:45.116071 master-0 kubenswrapper[31456]: I0312 21:22:45.112930 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-569cc54c5-97lcz"]
Mar 12 21:22:45.116071 master-0 kubenswrapper[31456]: I0312 21:22:45.114259 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-mp579"
Mar 12 21:22:45.116071 master-0 kubenswrapper[31456]: I0312 21:22:45.114682 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-97lcz"
Mar 12 21:22:45.118212 master-0 kubenswrapper[31456]: I0312 21:22:45.118185 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-pjrnx"]
Mar 12 21:22:45.126127 master-0 kubenswrapper[31456]: I0312 21:22:45.124914 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-pjrnx"
Mar 12 21:22:45.142915 master-0 kubenswrapper[31456]: I0312 21:22:45.136083 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-569cc54c5-97lcz"]
Mar 12 21:22:45.142915 master-0 kubenswrapper[31456]: I0312 21:22:45.140214 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-qn4x5"
Mar 12 21:22:45.147196 master-0 kubenswrapper[31456]: I0312 21:22:45.147141 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5wtl\" (UniqueName: \"kubernetes.io/projected/772ae1ba-3abe-49d4-ade9-b0aac087acf2-kube-api-access-f5wtl\") pod \"octavia-operator-controller-manager-5f4f55cb5c-pjrnx\" (UID: \"772ae1ba-3abe-49d4-ade9-b0aac087acf2\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-pjrnx"
Mar 12 21:22:45.147394 master-0 kubenswrapper[31456]: I0312 21:22:45.147368 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f485\" (UniqueName: \"kubernetes.io/projected/10dbd20e-560a-4ded-8caa-c72c8c11d865-kube-api-access-5f485\") pod \"neutron-operator-controller-manager-776c5696bf-l9rhw\" (UID: \"10dbd20e-560a-4ded-8caa-c72c8c11d865\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-l9rhw"
Mar 12 21:22:45.147437 master-0 kubenswrapper[31456]: I0312 21:22:45.147400 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm54q\" (UniqueName: \"kubernetes.io/projected/00bedb2f-42ce-446e-84ce-0511132bc5bd-kube-api-access-tm54q\") pod \"nova-operator-controller-manager-569cc54c5-97lcz\" (UID: \"00bedb2f-42ce-446e-84ce-0511132bc5bd\") " pod="openstack-operators/nova-operator-controller-manager-569cc54c5-97lcz"
Mar 12 21:22:45.213236 master-0 kubenswrapper[31456]: I0312 21:22:45.212303 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-sjbws"
Mar 12 21:22:45.261264 master-0 kubenswrapper[31456]: I0312 21:22:45.247037 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-pjrnx"]
Mar 12 21:22:45.261264 master-0 kubenswrapper[31456]: I0312 21:22:45.247094 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7"]
Mar 12 21:22:45.261264 master-0 kubenswrapper[31456]: I0312 21:22:45.250611 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-w6dxn"]
Mar 12 21:22:45.261264 master-0 kubenswrapper[31456]: I0312 21:22:45.251640 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-w6dxn"
Mar 12 21:22:45.261264 master-0 kubenswrapper[31456]: I0312 21:22:45.251845 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5wtl\" (UniqueName: \"kubernetes.io/projected/772ae1ba-3abe-49d4-ade9-b0aac087acf2-kube-api-access-f5wtl\") pod \"octavia-operator-controller-manager-5f4f55cb5c-pjrnx\" (UID: \"772ae1ba-3abe-49d4-ade9-b0aac087acf2\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-pjrnx"
Mar 12 21:22:45.261264 master-0 kubenswrapper[31456]: I0312 21:22:45.252106 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/392ee3fe-88fb-47f2-834a-115559661320-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-d9mhv\" (UID: \"392ee3fe-88fb-47f2-834a-115559661320\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-d9mhv"
Mar 12 21:22:45.261264 master-0 kubenswrapper[31456]: I0312 21:22:45.252135 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f485\" (UniqueName: \"kubernetes.io/projected/10dbd20e-560a-4ded-8caa-c72c8c11d865-kube-api-access-5f485\") pod \"neutron-operator-controller-manager-776c5696bf-l9rhw\" (UID: \"10dbd20e-560a-4ded-8caa-c72c8c11d865\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-l9rhw"
Mar 12 21:22:45.261264 master-0 kubenswrapper[31456]: I0312 21:22:45.252163 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm54q\" (UniqueName: \"kubernetes.io/projected/00bedb2f-42ce-446e-84ce-0511132bc5bd-kube-api-access-tm54q\") pod \"nova-operator-controller-manager-569cc54c5-97lcz\" (UID: \"00bedb2f-42ce-446e-84ce-0511132bc5bd\") " pod="openstack-operators/nova-operator-controller-manager-569cc54c5-97lcz"
Mar 12 21:22:45.261264 master-0 kubenswrapper[31456]: I0312 21:22:45.252199 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7"
Mar 12 21:22:45.261264 master-0 kubenswrapper[31456]: E0312 21:22:45.252734 31456 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Mar 12 21:22:45.265385 master-0 kubenswrapper[31456]: E0312 21:22:45.252837 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/392ee3fe-88fb-47f2-834a-115559661320-cert podName:392ee3fe-88fb-47f2-834a-115559661320 nodeName:}" failed. No retries permitted until 2026-03-12 21:22:46.252789303 +0000 UTC m=+827.327394631 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/392ee3fe-88fb-47f2-834a-115559661320-cert") pod "infra-operator-controller-manager-b8c8d7cc8-d9mhv" (UID: "392ee3fe-88fb-47f2-834a-115559661320") : secret "infra-operator-webhook-server-cert" not found
Mar 12 21:22:45.265473 master-0 kubenswrapper[31456]: I0312 21:22:45.263098 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Mar 12 21:22:45.297970 master-0 kubenswrapper[31456]: I0312 21:22:45.297115 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7"]
Mar 12 21:22:45.306313 master-0 kubenswrapper[31456]: I0312 21:22:45.304426 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm54q\" (UniqueName: \"kubernetes.io/projected/00bedb2f-42ce-446e-84ce-0511132bc5bd-kube-api-access-tm54q\") pod \"nova-operator-controller-manager-569cc54c5-97lcz\" (UID: \"00bedb2f-42ce-446e-84ce-0511132bc5bd\") " pod="openstack-operators/nova-operator-controller-manager-569cc54c5-97lcz"
Mar 12 21:22:45.314887 master-0 kubenswrapper[31456]: I0312 21:22:45.311595 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f485\" (UniqueName: \"kubernetes.io/projected/10dbd20e-560a-4ded-8caa-c72c8c11d865-kube-api-access-5f485\") pod \"neutron-operator-controller-manager-776c5696bf-l9rhw\" (UID: \"10dbd20e-560a-4ded-8caa-c72c8c11d865\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-l9rhw"
Mar 12 21:22:45.314887 master-0 kubenswrapper[31456]: I0312 21:22:45.312292 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5wtl\" (UniqueName: \"kubernetes.io/projected/772ae1ba-3abe-49d4-ade9-b0aac087acf2-kube-api-access-f5wtl\") pod \"octavia-operator-controller-manager-5f4f55cb5c-pjrnx\" (UID: \"772ae1ba-3abe-49d4-ade9-b0aac087acf2\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-pjrnx"
Mar 12 21:22:45.333366 master-0 kubenswrapper[31456]: I0312 21:22:45.331170 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-w6dxn"]
Mar 12 21:22:45.354722 master-0 kubenswrapper[31456]: I0312 21:22:45.354173 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bg2f\" (UniqueName: \"kubernetes.io/projected/958532da-0a93-4ca7-8f90-ec711c5e2424-kube-api-access-8bg2f\") pod \"ovn-operator-controller-manager-bbc5b68f9-w6dxn\" (UID: \"958532da-0a93-4ca7-8f90-ec711c5e2424\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-w6dxn"
Mar 12 21:22:45.354722 master-0 kubenswrapper[31456]: I0312 21:22:45.354240 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf444169-4293-48aa-ac84-6c38836cd316-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7\" (UID: \"bf444169-4293-48aa-ac84-6c38836cd316\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7"
Mar 12 21:22:45.354722 master-0 kubenswrapper[31456]: I0312 21:22:45.354304 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7zqz\" (UniqueName: \"kubernetes.io/projected/bf444169-4293-48aa-ac84-6c38836cd316-kube-api-access-j7zqz\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7\" (UID: \"bf444169-4293-48aa-ac84-6c38836cd316\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7"
Mar 12 21:22:45.433843 master-0 kubenswrapper[31456]: I0312 21:22:45.430257 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-gtjq6"]
Mar 12 21:22:45.433843 master-0 kubenswrapper[31456]: I0312 21:22:45.431878 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-gtjq6"
Mar 12 21:22:45.448433 master-0 kubenswrapper[31456]: I0312 21:22:45.447483 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-l9rhw"
Mar 12 21:22:45.457307 master-0 kubenswrapper[31456]: I0312 21:22:45.455818 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bg2f\" (UniqueName: \"kubernetes.io/projected/958532da-0a93-4ca7-8f90-ec711c5e2424-kube-api-access-8bg2f\") pod \"ovn-operator-controller-manager-bbc5b68f9-w6dxn\" (UID: \"958532da-0a93-4ca7-8f90-ec711c5e2424\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-w6dxn"
Mar 12 21:22:45.457307 master-0 kubenswrapper[31456]: I0312 21:22:45.455876 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf444169-4293-48aa-ac84-6c38836cd316-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7\" (UID: \"bf444169-4293-48aa-ac84-6c38836cd316\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7"
Mar 12 21:22:45.457307 master-0 kubenswrapper[31456]: I0312 21:22:45.455957 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g5h8\" (UniqueName: \"kubernetes.io/projected/fc382ab4-53d2-4db1-b50d-651c61e8e4fd-kube-api-access-6g5h8\") pod \"placement-operator-controller-manager-574d45c66c-gtjq6\" (UID: \"fc382ab4-53d2-4db1-b50d-651c61e8e4fd\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-gtjq6"
Mar 12 21:22:45.457307 master-0 kubenswrapper[31456]: I0312 21:22:45.455989 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zqz\" (UniqueName: \"kubernetes.io/projected/bf444169-4293-48aa-ac84-6c38836cd316-kube-api-access-j7zqz\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7\" (UID: \"bf444169-4293-48aa-ac84-6c38836cd316\") "
pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7" Mar 12 21:22:45.457307 master-0 kubenswrapper[31456]: E0312 21:22:45.456596 31456 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 21:22:45.457307 master-0 kubenswrapper[31456]: E0312 21:22:45.456645 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf444169-4293-48aa-ac84-6c38836cd316-cert podName:bf444169-4293-48aa-ac84-6c38836cd316 nodeName:}" failed. No retries permitted until 2026-03-12 21:22:45.956626246 +0000 UTC m=+827.031231574 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bf444169-4293-48aa-ac84-6c38836cd316-cert") pod "openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7" (UID: "bf444169-4293-48aa-ac84-6c38836cd316") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 21:22:45.476061 master-0 kubenswrapper[31456]: I0312 21:22:45.476007 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-gtjq6"] Mar 12 21:22:45.488091 master-0 kubenswrapper[31456]: I0312 21:22:45.488044 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-t72b8"] Mar 12 21:22:45.489663 master-0 kubenswrapper[31456]: I0312 21:22:45.489637 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-677c674df7-t72b8" Mar 12 21:22:45.500888 master-0 kubenswrapper[31456]: I0312 21:22:45.498629 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-t72b8"] Mar 12 21:22:45.500888 master-0 kubenswrapper[31456]: I0312 21:22:45.499647 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7zqz\" (UniqueName: \"kubernetes.io/projected/bf444169-4293-48aa-ac84-6c38836cd316-kube-api-access-j7zqz\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7\" (UID: \"bf444169-4293-48aa-ac84-6c38836cd316\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7" Mar 12 21:22:45.504228 master-0 kubenswrapper[31456]: I0312 21:22:45.504169 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bg2f\" (UniqueName: \"kubernetes.io/projected/958532da-0a93-4ca7-8f90-ec711c5e2424-kube-api-access-8bg2f\") pod \"ovn-operator-controller-manager-bbc5b68f9-w6dxn\" (UID: \"958532da-0a93-4ca7-8f90-ec711c5e2424\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-w6dxn" Mar 12 21:22:45.510673 master-0 kubenswrapper[31456]: I0312 21:22:45.510628 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-4r28c"] Mar 12 21:22:45.512492 master-0 kubenswrapper[31456]: I0312 21:22:45.512470 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-4r28c" Mar 12 21:22:45.541029 master-0 kubenswrapper[31456]: I0312 21:22:45.540665 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bxrjr"] Mar 12 21:22:45.542369 master-0 kubenswrapper[31456]: I0312 21:22:45.542346 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bxrjr" Mar 12 21:22:45.553550 master-0 kubenswrapper[31456]: I0312 21:22:45.553485 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-97lcz" Mar 12 21:22:45.559055 master-0 kubenswrapper[31456]: I0312 21:22:45.558544 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvbp5\" (UniqueName: \"kubernetes.io/projected/0bac1947-ff1d-40f0-8b0c-25132780f302-kube-api-access-pvbp5\") pod \"telemetry-operator-controller-manager-6cd66dbd4b-4r28c\" (UID: \"0bac1947-ff1d-40f0-8b0c-25132780f302\") " pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-4r28c" Mar 12 21:22:45.559055 master-0 kubenswrapper[31456]: I0312 21:22:45.558644 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nx9j\" (UniqueName: \"kubernetes.io/projected/965c850a-6d1c-4824-b254-6bde6b919001-kube-api-access-6nx9j\") pod \"swift-operator-controller-manager-677c674df7-t72b8\" (UID: \"965c850a-6d1c-4824-b254-6bde6b919001\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-t72b8" Mar 12 21:22:45.559055 master-0 kubenswrapper[31456]: I0312 21:22:45.558729 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g5h8\" (UniqueName: 
\"kubernetes.io/projected/fc382ab4-53d2-4db1-b50d-651c61e8e4fd-kube-api-access-6g5h8\") pod \"placement-operator-controller-manager-574d45c66c-gtjq6\" (UID: \"fc382ab4-53d2-4db1-b50d-651c61e8e4fd\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-gtjq6" Mar 12 21:22:45.592688 master-0 kubenswrapper[31456]: I0312 21:22:45.592020 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g5h8\" (UniqueName: \"kubernetes.io/projected/fc382ab4-53d2-4db1-b50d-651c61e8e4fd-kube-api-access-6g5h8\") pod \"placement-operator-controller-manager-574d45c66c-gtjq6\" (UID: \"fc382ab4-53d2-4db1-b50d-651c61e8e4fd\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-gtjq6" Mar 12 21:22:45.592688 master-0 kubenswrapper[31456]: I0312 21:22:45.592104 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-4r28c"] Mar 12 21:22:45.611094 master-0 kubenswrapper[31456]: I0312 21:22:45.611044 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-pjrnx" Mar 12 21:22:45.628629 master-0 kubenswrapper[31456]: I0312 21:22:45.623801 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bxrjr"] Mar 12 21:22:45.641143 master-0 kubenswrapper[31456]: I0312 21:22:45.641069 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-fnpjr"] Mar 12 21:22:45.642664 master-0 kubenswrapper[31456]: I0312 21:22:45.642631 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-fnpjr" Mar 12 21:22:45.650056 master-0 kubenswrapper[31456]: I0312 21:22:45.650022 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-fnpjr"] Mar 12 21:22:45.665437 master-0 kubenswrapper[31456]: I0312 21:22:45.660915 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf44w\" (UniqueName: \"kubernetes.io/projected/e0249756-1137-4101-ab47-90c11635a800-kube-api-access-gf44w\") pod \"test-operator-controller-manager-5c5cb9c4d7-bxrjr\" (UID: \"e0249756-1137-4101-ab47-90c11635a800\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bxrjr" Mar 12 21:22:45.665437 master-0 kubenswrapper[31456]: I0312 21:22:45.661025 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxcqm\" (UniqueName: \"kubernetes.io/projected/1ffc5e6f-cc66-4a0c-bb40-58a3782fcbcb-kube-api-access-qxcqm\") pod \"watcher-operator-controller-manager-6dd88c6f67-fnpjr\" (UID: \"1ffc5e6f-cc66-4a0c-bb40-58a3782fcbcb\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-fnpjr" Mar 12 21:22:45.665437 master-0 kubenswrapper[31456]: I0312 21:22:45.661085 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvbp5\" (UniqueName: \"kubernetes.io/projected/0bac1947-ff1d-40f0-8b0c-25132780f302-kube-api-access-pvbp5\") pod \"telemetry-operator-controller-manager-6cd66dbd4b-4r28c\" (UID: \"0bac1947-ff1d-40f0-8b0c-25132780f302\") " pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-4r28c" Mar 12 21:22:45.665437 master-0 kubenswrapper[31456]: I0312 21:22:45.662989 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nx9j\" (UniqueName: 
\"kubernetes.io/projected/965c850a-6d1c-4824-b254-6bde6b919001-kube-api-access-6nx9j\") pod \"swift-operator-controller-manager-677c674df7-t72b8\" (UID: \"965c850a-6d1c-4824-b254-6bde6b919001\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-t72b8" Mar 12 21:22:45.672526 master-0 kubenswrapper[31456]: I0312 21:22:45.671167 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-gtjq6" Mar 12 21:22:45.679972 master-0 kubenswrapper[31456]: I0312 21:22:45.679932 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvbp5\" (UniqueName: \"kubernetes.io/projected/0bac1947-ff1d-40f0-8b0c-25132780f302-kube-api-access-pvbp5\") pod \"telemetry-operator-controller-manager-6cd66dbd4b-4r28c\" (UID: \"0bac1947-ff1d-40f0-8b0c-25132780f302\") " pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-4r28c" Mar 12 21:22:45.684548 master-0 kubenswrapper[31456]: I0312 21:22:45.684512 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nx9j\" (UniqueName: \"kubernetes.io/projected/965c850a-6d1c-4824-b254-6bde6b919001-kube-api-access-6nx9j\") pod \"swift-operator-controller-manager-677c674df7-t72b8\" (UID: \"965c850a-6d1c-4824-b254-6bde6b919001\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-t72b8" Mar 12 21:22:45.700581 master-0 kubenswrapper[31456]: I0312 21:22:45.700507 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7795b46f77-zdpb2"] Mar 12 21:22:45.701936 master-0 kubenswrapper[31456]: I0312 21:22:45.701899 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-zdpb2" Mar 12 21:22:45.703434 master-0 kubenswrapper[31456]: I0312 21:22:45.703398 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-w6dxn" Mar 12 21:22:45.709353 master-0 kubenswrapper[31456]: I0312 21:22:45.708104 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Mar 12 21:22:45.709353 master-0 kubenswrapper[31456]: I0312 21:22:45.708313 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Mar 12 21:22:45.716909 master-0 kubenswrapper[31456]: I0312 21:22:45.716853 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-677c674df7-t72b8" Mar 12 21:22:45.725711 master-0 kubenswrapper[31456]: I0312 21:22:45.725330 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7795b46f77-zdpb2"] Mar 12 21:22:45.734397 master-0 kubenswrapper[31456]: I0312 21:22:45.734320 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-4r28c" Mar 12 21:22:45.785131 master-0 kubenswrapper[31456]: I0312 21:22:45.777032 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxcqm\" (UniqueName: \"kubernetes.io/projected/1ffc5e6f-cc66-4a0c-bb40-58a3782fcbcb-kube-api-access-qxcqm\") pod \"watcher-operator-controller-manager-6dd88c6f67-fnpjr\" (UID: \"1ffc5e6f-cc66-4a0c-bb40-58a3782fcbcb\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-fnpjr" Mar 12 21:22:45.785131 master-0 kubenswrapper[31456]: I0312 21:22:45.777106 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-zdpb2\" (UID: \"e3c680b7-4c4e-45a5-839c-a07be817bcab\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-zdpb2" Mar 12 21:22:45.785131 master-0 kubenswrapper[31456]: I0312 21:22:45.777199 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6rzm\" (UniqueName: \"kubernetes.io/projected/e3c680b7-4c4e-45a5-839c-a07be817bcab-kube-api-access-n6rzm\") pod \"openstack-operator-controller-manager-7795b46f77-zdpb2\" (UID: \"e3c680b7-4c4e-45a5-839c-a07be817bcab\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-zdpb2" Mar 12 21:22:45.785131 master-0 kubenswrapper[31456]: I0312 21:22:45.778602 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-zdpb2\" (UID: \"e3c680b7-4c4e-45a5-839c-a07be817bcab\") " 
pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-zdpb2" Mar 12 21:22:45.785131 master-0 kubenswrapper[31456]: I0312 21:22:45.778859 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf44w\" (UniqueName: \"kubernetes.io/projected/e0249756-1137-4101-ab47-90c11635a800-kube-api-access-gf44w\") pod \"test-operator-controller-manager-5c5cb9c4d7-bxrjr\" (UID: \"e0249756-1137-4101-ab47-90c11635a800\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bxrjr" Mar 12 21:22:45.795258 master-0 kubenswrapper[31456]: I0312 21:22:45.795215 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf44w\" (UniqueName: \"kubernetes.io/projected/e0249756-1137-4101-ab47-90c11635a800-kube-api-access-gf44w\") pod \"test-operator-controller-manager-5c5cb9c4d7-bxrjr\" (UID: \"e0249756-1137-4101-ab47-90c11635a800\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bxrjr" Mar 12 21:22:45.797275 master-0 kubenswrapper[31456]: I0312 21:22:45.797212 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-d2l7t" event={"ID":"072c42ef-c704-4430-ae96-ba686e7a9e48","Type":"ContainerStarted","Data":"d1c8f4ee8c80d606153d6254757093518750b7a7bca35950b8ad5e6db57b1456"} Mar 12 21:22:45.799058 master-0 kubenswrapper[31456]: I0312 21:22:45.798992 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-4q2jn" event={"ID":"db5b5da6-7eaa-4e23-ad31-7e977fd52810","Type":"ContainerStarted","Data":"b1c383cf3b642a0310ba67c36366ab322b9b96abc3a168cdb36b42de47b3bcce"} Mar 12 21:22:45.805927 master-0 kubenswrapper[31456]: I0312 21:22:45.805872 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-d2l7t"] Mar 12 21:22:45.821306 master-0 kubenswrapper[31456]: 
I0312 21:22:45.816921 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2nr4c"] Mar 12 21:22:45.821306 master-0 kubenswrapper[31456]: I0312 21:22:45.818096 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2nr4c" Mar 12 21:22:45.826401 master-0 kubenswrapper[31456]: I0312 21:22:45.826115 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2nr4c"] Mar 12 21:22:45.838710 master-0 kubenswrapper[31456]: I0312 21:22:45.838572 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxcqm\" (UniqueName: \"kubernetes.io/projected/1ffc5e6f-cc66-4a0c-bb40-58a3782fcbcb-kube-api-access-qxcqm\") pod \"watcher-operator-controller-manager-6dd88c6f67-fnpjr\" (UID: \"1ffc5e6f-cc66-4a0c-bb40-58a3782fcbcb\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-fnpjr" Mar 12 21:22:45.881200 master-0 kubenswrapper[31456]: I0312 21:22:45.881069 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-zdpb2\" (UID: \"e3c680b7-4c4e-45a5-839c-a07be817bcab\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-zdpb2" Mar 12 21:22:45.881549 master-0 kubenswrapper[31456]: E0312 21:22:45.881214 31456 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 21:22:45.881549 master-0 kubenswrapper[31456]: E0312 21:22:45.881292 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-webhook-certs podName:e3c680b7-4c4e-45a5-839c-a07be817bcab nodeName:}" failed. 
No retries permitted until 2026-03-12 21:22:46.381274013 +0000 UTC m=+827.455879341 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-webhook-certs") pod "openstack-operator-controller-manager-7795b46f77-zdpb2" (UID: "e3c680b7-4c4e-45a5-839c-a07be817bcab") : secret "webhook-server-cert" not found Mar 12 21:22:45.881549 master-0 kubenswrapper[31456]: E0312 21:22:45.881322 31456 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 21:22:45.881549 master-0 kubenswrapper[31456]: E0312 21:22:45.881378 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-metrics-certs podName:e3c680b7-4c4e-45a5-839c-a07be817bcab nodeName:}" failed. No retries permitted until 2026-03-12 21:22:46.381360415 +0000 UTC m=+827.455965743 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-metrics-certs") pod "openstack-operator-controller-manager-7795b46f77-zdpb2" (UID: "e3c680b7-4c4e-45a5-839c-a07be817bcab") : secret "metrics-server-cert" not found Mar 12 21:22:45.881549 master-0 kubenswrapper[31456]: I0312 21:22:45.881220 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-zdpb2\" (UID: \"e3c680b7-4c4e-45a5-839c-a07be817bcab\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-zdpb2" Mar 12 21:22:45.882759 master-0 kubenswrapper[31456]: I0312 21:22:45.881673 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5xq9\" (UniqueName: 
\"kubernetes.io/projected/e2d93f6b-adfe-4928-8921-2e1e2cf01682-kube-api-access-z5xq9\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2nr4c\" (UID: \"e2d93f6b-adfe-4928-8921-2e1e2cf01682\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2nr4c" Mar 12 21:22:45.882759 master-0 kubenswrapper[31456]: I0312 21:22:45.881767 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6rzm\" (UniqueName: \"kubernetes.io/projected/e3c680b7-4c4e-45a5-839c-a07be817bcab-kube-api-access-n6rzm\") pod \"openstack-operator-controller-manager-7795b46f77-zdpb2\" (UID: \"e3c680b7-4c4e-45a5-839c-a07be817bcab\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-zdpb2" Mar 12 21:22:45.886217 master-0 kubenswrapper[31456]: I0312 21:22:45.886141 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-4q2jn"] Mar 12 21:22:45.926014 master-0 kubenswrapper[31456]: I0312 21:22:45.913677 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6rzm\" (UniqueName: \"kubernetes.io/projected/e3c680b7-4c4e-45a5-839c-a07be817bcab-kube-api-access-n6rzm\") pod \"openstack-operator-controller-manager-7795b46f77-zdpb2\" (UID: \"e3c680b7-4c4e-45a5-839c-a07be817bcab\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-zdpb2" Mar 12 21:22:45.984987 master-0 kubenswrapper[31456]: I0312 21:22:45.984882 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5xq9\" (UniqueName: \"kubernetes.io/projected/e2d93f6b-adfe-4928-8921-2e1e2cf01682-kube-api-access-z5xq9\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2nr4c\" (UID: \"e2d93f6b-adfe-4928-8921-2e1e2cf01682\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2nr4c" Mar 12 21:22:45.985199 master-0 kubenswrapper[31456]: I0312 21:22:45.985000 31456 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf444169-4293-48aa-ac84-6c38836cd316-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7\" (UID: \"bf444169-4293-48aa-ac84-6c38836cd316\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7" Mar 12 21:22:45.985417 master-0 kubenswrapper[31456]: E0312 21:22:45.985364 31456 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 21:22:45.985515 master-0 kubenswrapper[31456]: E0312 21:22:45.985484 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf444169-4293-48aa-ac84-6c38836cd316-cert podName:bf444169-4293-48aa-ac84-6c38836cd316 nodeName:}" failed. No retries permitted until 2026-03-12 21:22:46.98545663 +0000 UTC m=+828.060061958 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bf444169-4293-48aa-ac84-6c38836cd316-cert") pod "openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7" (UID: "bf444169-4293-48aa-ac84-6c38836cd316") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 21:22:46.008267 master-0 kubenswrapper[31456]: I0312 21:22:46.008198 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5xq9\" (UniqueName: \"kubernetes.io/projected/e2d93f6b-adfe-4928-8921-2e1e2cf01682-kube-api-access-z5xq9\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2nr4c\" (UID: \"e2d93f6b-adfe-4928-8921-2e1e2cf01682\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2nr4c" Mar 12 21:22:46.036157 master-0 kubenswrapper[31456]: I0312 21:22:46.036070 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-29sh2"] Mar 12 21:22:46.051370 master-0 kubenswrapper[31456]: I0312 21:22:46.051323 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bxrjr" Mar 12 21:22:46.059504 master-0 kubenswrapper[31456]: I0312 21:22:46.058618 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-fnpjr" Mar 12 21:22:46.158929 master-0 kubenswrapper[31456]: I0312 21:22:46.158522 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2nr4c" Mar 12 21:22:46.202403 master-0 kubenswrapper[31456]: I0312 21:22:46.202367 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-dqp55"] Mar 12 21:22:46.219162 master-0 kubenswrapper[31456]: I0312 21:22:46.214662 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-l57sv"] Mar 12 21:22:46.297177 master-0 kubenswrapper[31456]: I0312 21:22:46.296864 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/392ee3fe-88fb-47f2-834a-115559661320-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-d9mhv\" (UID: \"392ee3fe-88fb-47f2-834a-115559661320\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-d9mhv" Mar 12 21:22:46.297473 master-0 kubenswrapper[31456]: E0312 21:22:46.297270 31456 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 12 21:22:46.297473 master-0 kubenswrapper[31456]: E0312 21:22:46.297425 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/392ee3fe-88fb-47f2-834a-115559661320-cert podName:392ee3fe-88fb-47f2-834a-115559661320 nodeName:}" failed. No retries permitted until 2026-03-12 21:22:48.297389155 +0000 UTC m=+829.371994683 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/392ee3fe-88fb-47f2-834a-115559661320-cert") pod "infra-operator-controller-manager-b8c8d7cc8-d9mhv" (UID: "392ee3fe-88fb-47f2-834a-115559661320") : secret "infra-operator-webhook-server-cert" not found Mar 12 21:22:46.398961 master-0 kubenswrapper[31456]: I0312 21:22:46.398888 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-zdpb2\" (UID: \"e3c680b7-4c4e-45a5-839c-a07be817bcab\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-zdpb2" Mar 12 21:22:46.399657 master-0 kubenswrapper[31456]: I0312 21:22:46.399014 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-zdpb2\" (UID: \"e3c680b7-4c4e-45a5-839c-a07be817bcab\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-zdpb2" Mar 12 21:22:46.399657 master-0 kubenswrapper[31456]: E0312 21:22:46.399166 31456 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 21:22:46.399657 master-0 kubenswrapper[31456]: E0312 21:22:46.399218 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-webhook-certs podName:e3c680b7-4c4e-45a5-839c-a07be817bcab nodeName:}" failed. No retries permitted until 2026-03-12 21:22:47.399201883 +0000 UTC m=+828.473807211 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-webhook-certs") pod "openstack-operator-controller-manager-7795b46f77-zdpb2" (UID: "e3c680b7-4c4e-45a5-839c-a07be817bcab") : secret "webhook-server-cert" not found Mar 12 21:22:46.399657 master-0 kubenswrapper[31456]: E0312 21:22:46.399349 31456 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 21:22:46.399657 master-0 kubenswrapper[31456]: E0312 21:22:46.399463 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-metrics-certs podName:e3c680b7-4c4e-45a5-839c-a07be817bcab nodeName:}" failed. No retries permitted until 2026-03-12 21:22:47.39943025 +0000 UTC m=+828.474035638 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-metrics-certs") pod "openstack-operator-controller-manager-7795b46f77-zdpb2" (UID: "e3c680b7-4c4e-45a5-839c-a07be817bcab") : secret "metrics-server-cert" not found Mar 12 21:22:46.475523 master-0 kubenswrapper[31456]: W0312 21:22:46.474126 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe6f68c8_9c6c_4fa3_b1b5_2205851ae9d4.slice/crio-04bbc6649444ac374436d74b1c846f4cfcfa351f0a43a2ba9c00e052c23dcc67 WatchSource:0}: Error finding container 04bbc6649444ac374436d74b1c846f4cfcfa351f0a43a2ba9c00e052c23dcc67: Status 404 returned error can't find the container with id 04bbc6649444ac374436d74b1c846f4cfcfa351f0a43a2ba9c00e052c23dcc67 Mar 12 21:22:46.498864 master-0 kubenswrapper[31456]: I0312 21:22:46.498687 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-sjbws"] Mar 12 21:22:46.503355 master-0 kubenswrapper[31456]: W0312 
21:22:46.503305 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d01e27b_a1a1_4afb_a75b_4f7063e5c0d3.slice/crio-7236be367f5ba40230221999d8d202b66b33bb5ed673733727e229fa2e1b0b80 WatchSource:0}: Error finding container 7236be367f5ba40230221999d8d202b66b33bb5ed673733727e229fa2e1b0b80: Status 404 returned error can't find the container with id 7236be367f5ba40230221999d8d202b66b33bb5ed673733727e229fa2e1b0b80 Mar 12 21:22:46.514071 master-0 kubenswrapper[31456]: W0312 21:22:46.514025 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0d101d2_bf56_4410_8499_987107f3bc9f.slice/crio-37cd629a0d7f9b1040b0c79edafa2fb6d5259f50ca26c32e03a47c4c18895061 WatchSource:0}: Error finding container 37cd629a0d7f9b1040b0c79edafa2fb6d5259f50ca26c32e03a47c4c18895061: Status 404 returned error can't find the container with id 37cd629a0d7f9b1040b0c79edafa2fb6d5259f50ca26c32e03a47c4c18895061 Mar 12 21:22:46.531077 master-0 kubenswrapper[31456]: I0312 21:22:46.531016 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-9qk2q"] Mar 12 21:22:46.553954 master-0 kubenswrapper[31456]: I0312 21:22:46.553881 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-xc2vn"] Mar 12 21:22:46.570824 master-0 kubenswrapper[31456]: I0312 21:22:46.570744 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-mp579"] Mar 12 21:22:46.692011 master-0 kubenswrapper[31456]: W0312 21:22:46.691943 31456 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00bedb2f_42ce_446e_84ce_0511132bc5bd.slice/crio-4c2e0fa409dae36b958383eda758095f1a2170f1971deabb7b0d23652f5d6b5a WatchSource:0}: Error finding container 4c2e0fa409dae36b958383eda758095f1a2170f1971deabb7b0d23652f5d6b5a: Status 404 returned error can't find the container with id 4c2e0fa409dae36b958383eda758095f1a2170f1971deabb7b0d23652f5d6b5a Mar 12 21:22:46.721899 master-0 kubenswrapper[31456]: I0312 21:22:46.721778 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-569cc54c5-97lcz"] Mar 12 21:22:46.747130 master-0 kubenswrapper[31456]: I0312 21:22:46.747029 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-qn4x5"] Mar 12 21:22:46.761684 master-0 kubenswrapper[31456]: I0312 21:22:46.761627 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-l9rhw"] Mar 12 21:22:46.847315 master-0 kubenswrapper[31456]: I0312 21:22:46.840391 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-dqp55" event={"ID":"f083454e-bbf9-4d06-b277-0303cfe15c31","Type":"ContainerStarted","Data":"c7b040a257b2da40b9accfebee430e57d249291134318e4f8a48249f226f6469"} Mar 12 21:22:46.847315 master-0 kubenswrapper[31456]: I0312 21:22:46.843960 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-l57sv" event={"ID":"b981b7c7-773d-4c60-a591-3e6fbb6fdacd","Type":"ContainerStarted","Data":"a07738c9cfbb0a360ef1a57897e206398bec542cf5564121944c4aa6b674fd17"} Mar 12 21:22:46.847315 master-0 kubenswrapper[31456]: I0312 21:22:46.846110 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-mp579" 
event={"ID":"c0d101d2-bf56-4410-8499-987107f3bc9f","Type":"ContainerStarted","Data":"37cd629a0d7f9b1040b0c79edafa2fb6d5259f50ca26c32e03a47c4c18895061"} Mar 12 21:22:46.852006 master-0 kubenswrapper[31456]: I0312 21:22:46.851936 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-97lcz" event={"ID":"00bedb2f-42ce-446e-84ce-0511132bc5bd","Type":"ContainerStarted","Data":"4c2e0fa409dae36b958383eda758095f1a2170f1971deabb7b0d23652f5d6b5a"} Mar 12 21:22:46.861368 master-0 kubenswrapper[31456]: I0312 21:22:46.856592 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-qn4x5" event={"ID":"171f4970-bb03-4ac6-86b1-47cf6639cccd","Type":"ContainerStarted","Data":"884a3253f2dfc15b78520b084407e70d1f58a6a5139c01320e2fe181452a57f1"} Mar 12 21:22:46.861368 master-0 kubenswrapper[31456]: I0312 21:22:46.857673 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-9qk2q" event={"ID":"6d01e27b-a1a1-4afb-a75b-4f7063e5c0d3","Type":"ContainerStarted","Data":"7236be367f5ba40230221999d8d202b66b33bb5ed673733727e229fa2e1b0b80"} Mar 12 21:22:46.861368 master-0 kubenswrapper[31456]: I0312 21:22:46.859047 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-29sh2" event={"ID":"82f17f4c-c741-4cc8-8b68-c26ca155288d","Type":"ContainerStarted","Data":"8190a3f6f42521a0e48ecb733f603cd23ead628970cfe1465646a7a47b3a3fc6"} Mar 12 21:22:46.861368 master-0 kubenswrapper[31456]: I0312 21:22:46.861267 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-sjbws" event={"ID":"be6f68c8-9c6c-4fa3-b1b5-2205851ae9d4","Type":"ContainerStarted","Data":"04bbc6649444ac374436d74b1c846f4cfcfa351f0a43a2ba9c00e052c23dcc67"} Mar 12 21:22:46.864315 master-0 
kubenswrapper[31456]: I0312 21:22:46.863356 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-xc2vn" event={"ID":"fee2ceac-7ca8-416a-a8aa-e80cc6b37755","Type":"ContainerStarted","Data":"07631cee648e26a4846b3327aa05e32cdfe2fb9d57a6ccd2ae79a044c42aecd9"} Mar 12 21:22:46.871631 master-0 kubenswrapper[31456]: I0312 21:22:46.870625 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-l9rhw" event={"ID":"10dbd20e-560a-4ded-8caa-c72c8c11d865","Type":"ContainerStarted","Data":"ea15c5e619860948881f6bdff40eb03a505c33f811a881bdaafafc5e05c156e9"} Mar 12 21:22:47.031211 master-0 kubenswrapper[31456]: I0312 21:22:47.030968 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf444169-4293-48aa-ac84-6c38836cd316-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7\" (UID: \"bf444169-4293-48aa-ac84-6c38836cd316\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7" Mar 12 21:22:47.050096 master-0 kubenswrapper[31456]: E0312 21:22:47.031223 31456 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 21:22:47.050096 master-0 kubenswrapper[31456]: E0312 21:22:47.031342 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf444169-4293-48aa-ac84-6c38836cd316-cert podName:bf444169-4293-48aa-ac84-6c38836cd316 nodeName:}" failed. No retries permitted until 2026-03-12 21:22:49.031318273 +0000 UTC m=+830.105923601 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bf444169-4293-48aa-ac84-6c38836cd316-cert") pod "openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7" (UID: "bf444169-4293-48aa-ac84-6c38836cd316") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 21:22:47.427995 master-0 kubenswrapper[31456]: I0312 21:22:47.415071 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bxrjr"] Mar 12 21:22:47.458183 master-0 kubenswrapper[31456]: I0312 21:22:47.455913 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-zdpb2\" (UID: \"e3c680b7-4c4e-45a5-839c-a07be817bcab\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-zdpb2" Mar 12 21:22:47.458183 master-0 kubenswrapper[31456]: I0312 21:22:47.456060 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-zdpb2\" (UID: \"e3c680b7-4c4e-45a5-839c-a07be817bcab\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-zdpb2" Mar 12 21:22:47.458183 master-0 kubenswrapper[31456]: E0312 21:22:47.456204 31456 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 21:22:47.458183 master-0 kubenswrapper[31456]: E0312 21:22:47.456277 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-metrics-certs podName:e3c680b7-4c4e-45a5-839c-a07be817bcab nodeName:}" failed. 
No retries permitted until 2026-03-12 21:22:49.456237687 +0000 UTC m=+830.530843015 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-metrics-certs") pod "openstack-operator-controller-manager-7795b46f77-zdpb2" (UID: "e3c680b7-4c4e-45a5-839c-a07be817bcab") : secret "metrics-server-cert" not found Mar 12 21:22:47.458183 master-0 kubenswrapper[31456]: E0312 21:22:47.456640 31456 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 21:22:47.458183 master-0 kubenswrapper[31456]: E0312 21:22:47.456667 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-webhook-certs podName:e3c680b7-4c4e-45a5-839c-a07be817bcab nodeName:}" failed. No retries permitted until 2026-03-12 21:22:49.456660248 +0000 UTC m=+830.531265576 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-webhook-certs") pod "openstack-operator-controller-manager-7795b46f77-zdpb2" (UID: "e3c680b7-4c4e-45a5-839c-a07be817bcab") : secret "webhook-server-cert" not found Mar 12 21:22:47.458183 master-0 kubenswrapper[31456]: I0312 21:22:47.457555 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2nr4c"] Mar 12 21:22:47.505126 master-0 kubenswrapper[31456]: W0312 21:22:47.503608 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc382ab4_53d2_4db1_b50d_651c61e8e4fd.slice/crio-010dd753fbb61360fd48cac965e1c9c5441fa4788decfc4ee3612fe26f17726f WatchSource:0}: Error finding container 010dd753fbb61360fd48cac965e1c9c5441fa4788decfc4ee3612fe26f17726f: Status 404 returned error can't find the container with id 
010dd753fbb61360fd48cac965e1c9c5441fa4788decfc4ee3612fe26f17726f Mar 12 21:22:47.518562 master-0 kubenswrapper[31456]: W0312 21:22:47.517876 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod965c850a_6d1c_4824_b254_6bde6b919001.slice/crio-82525d86fc96ee18ba54887e33b25ccd1c9919a94f0c32e6d3c442835dbe1774 WatchSource:0}: Error finding container 82525d86fc96ee18ba54887e33b25ccd1c9919a94f0c32e6d3c442835dbe1774: Status 404 returned error can't find the container with id 82525d86fc96ee18ba54887e33b25ccd1c9919a94f0c32e6d3c442835dbe1774 Mar 12 21:22:47.522520 master-0 kubenswrapper[31456]: W0312 21:22:47.522458 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod958532da_0a93_4ca7_8f90_ec711c5e2424.slice/crio-f7f343a2b97a5488bf934530a06a7b84ff2e70c71d4ea55159a77a95eb9aede7 WatchSource:0}: Error finding container f7f343a2b97a5488bf934530a06a7b84ff2e70c71d4ea55159a77a95eb9aede7: Status 404 returned error can't find the container with id f7f343a2b97a5488bf934530a06a7b84ff2e70c71d4ea55159a77a95eb9aede7 Mar 12 21:22:47.524213 master-0 kubenswrapper[31456]: I0312 21:22:47.524165 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-gtjq6"] Mar 12 21:22:47.592330 master-0 kubenswrapper[31456]: E0312 21:22:47.587563 31456 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:18fe6f2f0be7e736db86ff2d600af12a753e14b0a03232ce4f03629a89905571,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f5wtl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5f4f55cb5c-pjrnx_openstack-operators(772ae1ba-3abe-49d4-ade9-b0aac087acf2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 12 21:22:47.592330 master-0 kubenswrapper[31456]: E0312 21:22:47.588786 31456 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-pjrnx" podUID="772ae1ba-3abe-49d4-ade9-b0aac087acf2" Mar 12 21:22:47.608961 master-0 kubenswrapper[31456]: I0312 21:22:47.608882 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-t72b8"] Mar 12 21:22:47.662111 master-0 kubenswrapper[31456]: I0312 21:22:47.661685 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-w6dxn"] Mar 12 21:22:47.695610 master-0 kubenswrapper[31456]: I0312 21:22:47.695528 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-fnpjr"] Mar 12 21:22:47.739461 master-0 
kubenswrapper[31456]: I0312 21:22:47.739393 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-pjrnx"] Mar 12 21:22:47.748445 master-0 kubenswrapper[31456]: I0312 21:22:47.748322 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-4r28c"] Mar 12 21:22:47.887106 master-0 kubenswrapper[31456]: I0312 21:22:47.887025 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-677c674df7-t72b8" event={"ID":"965c850a-6d1c-4824-b254-6bde6b919001","Type":"ContainerStarted","Data":"82525d86fc96ee18ba54887e33b25ccd1c9919a94f0c32e6d3c442835dbe1774"} Mar 12 21:22:47.892925 master-0 kubenswrapper[31456]: I0312 21:22:47.892864 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-pjrnx" event={"ID":"772ae1ba-3abe-49d4-ade9-b0aac087acf2","Type":"ContainerStarted","Data":"a62a83ae05109775f2d880b983d5ead3a6dc0cd392e7522b4e3007cea392dd34"} Mar 12 21:22:47.894422 master-0 kubenswrapper[31456]: E0312 21:22:47.894130 31456 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:18fe6f2f0be7e736db86ff2d600af12a753e14b0a03232ce4f03629a89905571\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-pjrnx" podUID="772ae1ba-3abe-49d4-ade9-b0aac087acf2" Mar 12 21:22:47.903666 master-0 kubenswrapper[31456]: I0312 21:22:47.903545 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-fnpjr" event={"ID":"1ffc5e6f-cc66-4a0c-bb40-58a3782fcbcb","Type":"ContainerStarted","Data":"8bb2d55f7322548774518494b02224a80c9b7d706fe5250e43a989daa33974c6"} Mar 12 21:22:47.908502 
master-0 kubenswrapper[31456]: I0312 21:22:47.908453 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-4r28c" event={"ID":"0bac1947-ff1d-40f0-8b0c-25132780f302","Type":"ContainerStarted","Data":"295ec9aae95260e58a0d2062676b8cfb680c4c115ee32b6c94e9784378978a56"} Mar 12 21:22:47.927249 master-0 kubenswrapper[31456]: I0312 21:22:47.925509 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bxrjr" event={"ID":"e0249756-1137-4101-ab47-90c11635a800","Type":"ContainerStarted","Data":"1a763a24e51fd2999e87437cb00230e5ebc34537a279b7e9822bb288f16be4f5"} Mar 12 21:22:47.931113 master-0 kubenswrapper[31456]: I0312 21:22:47.931055 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-gtjq6" event={"ID":"fc382ab4-53d2-4db1-b50d-651c61e8e4fd","Type":"ContainerStarted","Data":"010dd753fbb61360fd48cac965e1c9c5441fa4788decfc4ee3612fe26f17726f"} Mar 12 21:22:47.937409 master-0 kubenswrapper[31456]: I0312 21:22:47.936560 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-w6dxn" event={"ID":"958532da-0a93-4ca7-8f90-ec711c5e2424","Type":"ContainerStarted","Data":"f7f343a2b97a5488bf934530a06a7b84ff2e70c71d4ea55159a77a95eb9aede7"} Mar 12 21:22:47.939876 master-0 kubenswrapper[31456]: I0312 21:22:47.939816 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2nr4c" event={"ID":"e2d93f6b-adfe-4928-8921-2e1e2cf01682","Type":"ContainerStarted","Data":"fc32290de6df91fc5a0f062fea3217971288448702da3fbf3db87720e3d9439f"} Mar 12 21:22:48.391387 master-0 kubenswrapper[31456]: I0312 21:22:48.390780 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/392ee3fe-88fb-47f2-834a-115559661320-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-d9mhv\" (UID: \"392ee3fe-88fb-47f2-834a-115559661320\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-d9mhv" Mar 12 21:22:48.391387 master-0 kubenswrapper[31456]: E0312 21:22:48.390925 31456 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 12 21:22:48.391387 master-0 kubenswrapper[31456]: E0312 21:22:48.391001 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/392ee3fe-88fb-47f2-834a-115559661320-cert podName:392ee3fe-88fb-47f2-834a-115559661320 nodeName:}" failed. No retries permitted until 2026-03-12 21:22:52.390970525 +0000 UTC m=+833.465575853 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/392ee3fe-88fb-47f2-834a-115559661320-cert") pod "infra-operator-controller-manager-b8c8d7cc8-d9mhv" (UID: "392ee3fe-88fb-47f2-834a-115559661320") : secret "infra-operator-webhook-server-cert" not found Mar 12 21:22:48.960930 master-0 kubenswrapper[31456]: E0312 21:22:48.960763 31456 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:18fe6f2f0be7e736db86ff2d600af12a753e14b0a03232ce4f03629a89905571\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-pjrnx" podUID="772ae1ba-3abe-49d4-ade9-b0aac087acf2" Mar 12 21:22:49.109828 master-0 kubenswrapper[31456]: I0312 21:22:49.108871 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf444169-4293-48aa-ac84-6c38836cd316-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7\" (UID: 
\"bf444169-4293-48aa-ac84-6c38836cd316\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7" Mar 12 21:22:49.109828 master-0 kubenswrapper[31456]: E0312 21:22:49.109333 31456 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 21:22:49.109828 master-0 kubenswrapper[31456]: E0312 21:22:49.109496 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf444169-4293-48aa-ac84-6c38836cd316-cert podName:bf444169-4293-48aa-ac84-6c38836cd316 nodeName:}" failed. No retries permitted until 2026-03-12 21:22:53.109369237 +0000 UTC m=+834.183974565 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bf444169-4293-48aa-ac84-6c38836cd316-cert") pod "openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7" (UID: "bf444169-4293-48aa-ac84-6c38836cd316") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 21:22:49.529830 master-0 kubenswrapper[31456]: I0312 21:22:49.522666 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-zdpb2\" (UID: \"e3c680b7-4c4e-45a5-839c-a07be817bcab\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-zdpb2" Mar 12 21:22:49.529830 master-0 kubenswrapper[31456]: E0312 21:22:49.522978 31456 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 21:22:49.529830 master-0 kubenswrapper[31456]: E0312 21:22:49.523097 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-webhook-certs podName:e3c680b7-4c4e-45a5-839c-a07be817bcab 
nodeName:}" failed. No retries permitted until 2026-03-12 21:22:53.523073749 +0000 UTC m=+834.597679077 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-webhook-certs") pod "openstack-operator-controller-manager-7795b46f77-zdpb2" (UID: "e3c680b7-4c4e-45a5-839c-a07be817bcab") : secret "webhook-server-cert" not found Mar 12 21:22:49.529830 master-0 kubenswrapper[31456]: I0312 21:22:49.523601 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-zdpb2\" (UID: \"e3c680b7-4c4e-45a5-839c-a07be817bcab\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-zdpb2" Mar 12 21:22:49.529830 master-0 kubenswrapper[31456]: E0312 21:22:49.524435 31456 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 21:22:49.529830 master-0 kubenswrapper[31456]: E0312 21:22:49.524744 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-metrics-certs podName:e3c680b7-4c4e-45a5-839c-a07be817bcab nodeName:}" failed. No retries permitted until 2026-03-12 21:22:53.52472485 +0000 UTC m=+834.599330178 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-metrics-certs") pod "openstack-operator-controller-manager-7795b46f77-zdpb2" (UID: "e3c680b7-4c4e-45a5-839c-a07be817bcab") : secret "metrics-server-cert" not found Mar 12 21:22:52.394154 master-0 kubenswrapper[31456]: I0312 21:22:52.393800 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/392ee3fe-88fb-47f2-834a-115559661320-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-d9mhv\" (UID: \"392ee3fe-88fb-47f2-834a-115559661320\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-d9mhv" Mar 12 21:22:52.395558 master-0 kubenswrapper[31456]: E0312 21:22:52.394170 31456 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 12 21:22:52.395558 master-0 kubenswrapper[31456]: E0312 21:22:52.394264 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/392ee3fe-88fb-47f2-834a-115559661320-cert podName:392ee3fe-88fb-47f2-834a-115559661320 nodeName:}" failed. No retries permitted until 2026-03-12 21:23:00.394244107 +0000 UTC m=+841.468849435 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/392ee3fe-88fb-47f2-834a-115559661320-cert") pod "infra-operator-controller-manager-b8c8d7cc8-d9mhv" (UID: "392ee3fe-88fb-47f2-834a-115559661320") : secret "infra-operator-webhook-server-cert" not found Mar 12 21:22:53.115183 master-0 kubenswrapper[31456]: I0312 21:22:53.112904 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf444169-4293-48aa-ac84-6c38836cd316-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7\" (UID: \"bf444169-4293-48aa-ac84-6c38836cd316\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7" Mar 12 21:22:53.115183 master-0 kubenswrapper[31456]: E0312 21:22:53.113097 31456 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 21:22:53.115183 master-0 kubenswrapper[31456]: E0312 21:22:53.113150 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf444169-4293-48aa-ac84-6c38836cd316-cert podName:bf444169-4293-48aa-ac84-6c38836cd316 nodeName:}" failed. No retries permitted until 2026-03-12 21:23:01.113133071 +0000 UTC m=+842.187738399 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bf444169-4293-48aa-ac84-6c38836cd316-cert") pod "openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7" (UID: "bf444169-4293-48aa-ac84-6c38836cd316") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 21:22:53.624643 master-0 kubenswrapper[31456]: I0312 21:22:53.624499 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-zdpb2\" (UID: \"e3c680b7-4c4e-45a5-839c-a07be817bcab\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-zdpb2" Mar 12 21:22:53.625317 master-0 kubenswrapper[31456]: E0312 21:22:53.624778 31456 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 21:22:53.625317 master-0 kubenswrapper[31456]: E0312 21:22:53.624889 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-metrics-certs podName:e3c680b7-4c4e-45a5-839c-a07be817bcab nodeName:}" failed. No retries permitted until 2026-03-12 21:23:01.62486688 +0000 UTC m=+842.699472208 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-metrics-certs") pod "openstack-operator-controller-manager-7795b46f77-zdpb2" (UID: "e3c680b7-4c4e-45a5-839c-a07be817bcab") : secret "metrics-server-cert" not found Mar 12 21:22:53.625541 master-0 kubenswrapper[31456]: I0312 21:22:53.625449 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-zdpb2\" (UID: \"e3c680b7-4c4e-45a5-839c-a07be817bcab\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-zdpb2" Mar 12 21:22:53.625892 master-0 kubenswrapper[31456]: E0312 21:22:53.625750 31456 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 21:22:53.625892 master-0 kubenswrapper[31456]: E0312 21:22:53.625841 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-webhook-certs podName:e3c680b7-4c4e-45a5-839c-a07be817bcab nodeName:}" failed. No retries permitted until 2026-03-12 21:23:01.625821364 +0000 UTC m=+842.700426692 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-webhook-certs") pod "openstack-operator-controller-manager-7795b46f77-zdpb2" (UID: "e3c680b7-4c4e-45a5-839c-a07be817bcab") : secret "webhook-server-cert" not found Mar 12 21:23:00.487385 master-0 kubenswrapper[31456]: I0312 21:23:00.487189 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/392ee3fe-88fb-47f2-834a-115559661320-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-d9mhv\" (UID: \"392ee3fe-88fb-47f2-834a-115559661320\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-d9mhv" Mar 12 21:23:00.493705 master-0 kubenswrapper[31456]: I0312 21:23:00.493653 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/392ee3fe-88fb-47f2-834a-115559661320-cert\") pod \"infra-operator-controller-manager-b8c8d7cc8-d9mhv\" (UID: \"392ee3fe-88fb-47f2-834a-115559661320\") " pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-d9mhv" Mar 12 21:23:00.667829 master-0 kubenswrapper[31456]: I0312 21:23:00.667680 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-d9mhv" Mar 12 21:23:01.201671 master-0 kubenswrapper[31456]: I0312 21:23:01.201609 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf444169-4293-48aa-ac84-6c38836cd316-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7\" (UID: \"bf444169-4293-48aa-ac84-6c38836cd316\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7" Mar 12 21:23:01.202521 master-0 kubenswrapper[31456]: E0312 21:23:01.201916 31456 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 21:23:01.202521 master-0 kubenswrapper[31456]: E0312 21:23:01.201977 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf444169-4293-48aa-ac84-6c38836cd316-cert podName:bf444169-4293-48aa-ac84-6c38836cd316 nodeName:}" failed. No retries permitted until 2026-03-12 21:23:17.201957599 +0000 UTC m=+858.276562927 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bf444169-4293-48aa-ac84-6c38836cd316-cert") pod "openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7" (UID: "bf444169-4293-48aa-ac84-6c38836cd316") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 21:23:01.709938 master-0 kubenswrapper[31456]: I0312 21:23:01.709833 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-zdpb2\" (UID: \"e3c680b7-4c4e-45a5-839c-a07be817bcab\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-zdpb2" Mar 12 21:23:01.710654 master-0 kubenswrapper[31456]: I0312 21:23:01.710021 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-zdpb2\" (UID: \"e3c680b7-4c4e-45a5-839c-a07be817bcab\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-zdpb2" Mar 12 21:23:01.710654 master-0 kubenswrapper[31456]: E0312 21:23:01.710051 31456 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 21:23:01.710654 master-0 kubenswrapper[31456]: E0312 21:23:01.710144 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-webhook-certs podName:e3c680b7-4c4e-45a5-839c-a07be817bcab nodeName:}" failed. No retries permitted until 2026-03-12 21:23:17.710125803 +0000 UTC m=+858.784731131 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-webhook-certs") pod "openstack-operator-controller-manager-7795b46f77-zdpb2" (UID: "e3c680b7-4c4e-45a5-839c-a07be817bcab") : secret "webhook-server-cert" not found Mar 12 21:23:01.710654 master-0 kubenswrapper[31456]: E0312 21:23:01.710276 31456 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 21:23:01.710654 master-0 kubenswrapper[31456]: E0312 21:23:01.710341 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-metrics-certs podName:e3c680b7-4c4e-45a5-839c-a07be817bcab nodeName:}" failed. No retries permitted until 2026-03-12 21:23:17.710319357 +0000 UTC m=+858.784924745 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-metrics-certs") pod "openstack-operator-controller-manager-7795b46f77-zdpb2" (UID: "e3c680b7-4c4e-45a5-839c-a07be817bcab") : secret "metrics-server-cert" not found Mar 12 21:23:08.679542 master-0 kubenswrapper[31456]: I0312 21:23:08.678416 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-b8c8d7cc8-d9mhv"] Mar 12 21:23:09.260949 master-0 kubenswrapper[31456]: I0312 21:23:09.260837 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-w6dxn" event={"ID":"958532da-0a93-4ca7-8f90-ec711c5e2424","Type":"ContainerStarted","Data":"6e7bdc2e4ba6b304bac23ec505e8fc057789111f5b3a8b9b65a05e399b1f6145"} Mar 12 21:23:09.264412 master-0 kubenswrapper[31456]: I0312 21:23:09.261928 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-w6dxn" Mar 12 21:23:09.273844 master-0 
kubenswrapper[31456]: I0312 21:23:09.273230 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-4r28c" event={"ID":"0bac1947-ff1d-40f0-8b0c-25132780f302","Type":"ContainerStarted","Data":"7f35175ce7e8d9a196d0bb5218009d65ff2fe70cfb424351076409cdded7ee89"} Mar 12 21:23:09.276961 master-0 kubenswrapper[31456]: I0312 21:23:09.274090 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-4r28c" Mar 12 21:23:09.295828 master-0 kubenswrapper[31456]: I0312 21:23:09.295639 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bxrjr" event={"ID":"e0249756-1137-4101-ab47-90c11635a800","Type":"ContainerStarted","Data":"990ff3e15ad4c25d77467cd2031f91bd3eaa7e6ca125407fa8603c9327523d60"} Mar 12 21:23:09.302819 master-0 kubenswrapper[31456]: I0312 21:23:09.296515 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bxrjr" Mar 12 21:23:09.326318 master-0 kubenswrapper[31456]: I0312 21:23:09.324092 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-qn4x5" event={"ID":"171f4970-bb03-4ac6-86b1-47cf6639cccd","Type":"ContainerStarted","Data":"454091594eed002c7f3a3f3d7e98aa8be6d25d6af74b472d76704ec1a8c8ea21"} Mar 12 21:23:09.326318 master-0 kubenswrapper[31456]: I0312 21:23:09.324936 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-qn4x5" Mar 12 21:23:09.345509 master-0 kubenswrapper[31456]: I0312 21:23:09.337973 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-9qk2q" 
event={"ID":"6d01e27b-a1a1-4afb-a75b-4f7063e5c0d3","Type":"ContainerStarted","Data":"3d67faf59ca83a8784d6703758fd14df021fc66a591241283278e608f83abcf8"} Mar 12 21:23:09.345509 master-0 kubenswrapper[31456]: I0312 21:23:09.338727 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-9qk2q" Mar 12 21:23:09.366839 master-0 kubenswrapper[31456]: I0312 21:23:09.362062 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-w6dxn" podStartSLOduration=4.780391679 podStartE2EDuration="25.362045317s" podCreationTimestamp="2026-03-12 21:22:44 +0000 UTC" firstStartedPulling="2026-03-12 21:22:47.527718521 +0000 UTC m=+828.602323849" lastFinishedPulling="2026-03-12 21:23:08.109372159 +0000 UTC m=+849.183977487" observedRunningTime="2026-03-12 21:23:09.292108901 +0000 UTC m=+850.366714229" watchObservedRunningTime="2026-03-12 21:23:09.362045317 +0000 UTC m=+850.436650645" Mar 12 21:23:09.366839 master-0 kubenswrapper[31456]: I0312 21:23:09.364043 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-sjbws" event={"ID":"be6f68c8-9c6c-4fa3-b1b5-2205851ae9d4","Type":"ContainerStarted","Data":"7213c117c360ce1889b436da5d76d58ea6e474fea1ba9bbf5fdbabb776417689"} Mar 12 21:23:09.366839 master-0 kubenswrapper[31456]: I0312 21:23:09.364472 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-4r28c" podStartSLOduration=5.280357073 podStartE2EDuration="25.364465645s" podCreationTimestamp="2026-03-12 21:22:44 +0000 UTC" firstStartedPulling="2026-03-12 21:22:47.579019175 +0000 UTC m=+828.653624503" lastFinishedPulling="2026-03-12 21:23:07.663127737 +0000 UTC m=+848.737733075" observedRunningTime="2026-03-12 21:23:09.35928553 +0000 UTC m=+850.433890858" 
watchObservedRunningTime="2026-03-12 21:23:09.364465645 +0000 UTC m=+850.439070973" Mar 12 21:23:09.366839 master-0 kubenswrapper[31456]: I0312 21:23:09.364878 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-sjbws" Mar 12 21:23:09.383833 master-0 kubenswrapper[31456]: I0312 21:23:09.383653 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-dqp55" event={"ID":"f083454e-bbf9-4d06-b277-0303cfe15c31","Type":"ContainerStarted","Data":"8574b4adac231de36782d19c3d84ce433bb11ceca64f70a42ec1e63315cb412c"} Mar 12 21:23:09.391082 master-0 kubenswrapper[31456]: I0312 21:23:09.384223 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-dqp55" Mar 12 21:23:09.411834 master-0 kubenswrapper[31456]: I0312 21:23:09.410970 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-gtjq6" event={"ID":"fc382ab4-53d2-4db1-b50d-651c61e8e4fd","Type":"ContainerStarted","Data":"65908b498f62e1bbaedb3c210f90e34d18a4c5835665527ba730e11fb8719373"} Mar 12 21:23:09.411834 master-0 kubenswrapper[31456]: I0312 21:23:09.411722 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-gtjq6" Mar 12 21:23:09.423332 master-0 kubenswrapper[31456]: I0312 21:23:09.422972 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bxrjr" podStartSLOduration=5.174548727 podStartE2EDuration="25.422950454s" podCreationTimestamp="2026-03-12 21:22:44 +0000 UTC" firstStartedPulling="2026-03-12 21:22:47.42089299 +0000 UTC m=+828.495498318" lastFinishedPulling="2026-03-12 21:23:07.669294717 +0000 UTC m=+848.743900045" 
observedRunningTime="2026-03-12 21:23:09.42153901 +0000 UTC m=+850.496144338" watchObservedRunningTime="2026-03-12 21:23:09.422950454 +0000 UTC m=+850.497555782" Mar 12 21:23:09.446832 master-0 kubenswrapper[31456]: I0312 21:23:09.438111 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-l9rhw" event={"ID":"10dbd20e-560a-4ded-8caa-c72c8c11d865","Type":"ContainerStarted","Data":"37b3cdc76f86968ed66ba02f6c3edbb0ba48fd9d458ed29d6378d567c733e3d6"} Mar 12 21:23:09.446832 master-0 kubenswrapper[31456]: I0312 21:23:09.439090 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-l9rhw" Mar 12 21:23:09.459836 master-0 kubenswrapper[31456]: I0312 21:23:09.454076 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-d2l7t" event={"ID":"072c42ef-c704-4430-ae96-ba686e7a9e48","Type":"ContainerStarted","Data":"6b522a805526ce667913838daa73a4b1676260c46bedbc01ea17be1005eb7106"} Mar 12 21:23:09.459836 master-0 kubenswrapper[31456]: I0312 21:23:09.454921 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-d2l7t" Mar 12 21:23:09.497938 master-0 kubenswrapper[31456]: I0312 21:23:09.484775 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-l57sv" event={"ID":"b981b7c7-773d-4c60-a591-3e6fbb6fdacd","Type":"ContainerStarted","Data":"3de9c97c8fddd7dfc438851b8bf6350c56ac70268cce9b4506b53f1e2b0ae0f1"} Mar 12 21:23:09.497938 master-0 kubenswrapper[31456]: I0312 21:23:09.485335 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-l57sv" Mar 12 21:23:09.497938 master-0 kubenswrapper[31456]: I0312 
21:23:09.486248 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-9qk2q" podStartSLOduration=3.883873967 podStartE2EDuration="25.486226528s" podCreationTimestamp="2026-03-12 21:22:44 +0000 UTC" firstStartedPulling="2026-03-12 21:22:46.507727256 +0000 UTC m=+827.582332584" lastFinishedPulling="2026-03-12 21:23:08.110079817 +0000 UTC m=+849.184685145" observedRunningTime="2026-03-12 21:23:09.471134783 +0000 UTC m=+850.545740111" watchObservedRunningTime="2026-03-12 21:23:09.486226528 +0000 UTC m=+850.560831856" Mar 12 21:23:09.536982 master-0 kubenswrapper[31456]: I0312 21:23:09.526745 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-d9mhv" event={"ID":"392ee3fe-88fb-47f2-834a-115559661320","Type":"ContainerStarted","Data":"91d677289cbe99569996619082ef013dc948a9b14755de74a41619840a1e6808"} Mar 12 21:23:09.551885 master-0 kubenswrapper[31456]: I0312 21:23:09.542991 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-qn4x5" podStartSLOduration=4.5964457880000005 podStartE2EDuration="25.542974495s" podCreationTimestamp="2026-03-12 21:22:44 +0000 UTC" firstStartedPulling="2026-03-12 21:22:46.717106603 +0000 UTC m=+827.791711931" lastFinishedPulling="2026-03-12 21:23:07.66363531 +0000 UTC m=+848.738240638" observedRunningTime="2026-03-12 21:23:09.542227796 +0000 UTC m=+850.616833144" watchObservedRunningTime="2026-03-12 21:23:09.542974495 +0000 UTC m=+850.617579823" Mar 12 21:23:09.564314 master-0 kubenswrapper[31456]: I0312 21:23:09.561877 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-xc2vn" 
event={"ID":"fee2ceac-7ca8-416a-a8aa-e80cc6b37755","Type":"ContainerStarted","Data":"60fdf83d470e306293d0d19d08e6acb92ac0b81746c06adb9c03eefdd9be0744"} Mar 12 21:23:09.564314 master-0 kubenswrapper[31456]: I0312 21:23:09.561939 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-xc2vn" Mar 12 21:23:09.587094 master-0 kubenswrapper[31456]: I0312 21:23:09.586640 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-fnpjr" event={"ID":"1ffc5e6f-cc66-4a0c-bb40-58a3782fcbcb","Type":"ContainerStarted","Data":"4717a37fc296aa838535d57ac6d9acb9d3897b1bcb6ef0aaa53f331384e3e48f"} Mar 12 21:23:09.605851 master-0 kubenswrapper[31456]: I0312 21:23:09.604741 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-677c674df7-t72b8" event={"ID":"965c850a-6d1c-4824-b254-6bde6b919001","Type":"ContainerStarted","Data":"eebcaec8b7ac1169d8c8cc89b6d28d028f23e7c50eae3a033df1b76c6442cfa6"} Mar 12 21:23:09.605851 master-0 kubenswrapper[31456]: I0312 21:23:09.605493 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-677c674df7-t72b8" Mar 12 21:23:09.638511 master-0 kubenswrapper[31456]: I0312 21:23:09.638441 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-4q2jn" event={"ID":"db5b5da6-7eaa-4e23-ad31-7e977fd52810","Type":"ContainerStarted","Data":"f2d7ee668664abd35ac9fb3041b88659ad7aeff3a8f41b2d44f9fe2d3218c722"} Mar 12 21:23:09.639879 master-0 kubenswrapper[31456]: I0312 21:23:09.639182 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-4q2jn" Mar 12 21:23:09.668986 master-0 kubenswrapper[31456]: I0312 21:23:09.662319 31456 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-29sh2" event={"ID":"82f17f4c-c741-4cc8-8b68-c26ca155288d","Type":"ContainerStarted","Data":"8994097039c683801b226c08f80cb0775c7f4a749ef3c7f1f25457a493638e5e"} Mar 12 21:23:09.668986 master-0 kubenswrapper[31456]: I0312 21:23:09.662498 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-29sh2" Mar 12 21:23:09.711678 master-0 kubenswrapper[31456]: I0312 21:23:09.709249 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-mp579" event={"ID":"c0d101d2-bf56-4410-8499-987107f3bc9f","Type":"ContainerStarted","Data":"943637115a38f1423aa24ca5a8b0925027edc80ea27ff045f93657a0e7ef0cb0"} Mar 12 21:23:09.711678 master-0 kubenswrapper[31456]: I0312 21:23:09.711169 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-mp579" Mar 12 21:23:09.716441 master-0 kubenswrapper[31456]: I0312 21:23:09.716364 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-gtjq6" podStartSLOduration=5.054544878 podStartE2EDuration="25.716344679s" podCreationTimestamp="2026-03-12 21:22:44 +0000 UTC" firstStartedPulling="2026-03-12 21:22:47.51531261 +0000 UTC m=+828.589917938" lastFinishedPulling="2026-03-12 21:23:08.177112411 +0000 UTC m=+849.251717739" observedRunningTime="2026-03-12 21:23:09.690115003 +0000 UTC m=+850.764720341" watchObservedRunningTime="2026-03-12 21:23:09.716344679 +0000 UTC m=+850.790950007" Mar 12 21:23:09.740671 master-0 kubenswrapper[31456]: I0312 21:23:09.740621 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-97lcz" 
event={"ID":"00bedb2f-42ce-446e-84ce-0511132bc5bd","Type":"ContainerStarted","Data":"23ad4cc08cdee75a97b31aa0a2477453ac290d33896476996911889fbbb1487a"} Mar 12 21:23:09.742621 master-0 kubenswrapper[31456]: I0312 21:23:09.742606 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-97lcz" Mar 12 21:23:09.759312 master-0 kubenswrapper[31456]: I0312 21:23:09.758547 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2nr4c" event={"ID":"e2d93f6b-adfe-4928-8921-2e1e2cf01682","Type":"ContainerStarted","Data":"08bf0859577760284a57cd68c1caca910c196ab7ec5804342e585613c8e334db"} Mar 12 21:23:09.769699 master-0 kubenswrapper[31456]: I0312 21:23:09.769608 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-dqp55" podStartSLOduration=6.153299073 podStartE2EDuration="25.769589661s" podCreationTimestamp="2026-03-12 21:22:44 +0000 UTC" firstStartedPulling="2026-03-12 21:22:46.218323887 +0000 UTC m=+827.292929205" lastFinishedPulling="2026-03-12 21:23:05.834614435 +0000 UTC m=+846.909219793" observedRunningTime="2026-03-12 21:23:09.75800936 +0000 UTC m=+850.832614688" watchObservedRunningTime="2026-03-12 21:23:09.769589661 +0000 UTC m=+850.844194989" Mar 12 21:23:09.799518 master-0 kubenswrapper[31456]: I0312 21:23:09.797065 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-pjrnx" event={"ID":"772ae1ba-3abe-49d4-ade9-b0aac087acf2","Type":"ContainerStarted","Data":"8dafa2423742426b547d4d61ea73b7350dd46ed2a2a686aea0b9e59aafcfacfa"} Mar 12 21:23:09.799518 master-0 kubenswrapper[31456]: I0312 21:23:09.797478 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-pjrnx" Mar 12 
21:23:09.923293 master-0 kubenswrapper[31456]: I0312 21:23:09.919429 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-l9rhw" podStartSLOduration=8.650413772 podStartE2EDuration="25.919409584s" podCreationTimestamp="2026-03-12 21:22:44 +0000 UTC" firstStartedPulling="2026-03-12 21:22:46.714510161 +0000 UTC m=+827.789115489" lastFinishedPulling="2026-03-12 21:23:03.983505973 +0000 UTC m=+845.058111301" observedRunningTime="2026-03-12 21:23:09.914368051 +0000 UTC m=+850.988973379" watchObservedRunningTime="2026-03-12 21:23:09.919409584 +0000 UTC m=+850.994014922" Mar 12 21:23:10.011828 master-0 kubenswrapper[31456]: I0312 21:23:10.009363 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-l57sv" podStartSLOduration=4.570313925 podStartE2EDuration="26.009345345s" podCreationTimestamp="2026-03-12 21:22:44 +0000 UTC" firstStartedPulling="2026-03-12 21:22:46.224580389 +0000 UTC m=+827.299185717" lastFinishedPulling="2026-03-12 21:23:07.663611799 +0000 UTC m=+848.738217137" observedRunningTime="2026-03-12 21:23:09.971875016 +0000 UTC m=+851.046480344" watchObservedRunningTime="2026-03-12 21:23:10.009345345 +0000 UTC m=+851.083950663" Mar 12 21:23:10.082124 master-0 kubenswrapper[31456]: I0312 21:23:10.081950 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-d2l7t" podStartSLOduration=8.263686123 podStartE2EDuration="26.081930925s" podCreationTimestamp="2026-03-12 21:22:44 +0000 UTC" firstStartedPulling="2026-03-12 21:22:45.397156004 +0000 UTC m=+826.471761332" lastFinishedPulling="2026-03-12 21:23:03.215400806 +0000 UTC m=+844.290006134" observedRunningTime="2026-03-12 21:23:10.012388038 +0000 UTC m=+851.086993366" watchObservedRunningTime="2026-03-12 21:23:10.081930925 +0000 UTC 
m=+851.156536253" Mar 12 21:23:10.126401 master-0 kubenswrapper[31456]: I0312 21:23:10.126091 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-sjbws" podStartSLOduration=4.947510411 podStartE2EDuration="26.126065105s" podCreationTimestamp="2026-03-12 21:22:44 +0000 UTC" firstStartedPulling="2026-03-12 21:22:46.484551023 +0000 UTC m=+827.559156351" lastFinishedPulling="2026-03-12 21:23:07.663105717 +0000 UTC m=+848.737711045" observedRunningTime="2026-03-12 21:23:10.077212311 +0000 UTC m=+851.151817639" watchObservedRunningTime="2026-03-12 21:23:10.126065105 +0000 UTC m=+851.200670433" Mar 12 21:23:10.148865 master-0 kubenswrapper[31456]: I0312 21:23:10.148776 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-mp579" podStartSLOduration=4.55647108 podStartE2EDuration="26.148759906s" podCreationTimestamp="2026-03-12 21:22:44 +0000 UTC" firstStartedPulling="2026-03-12 21:22:46.517094203 +0000 UTC m=+827.591699531" lastFinishedPulling="2026-03-12 21:23:08.109383029 +0000 UTC m=+849.183988357" observedRunningTime="2026-03-12 21:23:10.120534641 +0000 UTC m=+851.195139969" watchObservedRunningTime="2026-03-12 21:23:10.148759906 +0000 UTC m=+851.223365234" Mar 12 21:23:10.153820 master-0 kubenswrapper[31456]: I0312 21:23:10.150742 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-29sh2" podStartSLOduration=4.529209347 podStartE2EDuration="26.150733984s" podCreationTimestamp="2026-03-12 21:22:44 +0000 UTC" firstStartedPulling="2026-03-12 21:22:46.042133354 +0000 UTC m=+827.116738682" lastFinishedPulling="2026-03-12 21:23:07.663657991 +0000 UTC m=+848.738263319" observedRunningTime="2026-03-12 21:23:10.148085409 +0000 UTC m=+851.222690747" watchObservedRunningTime="2026-03-12 
21:23:10.150733984 +0000 UTC m=+851.225339312" Mar 12 21:23:10.188597 master-0 kubenswrapper[31456]: I0312 21:23:10.188322 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-4q2jn" podStartSLOduration=4.171449714 podStartE2EDuration="26.188299465s" podCreationTimestamp="2026-03-12 21:22:44 +0000 UTC" firstStartedPulling="2026-03-12 21:22:45.646247185 +0000 UTC m=+826.720852513" lastFinishedPulling="2026-03-12 21:23:07.663096936 +0000 UTC m=+848.737702264" observedRunningTime="2026-03-12 21:23:10.182189486 +0000 UTC m=+851.256794814" watchObservedRunningTime="2026-03-12 21:23:10.188299465 +0000 UTC m=+851.262904793" Mar 12 21:23:10.208449 master-0 kubenswrapper[31456]: I0312 21:23:10.208367 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-xc2vn" podStartSLOduration=4.591252222 podStartE2EDuration="26.20834869s" podCreationTimestamp="2026-03-12 21:22:44 +0000 UTC" firstStartedPulling="2026-03-12 21:22:46.492160698 +0000 UTC m=+827.566766026" lastFinishedPulling="2026-03-12 21:23:08.109257166 +0000 UTC m=+849.183862494" observedRunningTime="2026-03-12 21:23:10.207144431 +0000 UTC m=+851.281749759" watchObservedRunningTime="2026-03-12 21:23:10.20834869 +0000 UTC m=+851.282954018" Mar 12 21:23:10.258235 master-0 kubenswrapper[31456]: I0312 21:23:10.249167 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2nr4c" podStartSLOduration=4.498170326 podStartE2EDuration="25.24914981s" podCreationTimestamp="2026-03-12 21:22:45 +0000 UTC" firstStartedPulling="2026-03-12 21:22:47.504531139 +0000 UTC m=+828.579136467" lastFinishedPulling="2026-03-12 21:23:08.255510623 +0000 UTC m=+849.330115951" observedRunningTime="2026-03-12 21:23:10.243231267 +0000 UTC m=+851.317836595" 
watchObservedRunningTime="2026-03-12 21:23:10.24914981 +0000 UTC m=+851.323755138" Mar 12 21:23:10.441841 master-0 kubenswrapper[31456]: I0312 21:23:10.435327 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-677c674df7-t72b8" podStartSLOduration=5.784460737 podStartE2EDuration="26.435312484s" podCreationTimestamp="2026-03-12 21:22:44 +0000 UTC" firstStartedPulling="2026-03-12 21:22:47.527867034 +0000 UTC m=+828.602472362" lastFinishedPulling="2026-03-12 21:23:08.178718751 +0000 UTC m=+849.253324109" observedRunningTime="2026-03-12 21:23:10.432136908 +0000 UTC m=+851.506742226" watchObservedRunningTime="2026-03-12 21:23:10.435312484 +0000 UTC m=+851.509917812" Mar 12 21:23:10.499205 master-0 kubenswrapper[31456]: I0312 21:23:10.499128 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-97lcz" podStartSLOduration=5.055403639 podStartE2EDuration="26.499108852s" podCreationTimestamp="2026-03-12 21:22:44 +0000 UTC" firstStartedPulling="2026-03-12 21:22:46.693757087 +0000 UTC m=+827.768362415" lastFinishedPulling="2026-03-12 21:23:08.13746229 +0000 UTC m=+849.212067628" observedRunningTime="2026-03-12 21:23:10.46357916 +0000 UTC m=+851.538184488" watchObservedRunningTime="2026-03-12 21:23:10.499108852 +0000 UTC m=+851.573714180" Mar 12 21:23:10.522756 master-0 kubenswrapper[31456]: I0312 21:23:10.522685 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-pjrnx" podStartSLOduration=5.920162398 podStartE2EDuration="26.522665143s" podCreationTimestamp="2026-03-12 21:22:44 +0000 UTC" firstStartedPulling="2026-03-12 21:22:47.587364347 +0000 UTC m=+828.661969675" lastFinishedPulling="2026-03-12 21:23:08.189867092 +0000 UTC m=+849.264472420" observedRunningTime="2026-03-12 21:23:10.510136199 +0000 UTC 
m=+851.584741527" watchObservedRunningTime="2026-03-12 21:23:10.522665143 +0000 UTC m=+851.597270471" Mar 12 21:23:10.525034 master-0 kubenswrapper[31456]: I0312 21:23:10.525006 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-fnpjr" podStartSLOduration=5.994883351 podStartE2EDuration="26.525001049s" podCreationTimestamp="2026-03-12 21:22:44 +0000 UTC" firstStartedPulling="2026-03-12 21:22:47.579172449 +0000 UTC m=+828.653777777" lastFinishedPulling="2026-03-12 21:23:08.109290147 +0000 UTC m=+849.183895475" observedRunningTime="2026-03-12 21:23:10.487521761 +0000 UTC m=+851.562127089" watchObservedRunningTime="2026-03-12 21:23:10.525001049 +0000 UTC m=+851.599606377" Mar 12 21:23:10.814038 master-0 kubenswrapper[31456]: I0312 21:23:10.813874 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-fnpjr" Mar 12 21:23:12.827921 master-0 kubenswrapper[31456]: I0312 21:23:12.827768 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-d9mhv" event={"ID":"392ee3fe-88fb-47f2-834a-115559661320","Type":"ContainerStarted","Data":"3ab83f2fb374c6dc418fa65fdf3e5ed57bc7ebbf4106d6a32cb7a30f0657781b"} Mar 12 21:23:12.827921 master-0 kubenswrapper[31456]: I0312 21:23:12.827910 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-d9mhv" Mar 12 21:23:12.858607 master-0 kubenswrapper[31456]: I0312 21:23:12.858340 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-d9mhv" podStartSLOduration=25.142009702 podStartE2EDuration="28.858319084s" podCreationTimestamp="2026-03-12 21:22:44 +0000 UTC" firstStartedPulling="2026-03-12 21:23:08.791146143 +0000 UTC 
m=+849.865751471" lastFinishedPulling="2026-03-12 21:23:12.507455515 +0000 UTC m=+853.582060853" observedRunningTime="2026-03-12 21:23:12.849992712 +0000 UTC m=+853.924598050" watchObservedRunningTime="2026-03-12 21:23:12.858319084 +0000 UTC m=+853.932924412" Mar 12 21:23:14.670427 master-0 kubenswrapper[31456]: I0312 21:23:14.670341 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-4q2jn" Mar 12 21:23:14.683618 master-0 kubenswrapper[31456]: I0312 21:23:14.683554 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-d2l7t" Mar 12 21:23:14.739927 master-0 kubenswrapper[31456]: I0312 21:23:14.738710 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-dqp55" Mar 12 21:23:14.759863 master-0 kubenswrapper[31456]: I0312 21:23:14.759739 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-xc2vn" Mar 12 21:23:14.765883 master-0 kubenswrapper[31456]: I0312 21:23:14.765402 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-l57sv" Mar 12 21:23:14.819795 master-0 kubenswrapper[31456]: I0312 21:23:14.819725 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-29sh2" Mar 12 21:23:14.936494 master-0 kubenswrapper[31456]: I0312 21:23:14.934095 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-9qk2q" Mar 12 21:23:15.119697 master-0 kubenswrapper[31456]: I0312 21:23:15.119625 31456 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-mp579" Mar 12 21:23:15.159856 master-0 kubenswrapper[31456]: I0312 21:23:15.147562 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-qn4x5" Mar 12 21:23:15.244929 master-0 kubenswrapper[31456]: I0312 21:23:15.228272 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-sjbws" Mar 12 21:23:15.453971 master-0 kubenswrapper[31456]: I0312 21:23:15.453888 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-l9rhw" Mar 12 21:23:15.561854 master-0 kubenswrapper[31456]: I0312 21:23:15.561495 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-97lcz" Mar 12 21:23:15.617147 master-0 kubenswrapper[31456]: I0312 21:23:15.617060 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-pjrnx" Mar 12 21:23:15.676802 master-0 kubenswrapper[31456]: I0312 21:23:15.676714 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-gtjq6" Mar 12 21:23:15.713595 master-0 kubenswrapper[31456]: I0312 21:23:15.713504 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-w6dxn" Mar 12 21:23:15.726514 master-0 kubenswrapper[31456]: I0312 21:23:15.725644 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-677c674df7-t72b8" Mar 12 21:23:15.748716 master-0 kubenswrapper[31456]: 
I0312 21:23:15.748646 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-4r28c" Mar 12 21:23:16.055957 master-0 kubenswrapper[31456]: I0312 21:23:16.055850 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-bxrjr" Mar 12 21:23:16.068266 master-0 kubenswrapper[31456]: I0312 21:23:16.068186 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-fnpjr" Mar 12 21:23:17.208403 master-0 kubenswrapper[31456]: I0312 21:23:17.208285 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf444169-4293-48aa-ac84-6c38836cd316-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7\" (UID: \"bf444169-4293-48aa-ac84-6c38836cd316\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7" Mar 12 21:23:17.216935 master-0 kubenswrapper[31456]: I0312 21:23:17.215469 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf444169-4293-48aa-ac84-6c38836cd316-cert\") pod \"openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7\" (UID: \"bf444169-4293-48aa-ac84-6c38836cd316\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7" Mar 12 21:23:17.338816 master-0 kubenswrapper[31456]: I0312 21:23:17.338706 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7" Mar 12 21:23:17.719960 master-0 kubenswrapper[31456]: I0312 21:23:17.719695 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-zdpb2\" (UID: \"e3c680b7-4c4e-45a5-839c-a07be817bcab\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-zdpb2" Mar 12 21:23:17.720268 master-0 kubenswrapper[31456]: I0312 21:23:17.719975 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-zdpb2\" (UID: \"e3c680b7-4c4e-45a5-839c-a07be817bcab\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-zdpb2" Mar 12 21:23:17.724454 master-0 kubenswrapper[31456]: I0312 21:23:17.724392 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-webhook-certs\") pod \"openstack-operator-controller-manager-7795b46f77-zdpb2\" (UID: \"e3c680b7-4c4e-45a5-839c-a07be817bcab\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-zdpb2" Mar 12 21:23:17.725762 master-0 kubenswrapper[31456]: I0312 21:23:17.725691 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3c680b7-4c4e-45a5-839c-a07be817bcab-metrics-certs\") pod \"openstack-operator-controller-manager-7795b46f77-zdpb2\" (UID: \"e3c680b7-4c4e-45a5-839c-a07be817bcab\") " pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-zdpb2" Mar 12 21:23:17.892101 master-0 kubenswrapper[31456]: I0312 
21:23:17.892011 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-zdpb2" Mar 12 21:23:18.130567 master-0 kubenswrapper[31456]: W0312 21:23:18.130489 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf444169_4293_48aa_ac84_6c38836cd316.slice/crio-ea06ee4cee433a2da0040851faeff60f95e4b039ffcefd1bf5c0e475d996e770 WatchSource:0}: Error finding container ea06ee4cee433a2da0040851faeff60f95e4b039ffcefd1bf5c0e475d996e770: Status 404 returned error can't find the container with id ea06ee4cee433a2da0040851faeff60f95e4b039ffcefd1bf5c0e475d996e770 Mar 12 21:23:18.137563 master-0 kubenswrapper[31456]: I0312 21:23:18.137489 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7"] Mar 12 21:23:18.436523 master-0 kubenswrapper[31456]: I0312 21:23:18.436456 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7795b46f77-zdpb2"] Mar 12 21:23:18.450704 master-0 kubenswrapper[31456]: W0312 21:23:18.450609 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3c680b7_4c4e_45a5_839c_a07be817bcab.slice/crio-7c9f80b6a2fa45e8769c5d1449705a69365e46466716f710d3a53b5afa912e11 WatchSource:0}: Error finding container 7c9f80b6a2fa45e8769c5d1449705a69365e46466716f710d3a53b5afa912e11: Status 404 returned error can't find the container with id 7c9f80b6a2fa45e8769c5d1449705a69365e46466716f710d3a53b5afa912e11 Mar 12 21:23:18.930559 master-0 kubenswrapper[31456]: I0312 21:23:18.930482 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-zdpb2" 
event={"ID":"e3c680b7-4c4e-45a5-839c-a07be817bcab","Type":"ContainerStarted","Data":"6bd5a2c1877f73aa87a7a92da283b9572902de7c0bfb7b66640b9180767d46e5"} Mar 12 21:23:18.930559 master-0 kubenswrapper[31456]: I0312 21:23:18.930551 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-zdpb2" event={"ID":"e3c680b7-4c4e-45a5-839c-a07be817bcab","Type":"ContainerStarted","Data":"7c9f80b6a2fa45e8769c5d1449705a69365e46466716f710d3a53b5afa912e11"} Mar 12 21:23:18.930973 master-0 kubenswrapper[31456]: I0312 21:23:18.930673 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-zdpb2" Mar 12 21:23:18.931803 master-0 kubenswrapper[31456]: I0312 21:23:18.931726 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7" event={"ID":"bf444169-4293-48aa-ac84-6c38836cd316","Type":"ContainerStarted","Data":"ea06ee4cee433a2da0040851faeff60f95e4b039ffcefd1bf5c0e475d996e770"} Mar 12 21:23:18.989574 master-0 kubenswrapper[31456]: I0312 21:23:18.989472 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-zdpb2" podStartSLOduration=33.989449568 podStartE2EDuration="33.989449568s" podCreationTimestamp="2026-03-12 21:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:23:18.980609243 +0000 UTC m=+860.055214571" watchObservedRunningTime="2026-03-12 21:23:18.989449568 +0000 UTC m=+860.064054906" Mar 12 21:23:20.675958 master-0 kubenswrapper[31456]: I0312 21:23:20.675885 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-b8c8d7cc8-d9mhv" Mar 12 21:23:21.965945 master-0 
kubenswrapper[31456]: I0312 21:23:21.965887 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7" event={"ID":"bf444169-4293-48aa-ac84-6c38836cd316","Type":"ContainerStarted","Data":"d93f03f628b9587baf85ff1d3bc9b415906a6c2f037d795c0e67643c00430d9f"} Mar 12 21:23:21.966655 master-0 kubenswrapper[31456]: I0312 21:23:21.966633 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7" Mar 12 21:23:22.020325 master-0 kubenswrapper[31456]: I0312 21:23:22.016915 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7" podStartSLOduration=34.854209329 podStartE2EDuration="38.016891455s" podCreationTimestamp="2026-03-12 21:22:44 +0000 UTC" firstStartedPulling="2026-03-12 21:23:18.153294761 +0000 UTC m=+859.227900099" lastFinishedPulling="2026-03-12 21:23:21.315976887 +0000 UTC m=+862.390582225" observedRunningTime="2026-03-12 21:23:22.01586601 +0000 UTC m=+863.090471368" watchObservedRunningTime="2026-03-12 21:23:22.016891455 +0000 UTC m=+863.091496813" Mar 12 21:23:27.349781 master-0 kubenswrapper[31456]: I0312 21:23:27.349722 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c969dbbcd-6ths7" Mar 12 21:23:27.904045 master-0 kubenswrapper[31456]: I0312 21:23:27.903970 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7795b46f77-zdpb2" Mar 12 21:24:09.737292 master-0 kubenswrapper[31456]: I0312 21:24:09.708712 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-gvpkv"] Mar 12 21:24:09.737292 master-0 kubenswrapper[31456]: I0312 21:24:09.716006 31456 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685c76cf85-gvpkv" Mar 12 21:24:09.737292 master-0 kubenswrapper[31456]: I0312 21:24:09.717765 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Mar 12 21:24:09.737292 master-0 kubenswrapper[31456]: I0312 21:24:09.717950 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Mar 12 21:24:09.737292 master-0 kubenswrapper[31456]: I0312 21:24:09.731213 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-gvpkv"] Mar 12 21:24:09.758926 master-0 kubenswrapper[31456]: I0312 21:24:09.758889 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Mar 12 21:24:09.775724 master-0 kubenswrapper[31456]: I0312 21:24:09.775671 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d282c2c6-09bd-4fa5-a4e2-0dd250332ade-config\") pod \"dnsmasq-dns-685c76cf85-gvpkv\" (UID: \"d282c2c6-09bd-4fa5-a4e2-0dd250332ade\") " pod="openstack/dnsmasq-dns-685c76cf85-gvpkv" Mar 12 21:24:09.775724 master-0 kubenswrapper[31456]: I0312 21:24:09.775721 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmzk4\" (UniqueName: \"kubernetes.io/projected/d282c2c6-09bd-4fa5-a4e2-0dd250332ade-kube-api-access-tmzk4\") pod \"dnsmasq-dns-685c76cf85-gvpkv\" (UID: \"d282c2c6-09bd-4fa5-a4e2-0dd250332ade\") " pod="openstack/dnsmasq-dns-685c76cf85-gvpkv" Mar 12 21:24:09.801153 master-0 kubenswrapper[31456]: I0312 21:24:09.800445 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-69h5n"] Mar 12 21:24:09.803013 master-0 kubenswrapper[31456]: I0312 21:24:09.802963 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8476fd89bc-69h5n" Mar 12 21:24:09.806998 master-0 kubenswrapper[31456]: I0312 21:24:09.806963 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Mar 12 21:24:09.837927 master-0 kubenswrapper[31456]: I0312 21:24:09.830880 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-69h5n"] Mar 12 21:24:09.881870 master-0 kubenswrapper[31456]: I0312 21:24:09.876989 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d282c2c6-09bd-4fa5-a4e2-0dd250332ade-config\") pod \"dnsmasq-dns-685c76cf85-gvpkv\" (UID: \"d282c2c6-09bd-4fa5-a4e2-0dd250332ade\") " pod="openstack/dnsmasq-dns-685c76cf85-gvpkv" Mar 12 21:24:09.881870 master-0 kubenswrapper[31456]: I0312 21:24:09.877096 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmzk4\" (UniqueName: \"kubernetes.io/projected/d282c2c6-09bd-4fa5-a4e2-0dd250332ade-kube-api-access-tmzk4\") pod \"dnsmasq-dns-685c76cf85-gvpkv\" (UID: \"d282c2c6-09bd-4fa5-a4e2-0dd250332ade\") " pod="openstack/dnsmasq-dns-685c76cf85-gvpkv" Mar 12 21:24:09.881870 master-0 kubenswrapper[31456]: I0312 21:24:09.880153 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d282c2c6-09bd-4fa5-a4e2-0dd250332ade-config\") pod \"dnsmasq-dns-685c76cf85-gvpkv\" (UID: \"d282c2c6-09bd-4fa5-a4e2-0dd250332ade\") " pod="openstack/dnsmasq-dns-685c76cf85-gvpkv" Mar 12 21:24:09.905271 master-0 kubenswrapper[31456]: I0312 21:24:09.905226 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmzk4\" (UniqueName: \"kubernetes.io/projected/d282c2c6-09bd-4fa5-a4e2-0dd250332ade-kube-api-access-tmzk4\") pod \"dnsmasq-dns-685c76cf85-gvpkv\" (UID: \"d282c2c6-09bd-4fa5-a4e2-0dd250332ade\") " 
pod="openstack/dnsmasq-dns-685c76cf85-gvpkv" Mar 12 21:24:09.981068 master-0 kubenswrapper[31456]: I0312 21:24:09.979570 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8drfs\" (UniqueName: \"kubernetes.io/projected/d04e1418-b358-485d-9a03-ed37d0f15d96-kube-api-access-8drfs\") pod \"dnsmasq-dns-8476fd89bc-69h5n\" (UID: \"d04e1418-b358-485d-9a03-ed37d0f15d96\") " pod="openstack/dnsmasq-dns-8476fd89bc-69h5n" Mar 12 21:24:09.981068 master-0 kubenswrapper[31456]: I0312 21:24:09.979630 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d04e1418-b358-485d-9a03-ed37d0f15d96-config\") pod \"dnsmasq-dns-8476fd89bc-69h5n\" (UID: \"d04e1418-b358-485d-9a03-ed37d0f15d96\") " pod="openstack/dnsmasq-dns-8476fd89bc-69h5n" Mar 12 21:24:09.981068 master-0 kubenswrapper[31456]: I0312 21:24:09.979684 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d04e1418-b358-485d-9a03-ed37d0f15d96-dns-svc\") pod \"dnsmasq-dns-8476fd89bc-69h5n\" (UID: \"d04e1418-b358-485d-9a03-ed37d0f15d96\") " pod="openstack/dnsmasq-dns-8476fd89bc-69h5n" Mar 12 21:24:10.081345 master-0 kubenswrapper[31456]: I0312 21:24:10.080997 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8drfs\" (UniqueName: \"kubernetes.io/projected/d04e1418-b358-485d-9a03-ed37d0f15d96-kube-api-access-8drfs\") pod \"dnsmasq-dns-8476fd89bc-69h5n\" (UID: \"d04e1418-b358-485d-9a03-ed37d0f15d96\") " pod="openstack/dnsmasq-dns-8476fd89bc-69h5n" Mar 12 21:24:10.081345 master-0 kubenswrapper[31456]: I0312 21:24:10.081049 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d04e1418-b358-485d-9a03-ed37d0f15d96-config\") pod 
\"dnsmasq-dns-8476fd89bc-69h5n\" (UID: \"d04e1418-b358-485d-9a03-ed37d0f15d96\") " pod="openstack/dnsmasq-dns-8476fd89bc-69h5n" Mar 12 21:24:10.081345 master-0 kubenswrapper[31456]: I0312 21:24:10.081227 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d04e1418-b358-485d-9a03-ed37d0f15d96-dns-svc\") pod \"dnsmasq-dns-8476fd89bc-69h5n\" (UID: \"d04e1418-b358-485d-9a03-ed37d0f15d96\") " pod="openstack/dnsmasq-dns-8476fd89bc-69h5n" Mar 12 21:24:10.082074 master-0 kubenswrapper[31456]: I0312 21:24:10.082040 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d04e1418-b358-485d-9a03-ed37d0f15d96-dns-svc\") pod \"dnsmasq-dns-8476fd89bc-69h5n\" (UID: \"d04e1418-b358-485d-9a03-ed37d0f15d96\") " pod="openstack/dnsmasq-dns-8476fd89bc-69h5n" Mar 12 21:24:10.082247 master-0 kubenswrapper[31456]: I0312 21:24:10.082202 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d04e1418-b358-485d-9a03-ed37d0f15d96-config\") pod \"dnsmasq-dns-8476fd89bc-69h5n\" (UID: \"d04e1418-b358-485d-9a03-ed37d0f15d96\") " pod="openstack/dnsmasq-dns-8476fd89bc-69h5n" Mar 12 21:24:10.096252 master-0 kubenswrapper[31456]: I0312 21:24:10.096202 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8drfs\" (UniqueName: \"kubernetes.io/projected/d04e1418-b358-485d-9a03-ed37d0f15d96-kube-api-access-8drfs\") pod \"dnsmasq-dns-8476fd89bc-69h5n\" (UID: \"d04e1418-b358-485d-9a03-ed37d0f15d96\") " pod="openstack/dnsmasq-dns-8476fd89bc-69h5n" Mar 12 21:24:10.099276 master-0 kubenswrapper[31456]: I0312 21:24:10.099213 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-685c76cf85-gvpkv" Mar 12 21:24:10.154380 master-0 kubenswrapper[31456]: I0312 21:24:10.154311 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8476fd89bc-69h5n" Mar 12 21:24:10.574448 master-0 kubenswrapper[31456]: I0312 21:24:10.574392 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-gvpkv"] Mar 12 21:24:10.610004 master-0 kubenswrapper[31456]: I0312 21:24:10.609907 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685c76cf85-gvpkv" event={"ID":"d282c2c6-09bd-4fa5-a4e2-0dd250332ade","Type":"ContainerStarted","Data":"2f80e98b6599157f9792a50a92dae11e8324b1de1d5043176efe65cb588b83f6"} Mar 12 21:24:10.672762 master-0 kubenswrapper[31456]: I0312 21:24:10.672707 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-69h5n"] Mar 12 21:24:10.674842 master-0 kubenswrapper[31456]: W0312 21:24:10.674792 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd04e1418_b358_485d_9a03_ed37d0f15d96.slice/crio-eca9d83043cb77e624bcffcf8479a2ef45621efcdda3bed02738a902a1a133cd WatchSource:0}: Error finding container eca9d83043cb77e624bcffcf8479a2ef45621efcdda3bed02738a902a1a133cd: Status 404 returned error can't find the container with id eca9d83043cb77e624bcffcf8479a2ef45621efcdda3bed02738a902a1a133cd Mar 12 21:24:11.631152 master-0 kubenswrapper[31456]: I0312 21:24:11.631065 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8476fd89bc-69h5n" event={"ID":"d04e1418-b358-485d-9a03-ed37d0f15d96","Type":"ContainerStarted","Data":"eca9d83043cb77e624bcffcf8479a2ef45621efcdda3bed02738a902a1a133cd"} Mar 12 21:24:11.660482 master-0 kubenswrapper[31456]: I0312 21:24:11.660395 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-685c76cf85-gvpkv"] Mar 12 21:24:11.707350 master-0 kubenswrapper[31456]: I0312 21:24:11.707304 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-586dbdbb8c-9gbwl"] Mar 12 21:24:11.721778 master-0 kubenswrapper[31456]: I0312 21:24:11.718225 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586dbdbb8c-9gbwl" Mar 12 21:24:11.734015 master-0 kubenswrapper[31456]: I0312 21:24:11.730552 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3db71973-0c81-4806-b0f5-435f08829dcc-config\") pod \"dnsmasq-dns-586dbdbb8c-9gbwl\" (UID: \"3db71973-0c81-4806-b0f5-435f08829dcc\") " pod="openstack/dnsmasq-dns-586dbdbb8c-9gbwl" Mar 12 21:24:11.734015 master-0 kubenswrapper[31456]: I0312 21:24:11.730781 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf49k\" (UniqueName: \"kubernetes.io/projected/3db71973-0c81-4806-b0f5-435f08829dcc-kube-api-access-cf49k\") pod \"dnsmasq-dns-586dbdbb8c-9gbwl\" (UID: \"3db71973-0c81-4806-b0f5-435f08829dcc\") " pod="openstack/dnsmasq-dns-586dbdbb8c-9gbwl" Mar 12 21:24:11.734015 master-0 kubenswrapper[31456]: I0312 21:24:11.730910 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3db71973-0c81-4806-b0f5-435f08829dcc-dns-svc\") pod \"dnsmasq-dns-586dbdbb8c-9gbwl\" (UID: \"3db71973-0c81-4806-b0f5-435f08829dcc\") " pod="openstack/dnsmasq-dns-586dbdbb8c-9gbwl" Mar 12 21:24:11.737927 master-0 kubenswrapper[31456]: I0312 21:24:11.737432 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586dbdbb8c-9gbwl"] Mar 12 21:24:11.833673 master-0 kubenswrapper[31456]: I0312 21:24:11.833611 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/3db71973-0c81-4806-b0f5-435f08829dcc-config\") pod \"dnsmasq-dns-586dbdbb8c-9gbwl\" (UID: \"3db71973-0c81-4806-b0f5-435f08829dcc\") " pod="openstack/dnsmasq-dns-586dbdbb8c-9gbwl" Mar 12 21:24:11.834052 master-0 kubenswrapper[31456]: I0312 21:24:11.833701 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf49k\" (UniqueName: \"kubernetes.io/projected/3db71973-0c81-4806-b0f5-435f08829dcc-kube-api-access-cf49k\") pod \"dnsmasq-dns-586dbdbb8c-9gbwl\" (UID: \"3db71973-0c81-4806-b0f5-435f08829dcc\") " pod="openstack/dnsmasq-dns-586dbdbb8c-9gbwl" Mar 12 21:24:11.834052 master-0 kubenswrapper[31456]: I0312 21:24:11.833756 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3db71973-0c81-4806-b0f5-435f08829dcc-dns-svc\") pod \"dnsmasq-dns-586dbdbb8c-9gbwl\" (UID: \"3db71973-0c81-4806-b0f5-435f08829dcc\") " pod="openstack/dnsmasq-dns-586dbdbb8c-9gbwl" Mar 12 21:24:11.834691 master-0 kubenswrapper[31456]: I0312 21:24:11.834645 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3db71973-0c81-4806-b0f5-435f08829dcc-config\") pod \"dnsmasq-dns-586dbdbb8c-9gbwl\" (UID: \"3db71973-0c81-4806-b0f5-435f08829dcc\") " pod="openstack/dnsmasq-dns-586dbdbb8c-9gbwl" Mar 12 21:24:11.838698 master-0 kubenswrapper[31456]: I0312 21:24:11.838655 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3db71973-0c81-4806-b0f5-435f08829dcc-dns-svc\") pod \"dnsmasq-dns-586dbdbb8c-9gbwl\" (UID: \"3db71973-0c81-4806-b0f5-435f08829dcc\") " pod="openstack/dnsmasq-dns-586dbdbb8c-9gbwl" Mar 12 21:24:11.854325 master-0 kubenswrapper[31456]: I0312 21:24:11.854268 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf49k\" (UniqueName: 
\"kubernetes.io/projected/3db71973-0c81-4806-b0f5-435f08829dcc-kube-api-access-cf49k\") pod \"dnsmasq-dns-586dbdbb8c-9gbwl\" (UID: \"3db71973-0c81-4806-b0f5-435f08829dcc\") " pod="openstack/dnsmasq-dns-586dbdbb8c-9gbwl" Mar 12 21:24:12.058754 master-0 kubenswrapper[31456]: I0312 21:24:12.058634 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586dbdbb8c-9gbwl" Mar 12 21:24:12.445629 master-0 kubenswrapper[31456]: I0312 21:24:12.445551 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-69h5n"] Mar 12 21:24:12.467237 master-0 kubenswrapper[31456]: I0312 21:24:12.466483 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-t8rdg"] Mar 12 21:24:12.468076 master-0 kubenswrapper[31456]: I0312 21:24:12.467992 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ff8fd9d5c-t8rdg" Mar 12 21:24:12.514383 master-0 kubenswrapper[31456]: I0312 21:24:12.511921 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-t8rdg"] Mar 12 21:24:12.554560 master-0 kubenswrapper[31456]: I0312 21:24:12.554346 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56fd9b51-3ae1-48e0-8966-0e18e5ce9b70-config\") pod \"dnsmasq-dns-6ff8fd9d5c-t8rdg\" (UID: \"56fd9b51-3ae1-48e0-8966-0e18e5ce9b70\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-t8rdg" Mar 12 21:24:12.554560 master-0 kubenswrapper[31456]: I0312 21:24:12.554438 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56fd9b51-3ae1-48e0-8966-0e18e5ce9b70-dns-svc\") pod \"dnsmasq-dns-6ff8fd9d5c-t8rdg\" (UID: \"56fd9b51-3ae1-48e0-8966-0e18e5ce9b70\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-t8rdg" Mar 12 21:24:12.554560 master-0 
kubenswrapper[31456]: I0312 21:24:12.554472 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k78h4\" (UniqueName: \"kubernetes.io/projected/56fd9b51-3ae1-48e0-8966-0e18e5ce9b70-kube-api-access-k78h4\") pod \"dnsmasq-dns-6ff8fd9d5c-t8rdg\" (UID: \"56fd9b51-3ae1-48e0-8966-0e18e5ce9b70\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-t8rdg" Mar 12 21:24:12.656052 master-0 kubenswrapper[31456]: I0312 21:24:12.655906 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56fd9b51-3ae1-48e0-8966-0e18e5ce9b70-config\") pod \"dnsmasq-dns-6ff8fd9d5c-t8rdg\" (UID: \"56fd9b51-3ae1-48e0-8966-0e18e5ce9b70\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-t8rdg" Mar 12 21:24:12.656052 master-0 kubenswrapper[31456]: I0312 21:24:12.656001 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56fd9b51-3ae1-48e0-8966-0e18e5ce9b70-dns-svc\") pod \"dnsmasq-dns-6ff8fd9d5c-t8rdg\" (UID: \"56fd9b51-3ae1-48e0-8966-0e18e5ce9b70\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-t8rdg" Mar 12 21:24:12.656052 master-0 kubenswrapper[31456]: I0312 21:24:12.656035 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k78h4\" (UniqueName: \"kubernetes.io/projected/56fd9b51-3ae1-48e0-8966-0e18e5ce9b70-kube-api-access-k78h4\") pod \"dnsmasq-dns-6ff8fd9d5c-t8rdg\" (UID: \"56fd9b51-3ae1-48e0-8966-0e18e5ce9b70\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-t8rdg" Mar 12 21:24:12.656919 master-0 kubenswrapper[31456]: I0312 21:24:12.656874 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56fd9b51-3ae1-48e0-8966-0e18e5ce9b70-config\") pod \"dnsmasq-dns-6ff8fd9d5c-t8rdg\" (UID: \"56fd9b51-3ae1-48e0-8966-0e18e5ce9b70\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-t8rdg" Mar 12 21:24:12.661166 
master-0 kubenswrapper[31456]: I0312 21:24:12.658225 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56fd9b51-3ae1-48e0-8966-0e18e5ce9b70-dns-svc\") pod \"dnsmasq-dns-6ff8fd9d5c-t8rdg\" (UID: \"56fd9b51-3ae1-48e0-8966-0e18e5ce9b70\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-t8rdg" Mar 12 21:24:12.696243 master-0 kubenswrapper[31456]: I0312 21:24:12.685776 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k78h4\" (UniqueName: \"kubernetes.io/projected/56fd9b51-3ae1-48e0-8966-0e18e5ce9b70-kube-api-access-k78h4\") pod \"dnsmasq-dns-6ff8fd9d5c-t8rdg\" (UID: \"56fd9b51-3ae1-48e0-8966-0e18e5ce9b70\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-t8rdg" Mar 12 21:24:12.747899 master-0 kubenswrapper[31456]: I0312 21:24:12.746758 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586dbdbb8c-9gbwl"] Mar 12 21:24:12.795505 master-0 kubenswrapper[31456]: I0312 21:24:12.795409 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ff8fd9d5c-t8rdg" Mar 12 21:24:13.307364 master-0 kubenswrapper[31456]: I0312 21:24:13.307301 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-t8rdg"] Mar 12 21:24:13.711518 master-0 kubenswrapper[31456]: I0312 21:24:13.710731 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-t8rdg" event={"ID":"56fd9b51-3ae1-48e0-8966-0e18e5ce9b70","Type":"ContainerStarted","Data":"5bf08e0ea69de1e64801266ffaf044c31b5694932859dc1cf44a81242f31638a"} Mar 12 21:24:13.718094 master-0 kubenswrapper[31456]: I0312 21:24:13.713727 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586dbdbb8c-9gbwl" event={"ID":"3db71973-0c81-4806-b0f5-435f08829dcc","Type":"ContainerStarted","Data":"7ce94fb33ccb49a46e3298493b35fe3f090add45fc471287ba4778204dc70d5f"} Mar 12 21:24:15.875862 master-0 kubenswrapper[31456]: I0312 21:24:15.875762 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 12 21:24:15.891050 master-0 kubenswrapper[31456]: I0312 21:24:15.890953 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 12 21:24:15.892676 master-0 kubenswrapper[31456]: I0312 21:24:15.892595 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 12 21:24:15.912967 master-0 kubenswrapper[31456]: I0312 21:24:15.911889 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 12 21:24:15.912967 master-0 kubenswrapper[31456]: I0312 21:24:15.912253 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 12 21:24:15.912967 master-0 kubenswrapper[31456]: I0312 21:24:15.912395 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 12 21:24:15.912967 master-0 kubenswrapper[31456]: I0312 21:24:15.912518 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 12 21:24:15.913493 master-0 kubenswrapper[31456]: I0312 21:24:15.913358 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 12 21:24:15.913830 master-0 kubenswrapper[31456]: I0312 21:24:15.913705 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 12 21:24:16.049779 master-0 kubenswrapper[31456]: I0312 21:24:16.049689 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grzh4\" (UniqueName: \"kubernetes.io/projected/8e067175-5771-473f-85a8-af63a27ee30a-kube-api-access-grzh4\") pod \"rabbitmq-server-0\" (UID: \"8e067175-5771-473f-85a8-af63a27ee30a\") " pod="openstack/rabbitmq-server-0" Mar 12 21:24:16.050039 master-0 kubenswrapper[31456]: I0312 21:24:16.049883 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4b2b24e9-2d8b-4ee6-ba3d-dd7a87219a38\" (UniqueName: 
\"kubernetes.io/csi/topolvm.io^f369ddb7-174b-4666-9d44-2885f783cea6\") pod \"rabbitmq-server-0\" (UID: \"8e067175-5771-473f-85a8-af63a27ee30a\") " pod="openstack/rabbitmq-server-0" Mar 12 21:24:16.050039 master-0 kubenswrapper[31456]: I0312 21:24:16.049944 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8e067175-5771-473f-85a8-af63a27ee30a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8e067175-5771-473f-85a8-af63a27ee30a\") " pod="openstack/rabbitmq-server-0" Mar 12 21:24:16.050255 master-0 kubenswrapper[31456]: I0312 21:24:16.050121 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8e067175-5771-473f-85a8-af63a27ee30a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8e067175-5771-473f-85a8-af63a27ee30a\") " pod="openstack/rabbitmq-server-0" Mar 12 21:24:16.050255 master-0 kubenswrapper[31456]: I0312 21:24:16.050156 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8e067175-5771-473f-85a8-af63a27ee30a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8e067175-5771-473f-85a8-af63a27ee30a\") " pod="openstack/rabbitmq-server-0" Mar 12 21:24:16.050255 master-0 kubenswrapper[31456]: I0312 21:24:16.050181 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8e067175-5771-473f-85a8-af63a27ee30a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8e067175-5771-473f-85a8-af63a27ee30a\") " pod="openstack/rabbitmq-server-0" Mar 12 21:24:16.050255 master-0 kubenswrapper[31456]: I0312 21:24:16.050199 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/8e067175-5771-473f-85a8-af63a27ee30a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8e067175-5771-473f-85a8-af63a27ee30a\") " pod="openstack/rabbitmq-server-0" Mar 12 21:24:16.050255 master-0 kubenswrapper[31456]: I0312 21:24:16.050221 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8e067175-5771-473f-85a8-af63a27ee30a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8e067175-5771-473f-85a8-af63a27ee30a\") " pod="openstack/rabbitmq-server-0" Mar 12 21:24:16.050431 master-0 kubenswrapper[31456]: I0312 21:24:16.050287 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8e067175-5771-473f-85a8-af63a27ee30a-config-data\") pod \"rabbitmq-server-0\" (UID: \"8e067175-5771-473f-85a8-af63a27ee30a\") " pod="openstack/rabbitmq-server-0" Mar 12 21:24:16.050431 master-0 kubenswrapper[31456]: I0312 21:24:16.050307 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8e067175-5771-473f-85a8-af63a27ee30a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8e067175-5771-473f-85a8-af63a27ee30a\") " pod="openstack/rabbitmq-server-0" Mar 12 21:24:16.050553 master-0 kubenswrapper[31456]: I0312 21:24:16.050479 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8e067175-5771-473f-85a8-af63a27ee30a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8e067175-5771-473f-85a8-af63a27ee30a\") " pod="openstack/rabbitmq-server-0" Mar 12 21:24:16.153312 master-0 kubenswrapper[31456]: I0312 21:24:16.153058 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/8e067175-5771-473f-85a8-af63a27ee30a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8e067175-5771-473f-85a8-af63a27ee30a\") " pod="openstack/rabbitmq-server-0" Mar 12 21:24:16.153312 master-0 kubenswrapper[31456]: I0312 21:24:16.153134 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grzh4\" (UniqueName: \"kubernetes.io/projected/8e067175-5771-473f-85a8-af63a27ee30a-kube-api-access-grzh4\") pod \"rabbitmq-server-0\" (UID: \"8e067175-5771-473f-85a8-af63a27ee30a\") " pod="openstack/rabbitmq-server-0" Mar 12 21:24:16.153312 master-0 kubenswrapper[31456]: I0312 21:24:16.153307 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4b2b24e9-2d8b-4ee6-ba3d-dd7a87219a38\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f369ddb7-174b-4666-9d44-2885f783cea6\") pod \"rabbitmq-server-0\" (UID: \"8e067175-5771-473f-85a8-af63a27ee30a\") " pod="openstack/rabbitmq-server-0" Mar 12 21:24:16.153778 master-0 kubenswrapper[31456]: I0312 21:24:16.153332 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8e067175-5771-473f-85a8-af63a27ee30a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8e067175-5771-473f-85a8-af63a27ee30a\") " pod="openstack/rabbitmq-server-0" Mar 12 21:24:16.153778 master-0 kubenswrapper[31456]: I0312 21:24:16.153396 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8e067175-5771-473f-85a8-af63a27ee30a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8e067175-5771-473f-85a8-af63a27ee30a\") " pod="openstack/rabbitmq-server-0" Mar 12 21:24:16.153778 master-0 kubenswrapper[31456]: I0312 21:24:16.153415 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/8e067175-5771-473f-85a8-af63a27ee30a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8e067175-5771-473f-85a8-af63a27ee30a\") " pod="openstack/rabbitmq-server-0" Mar 12 21:24:16.153778 master-0 kubenswrapper[31456]: I0312 21:24:16.153433 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8e067175-5771-473f-85a8-af63a27ee30a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8e067175-5771-473f-85a8-af63a27ee30a\") " pod="openstack/rabbitmq-server-0" Mar 12 21:24:16.153778 master-0 kubenswrapper[31456]: I0312 21:24:16.153450 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8e067175-5771-473f-85a8-af63a27ee30a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8e067175-5771-473f-85a8-af63a27ee30a\") " pod="openstack/rabbitmq-server-0" Mar 12 21:24:16.153778 master-0 kubenswrapper[31456]: I0312 21:24:16.153468 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8e067175-5771-473f-85a8-af63a27ee30a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8e067175-5771-473f-85a8-af63a27ee30a\") " pod="openstack/rabbitmq-server-0" Mar 12 21:24:16.153778 master-0 kubenswrapper[31456]: I0312 21:24:16.153496 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8e067175-5771-473f-85a8-af63a27ee30a-config-data\") pod \"rabbitmq-server-0\" (UID: \"8e067175-5771-473f-85a8-af63a27ee30a\") " pod="openstack/rabbitmq-server-0" Mar 12 21:24:16.153778 master-0 kubenswrapper[31456]: I0312 21:24:16.153513 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8e067175-5771-473f-85a8-af63a27ee30a-plugins-conf\") pod 
\"rabbitmq-server-0\" (UID: \"8e067175-5771-473f-85a8-af63a27ee30a\") " pod="openstack/rabbitmq-server-0" Mar 12 21:24:16.154999 master-0 kubenswrapper[31456]: I0312 21:24:16.154597 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8e067175-5771-473f-85a8-af63a27ee30a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8e067175-5771-473f-85a8-af63a27ee30a\") " pod="openstack/rabbitmq-server-0" Mar 12 21:24:16.155662 master-0 kubenswrapper[31456]: I0312 21:24:16.155636 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8e067175-5771-473f-85a8-af63a27ee30a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8e067175-5771-473f-85a8-af63a27ee30a\") " pod="openstack/rabbitmq-server-0" Mar 12 21:24:16.158640 master-0 kubenswrapper[31456]: I0312 21:24:16.156904 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8e067175-5771-473f-85a8-af63a27ee30a-config-data\") pod \"rabbitmq-server-0\" (UID: \"8e067175-5771-473f-85a8-af63a27ee30a\") " pod="openstack/rabbitmq-server-0" Mar 12 21:24:16.158640 master-0 kubenswrapper[31456]: I0312 21:24:16.157719 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8e067175-5771-473f-85a8-af63a27ee30a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8e067175-5771-473f-85a8-af63a27ee30a\") " pod="openstack/rabbitmq-server-0" Mar 12 21:24:16.158640 master-0 kubenswrapper[31456]: I0312 21:24:16.158219 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8e067175-5771-473f-85a8-af63a27ee30a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8e067175-5771-473f-85a8-af63a27ee30a\") " pod="openstack/rabbitmq-server-0" Mar 12 21:24:16.164044 
master-0 kubenswrapper[31456]: I0312 21:24:16.161439 31456 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 21:24:16.164044 master-0 kubenswrapper[31456]: I0312 21:24:16.161484 31456 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4b2b24e9-2d8b-4ee6-ba3d-dd7a87219a38\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f369ddb7-174b-4666-9d44-2885f783cea6\") pod \"rabbitmq-server-0\" (UID: \"8e067175-5771-473f-85a8-af63a27ee30a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/6418d93f47d9227cd0808a6e0e36c5887c02bfe116bfcd9f914bc63a8a8b5c31/globalmount\"" pod="openstack/rabbitmq-server-0" Mar 12 21:24:16.182461 master-0 kubenswrapper[31456]: I0312 21:24:16.182011 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8e067175-5771-473f-85a8-af63a27ee30a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8e067175-5771-473f-85a8-af63a27ee30a\") " pod="openstack/rabbitmq-server-0" Mar 12 21:24:16.182461 master-0 kubenswrapper[31456]: I0312 21:24:16.182375 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8e067175-5771-473f-85a8-af63a27ee30a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8e067175-5771-473f-85a8-af63a27ee30a\") " pod="openstack/rabbitmq-server-0" Mar 12 21:24:16.183613 master-0 kubenswrapper[31456]: I0312 21:24:16.183460 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8e067175-5771-473f-85a8-af63a27ee30a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8e067175-5771-473f-85a8-af63a27ee30a\") " pod="openstack/rabbitmq-server-0" Mar 12 21:24:16.185061 master-0 kubenswrapper[31456]: I0312 21:24:16.184982 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8e067175-5771-473f-85a8-af63a27ee30a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8e067175-5771-473f-85a8-af63a27ee30a\") " pod="openstack/rabbitmq-server-0" Mar 12 21:24:16.188202 master-0 kubenswrapper[31456]: I0312 21:24:16.188067 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grzh4\" (UniqueName: \"kubernetes.io/projected/8e067175-5771-473f-85a8-af63a27ee30a-kube-api-access-grzh4\") pod \"rabbitmq-server-0\" (UID: \"8e067175-5771-473f-85a8-af63a27ee30a\") " pod="openstack/rabbitmq-server-0" Mar 12 21:24:16.521981 master-0 kubenswrapper[31456]: I0312 21:24:16.520880 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Mar 12 21:24:16.522710 master-0 kubenswrapper[31456]: I0312 21:24:16.522458 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 12 21:24:16.560578 master-0 kubenswrapper[31456]: I0312 21:24:16.560412 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Mar 12 21:24:16.561933 master-0 kubenswrapper[31456]: I0312 21:24:16.561556 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Mar 12 21:24:16.563389 master-0 kubenswrapper[31456]: I0312 21:24:16.563341 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 12 21:24:16.567424 master-0 kubenswrapper[31456]: I0312 21:24:16.567375 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Mar 12 21:24:16.676662 master-0 kubenswrapper[31456]: I0312 21:24:16.676609 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56559631-1206-49f6-8ebe-b5767087ef8e-combined-ca-bundle\") pod \"memcached-0\" (UID: 
\"56559631-1206-49f6-8ebe-b5767087ef8e\") " pod="openstack/memcached-0" Mar 12 21:24:16.676989 master-0 kubenswrapper[31456]: I0312 21:24:16.676750 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/56559631-1206-49f6-8ebe-b5767087ef8e-kolla-config\") pod \"memcached-0\" (UID: \"56559631-1206-49f6-8ebe-b5767087ef8e\") " pod="openstack/memcached-0" Mar 12 21:24:16.676989 master-0 kubenswrapper[31456]: I0312 21:24:16.676968 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hk2d\" (UniqueName: \"kubernetes.io/projected/56559631-1206-49f6-8ebe-b5767087ef8e-kube-api-access-8hk2d\") pod \"memcached-0\" (UID: \"56559631-1206-49f6-8ebe-b5767087ef8e\") " pod="openstack/memcached-0" Mar 12 21:24:16.677145 master-0 kubenswrapper[31456]: I0312 21:24:16.676987 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/56559631-1206-49f6-8ebe-b5767087ef8e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"56559631-1206-49f6-8ebe-b5767087ef8e\") " pod="openstack/memcached-0" Mar 12 21:24:16.677145 master-0 kubenswrapper[31456]: I0312 21:24:16.677008 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/56559631-1206-49f6-8ebe-b5767087ef8e-config-data\") pod \"memcached-0\" (UID: \"56559631-1206-49f6-8ebe-b5767087ef8e\") " pod="openstack/memcached-0" Mar 12 21:24:16.702562 master-0 kubenswrapper[31456]: I0312 21:24:16.702477 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 12 21:24:16.705771 master-0 kubenswrapper[31456]: I0312 21:24:16.705677 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:24:16.709216 master-0 kubenswrapper[31456]: I0312 21:24:16.709133 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 12 21:24:16.709542 master-0 kubenswrapper[31456]: I0312 21:24:16.709327 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 12 21:24:16.709542 master-0 kubenswrapper[31456]: I0312 21:24:16.709456 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 12 21:24:16.710145 master-0 kubenswrapper[31456]: I0312 21:24:16.710126 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 12 21:24:16.710862 master-0 kubenswrapper[31456]: I0312 21:24:16.710422 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 12 21:24:16.710862 master-0 kubenswrapper[31456]: I0312 21:24:16.710470 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 12 21:24:16.729529 master-0 kubenswrapper[31456]: I0312 21:24:16.729466 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 12 21:24:16.783747 master-0 kubenswrapper[31456]: I0312 21:24:16.783498 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56559631-1206-49f6-8ebe-b5767087ef8e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"56559631-1206-49f6-8ebe-b5767087ef8e\") " pod="openstack/memcached-0" Mar 12 21:24:16.784006 master-0 kubenswrapper[31456]: I0312 21:24:16.783726 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/56559631-1206-49f6-8ebe-b5767087ef8e-kolla-config\") 
pod \"memcached-0\" (UID: \"56559631-1206-49f6-8ebe-b5767087ef8e\") " pod="openstack/memcached-0" Mar 12 21:24:16.784006 master-0 kubenswrapper[31456]: I0312 21:24:16.783917 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hk2d\" (UniqueName: \"kubernetes.io/projected/56559631-1206-49f6-8ebe-b5767087ef8e-kube-api-access-8hk2d\") pod \"memcached-0\" (UID: \"56559631-1206-49f6-8ebe-b5767087ef8e\") " pod="openstack/memcached-0" Mar 12 21:24:16.784502 master-0 kubenswrapper[31456]: I0312 21:24:16.784476 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/56559631-1206-49f6-8ebe-b5767087ef8e-kolla-config\") pod \"memcached-0\" (UID: \"56559631-1206-49f6-8ebe-b5767087ef8e\") " pod="openstack/memcached-0" Mar 12 21:24:16.784944 master-0 kubenswrapper[31456]: I0312 21:24:16.783946 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/56559631-1206-49f6-8ebe-b5767087ef8e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"56559631-1206-49f6-8ebe-b5767087ef8e\") " pod="openstack/memcached-0" Mar 12 21:24:16.784944 master-0 kubenswrapper[31456]: I0312 21:24:16.784883 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/56559631-1206-49f6-8ebe-b5767087ef8e-config-data\") pod \"memcached-0\" (UID: \"56559631-1206-49f6-8ebe-b5767087ef8e\") " pod="openstack/memcached-0" Mar 12 21:24:16.788483 master-0 kubenswrapper[31456]: I0312 21:24:16.787100 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/56559631-1206-49f6-8ebe-b5767087ef8e-config-data\") pod \"memcached-0\" (UID: \"56559631-1206-49f6-8ebe-b5767087ef8e\") " pod="openstack/memcached-0" Mar 12 21:24:16.794154 master-0 kubenswrapper[31456]: I0312 
21:24:16.794117 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/56559631-1206-49f6-8ebe-b5767087ef8e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"56559631-1206-49f6-8ebe-b5767087ef8e\") " pod="openstack/memcached-0" Mar 12 21:24:16.818131 master-0 kubenswrapper[31456]: I0312 21:24:16.818080 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56559631-1206-49f6-8ebe-b5767087ef8e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"56559631-1206-49f6-8ebe-b5767087ef8e\") " pod="openstack/memcached-0" Mar 12 21:24:16.825724 master-0 kubenswrapper[31456]: I0312 21:24:16.825609 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hk2d\" (UniqueName: \"kubernetes.io/projected/56559631-1206-49f6-8ebe-b5767087ef8e-kube-api-access-8hk2d\") pod \"memcached-0\" (UID: \"56559631-1206-49f6-8ebe-b5767087ef8e\") " pod="openstack/memcached-0" Mar 12 21:24:16.886821 master-0 kubenswrapper[31456]: I0312 21:24:16.886744 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:24:16.886821 master-0 kubenswrapper[31456]: I0312 21:24:16.886805 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:24:16.887376 master-0 kubenswrapper[31456]: I0312 21:24:16.886916 31456 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-df2f8731-bc77-47c0-b324-da3a445a2e3c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^597f314c-17ff-4730-abe5-aa4000d1c5ed\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:24:16.887376 master-0 kubenswrapper[31456]: I0312 21:24:16.886944 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:24:16.887376 master-0 kubenswrapper[31456]: I0312 21:24:16.887027 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npqhw\" (UniqueName: \"kubernetes.io/projected/1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc-kube-api-access-npqhw\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:24:16.898615 master-0 kubenswrapper[31456]: I0312 21:24:16.887975 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:24:16.898615 master-0 kubenswrapper[31456]: I0312 21:24:16.888233 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 
21:24:16.898615 master-0 kubenswrapper[31456]: I0312 21:24:16.888471 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:24:16.898615 master-0 kubenswrapper[31456]: I0312 21:24:16.888523 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:24:16.898615 master-0 kubenswrapper[31456]: I0312 21:24:16.888554 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:24:16.898615 master-0 kubenswrapper[31456]: I0312 21:24:16.888575 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:24:16.898615 master-0 kubenswrapper[31456]: I0312 21:24:16.893449 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Mar 12 21:24:16.991985 master-0 kubenswrapper[31456]: I0312 21:24:16.991657 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:24:16.991985 master-0 kubenswrapper[31456]: I0312 21:24:16.991776 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npqhw\" (UniqueName: \"kubernetes.io/projected/1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc-kube-api-access-npqhw\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:24:16.991985 master-0 kubenswrapper[31456]: I0312 21:24:16.991860 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:24:16.991985 master-0 kubenswrapper[31456]: I0312 21:24:16.991884 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:24:16.991985 master-0 kubenswrapper[31456]: I0312 21:24:16.991953 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc\") " 
pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:24:16.991985 master-0 kubenswrapper[31456]: I0312 21:24:16.991982 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:24:16.992348 master-0 kubenswrapper[31456]: I0312 21:24:16.992016 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:24:16.992348 master-0 kubenswrapper[31456]: I0312 21:24:16.992037 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:24:16.992348 master-0 kubenswrapper[31456]: I0312 21:24:16.992055 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:24:16.992348 master-0 kubenswrapper[31456]: I0312 21:24:16.992076 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc\") " 
pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:24:16.992348 master-0 kubenswrapper[31456]: I0312 21:24:16.992148 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-df2f8731-bc77-47c0-b324-da3a445a2e3c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^597f314c-17ff-4730-abe5-aa4000d1c5ed\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:24:16.993498 master-0 kubenswrapper[31456]: I0312 21:24:16.993406 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:24:17.003004 master-0 kubenswrapper[31456]: I0312 21:24:16.997107 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:24:17.003004 master-0 kubenswrapper[31456]: I0312 21:24:16.997326 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:24:17.003004 master-0 kubenswrapper[31456]: I0312 21:24:16.997977 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:24:17.003004 
master-0 kubenswrapper[31456]: I0312 21:24:16.998290 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:24:17.004194 master-0 kubenswrapper[31456]: I0312 21:24:17.004013 31456 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 21:24:17.004314 master-0 kubenswrapper[31456]: I0312 21:24:17.004193 31456 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-df2f8731-bc77-47c0-b324-da3a445a2e3c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^597f314c-17ff-4730-abe5-aa4000d1c5ed\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/309b500c11ae16b91b7d0f4c8dbfd162d7c1a4f042d0f717b8866e4bbc477045/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:24:17.005722 master-0 kubenswrapper[31456]: I0312 21:24:17.005672 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:24:17.017886 master-0 kubenswrapper[31456]: I0312 21:24:17.017132 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:24:17.019833 master-0 kubenswrapper[31456]: I0312 21:24:17.019776 31456 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npqhw\" (UniqueName: \"kubernetes.io/projected/1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc-kube-api-access-npqhw\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:24:17.020596 master-0 kubenswrapper[31456]: I0312 21:24:17.020283 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:24:17.032942 master-0 kubenswrapper[31456]: I0312 21:24:17.032875 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:24:17.715932 master-0 kubenswrapper[31456]: I0312 21:24:17.715739 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4b2b24e9-2d8b-4ee6-ba3d-dd7a87219a38\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f369ddb7-174b-4666-9d44-2885f783cea6\") pod \"rabbitmq-server-0\" (UID: \"8e067175-5771-473f-85a8-af63a27ee30a\") " pod="openstack/rabbitmq-server-0" Mar 12 21:24:17.755638 master-0 kubenswrapper[31456]: I0312 21:24:17.755565 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 12 21:24:17.913376 master-0 kubenswrapper[31456]: I0312 21:24:17.913229 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Mar 12 21:24:17.916543 master-0 kubenswrapper[31456]: I0312 21:24:17.916507 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Mar 12 21:24:17.920760 master-0 kubenswrapper[31456]: I0312 21:24:17.920723 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Mar 12 21:24:17.920926 master-0 kubenswrapper[31456]: I0312 21:24:17.920885 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Mar 12 21:24:17.922693 master-0 kubenswrapper[31456]: I0312 21:24:17.922654 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Mar 12 21:24:17.941934 master-0 kubenswrapper[31456]: I0312 21:24:17.938738 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 12 21:24:18.033402 master-0 kubenswrapper[31456]: I0312 21:24:18.031595 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5837dd6c-30f0-4736-a8de-2ddb74041d5e-kolla-config\") pod \"openstack-galera-0\" (UID: \"5837dd6c-30f0-4736-a8de-2ddb74041d5e\") " pod="openstack/openstack-galera-0" Mar 12 21:24:18.033402 master-0 kubenswrapper[31456]: I0312 21:24:18.031701 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5837dd6c-30f0-4736-a8de-2ddb74041d5e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"5837dd6c-30f0-4736-a8de-2ddb74041d5e\") " pod="openstack/openstack-galera-0" Mar 12 21:24:18.033402 master-0 kubenswrapper[31456]: I0312 21:24:18.031764 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5837dd6c-30f0-4736-a8de-2ddb74041d5e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"5837dd6c-30f0-4736-a8de-2ddb74041d5e\") " pod="openstack/openstack-galera-0" Mar 12 
21:24:18.033402 master-0 kubenswrapper[31456]: I0312 21:24:18.032923 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5837dd6c-30f0-4736-a8de-2ddb74041d5e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"5837dd6c-30f0-4736-a8de-2ddb74041d5e\") " pod="openstack/openstack-galera-0" Mar 12 21:24:18.033402 master-0 kubenswrapper[31456]: I0312 21:24:18.033212 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5837dd6c-30f0-4736-a8de-2ddb74041d5e-config-data-default\") pod \"openstack-galera-0\" (UID: \"5837dd6c-30f0-4736-a8de-2ddb74041d5e\") " pod="openstack/openstack-galera-0" Mar 12 21:24:18.033402 master-0 kubenswrapper[31456]: I0312 21:24:18.033297 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w98xg\" (UniqueName: \"kubernetes.io/projected/5837dd6c-30f0-4736-a8de-2ddb74041d5e-kube-api-access-w98xg\") pod \"openstack-galera-0\" (UID: \"5837dd6c-30f0-4736-a8de-2ddb74041d5e\") " pod="openstack/openstack-galera-0" Mar 12 21:24:18.033778 master-0 kubenswrapper[31456]: I0312 21:24:18.033451 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-764c3f45-e53d-4caa-ad31-aa876b53af41\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8edc87be-b89b-4ee9-b1bf-276e268c68d9\") pod \"openstack-galera-0\" (UID: \"5837dd6c-30f0-4736-a8de-2ddb74041d5e\") " pod="openstack/openstack-galera-0" Mar 12 21:24:18.033778 master-0 kubenswrapper[31456]: I0312 21:24:18.033531 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5837dd6c-30f0-4736-a8de-2ddb74041d5e-operator-scripts\") pod \"openstack-galera-0\" (UID: 
\"5837dd6c-30f0-4736-a8de-2ddb74041d5e\") " pod="openstack/openstack-galera-0" Mar 12 21:24:18.139515 master-0 kubenswrapper[31456]: I0312 21:24:18.137542 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5837dd6c-30f0-4736-a8de-2ddb74041d5e-config-data-default\") pod \"openstack-galera-0\" (UID: \"5837dd6c-30f0-4736-a8de-2ddb74041d5e\") " pod="openstack/openstack-galera-0" Mar 12 21:24:18.139515 master-0 kubenswrapper[31456]: I0312 21:24:18.137629 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w98xg\" (UniqueName: \"kubernetes.io/projected/5837dd6c-30f0-4736-a8de-2ddb74041d5e-kube-api-access-w98xg\") pod \"openstack-galera-0\" (UID: \"5837dd6c-30f0-4736-a8de-2ddb74041d5e\") " pod="openstack/openstack-galera-0" Mar 12 21:24:18.139515 master-0 kubenswrapper[31456]: I0312 21:24:18.137689 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-764c3f45-e53d-4caa-ad31-aa876b53af41\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8edc87be-b89b-4ee9-b1bf-276e268c68d9\") pod \"openstack-galera-0\" (UID: \"5837dd6c-30f0-4736-a8de-2ddb74041d5e\") " pod="openstack/openstack-galera-0" Mar 12 21:24:18.139515 master-0 kubenswrapper[31456]: I0312 21:24:18.137727 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5837dd6c-30f0-4736-a8de-2ddb74041d5e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"5837dd6c-30f0-4736-a8de-2ddb74041d5e\") " pod="openstack/openstack-galera-0" Mar 12 21:24:18.139515 master-0 kubenswrapper[31456]: I0312 21:24:18.139441 31456 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 21:24:18.139515 master-0 kubenswrapper[31456]: I0312 21:24:18.139474 31456 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-764c3f45-e53d-4caa-ad31-aa876b53af41\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8edc87be-b89b-4ee9-b1bf-276e268c68d9\") pod \"openstack-galera-0\" (UID: \"5837dd6c-30f0-4736-a8de-2ddb74041d5e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/4f3bc7c5fdc157f3395ea6d5cf20a79d39002b043cf0a373d4385283f8f9737a/globalmount\"" pod="openstack/openstack-galera-0" Mar 12 21:24:18.140291 master-0 kubenswrapper[31456]: I0312 21:24:18.140093 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5837dd6c-30f0-4736-a8de-2ddb74041d5e-config-data-default\") pod \"openstack-galera-0\" (UID: \"5837dd6c-30f0-4736-a8de-2ddb74041d5e\") " pod="openstack/openstack-galera-0" Mar 12 21:24:18.140536 master-0 kubenswrapper[31456]: I0312 21:24:18.140502 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5837dd6c-30f0-4736-a8de-2ddb74041d5e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"5837dd6c-30f0-4736-a8de-2ddb74041d5e\") " pod="openstack/openstack-galera-0" Mar 12 21:24:18.142050 master-0 kubenswrapper[31456]: I0312 21:24:18.142029 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5837dd6c-30f0-4736-a8de-2ddb74041d5e-kolla-config\") pod \"openstack-galera-0\" (UID: \"5837dd6c-30f0-4736-a8de-2ddb74041d5e\") " pod="openstack/openstack-galera-0" Mar 12 21:24:18.142220 master-0 kubenswrapper[31456]: I0312 21:24:18.142201 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5837dd6c-30f0-4736-a8de-2ddb74041d5e-config-data-generated\") pod 
\"openstack-galera-0\" (UID: \"5837dd6c-30f0-4736-a8de-2ddb74041d5e\") " pod="openstack/openstack-galera-0" Mar 12 21:24:18.142359 master-0 kubenswrapper[31456]: I0312 21:24:18.142255 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5837dd6c-30f0-4736-a8de-2ddb74041d5e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"5837dd6c-30f0-4736-a8de-2ddb74041d5e\") " pod="openstack/openstack-galera-0" Mar 12 21:24:18.142359 master-0 kubenswrapper[31456]: I0312 21:24:18.142344 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5837dd6c-30f0-4736-a8de-2ddb74041d5e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"5837dd6c-30f0-4736-a8de-2ddb74041d5e\") " pod="openstack/openstack-galera-0" Mar 12 21:24:18.144203 master-0 kubenswrapper[31456]: I0312 21:24:18.142567 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5837dd6c-30f0-4736-a8de-2ddb74041d5e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"5837dd6c-30f0-4736-a8de-2ddb74041d5e\") " pod="openstack/openstack-galera-0" Mar 12 21:24:18.144203 master-0 kubenswrapper[31456]: I0312 21:24:18.142763 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5837dd6c-30f0-4736-a8de-2ddb74041d5e-kolla-config\") pod \"openstack-galera-0\" (UID: \"5837dd6c-30f0-4736-a8de-2ddb74041d5e\") " pod="openstack/openstack-galera-0" Mar 12 21:24:18.150891 master-0 kubenswrapper[31456]: I0312 21:24:18.150854 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5837dd6c-30f0-4736-a8de-2ddb74041d5e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"5837dd6c-30f0-4736-a8de-2ddb74041d5e\") " 
pod="openstack/openstack-galera-0" Mar 12 21:24:18.155721 master-0 kubenswrapper[31456]: I0312 21:24:18.155433 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5837dd6c-30f0-4736-a8de-2ddb74041d5e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"5837dd6c-30f0-4736-a8de-2ddb74041d5e\") " pod="openstack/openstack-galera-0" Mar 12 21:24:18.160240 master-0 kubenswrapper[31456]: I0312 21:24:18.160185 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w98xg\" (UniqueName: \"kubernetes.io/projected/5837dd6c-30f0-4736-a8de-2ddb74041d5e-kube-api-access-w98xg\") pod \"openstack-galera-0\" (UID: \"5837dd6c-30f0-4736-a8de-2ddb74041d5e\") " pod="openstack/openstack-galera-0" Mar 12 21:24:18.510190 master-0 kubenswrapper[31456]: I0312 21:24:18.510108 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 12 21:24:18.511911 master-0 kubenswrapper[31456]: I0312 21:24:18.511788 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 12 21:24:18.529608 master-0 kubenswrapper[31456]: I0312 21:24:18.529557 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 12 21:24:18.566648 master-0 kubenswrapper[31456]: I0312 21:24:18.561642 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Mar 12 21:24:18.566648 master-0 kubenswrapper[31456]: I0312 21:24:18.562944 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Mar 12 21:24:18.566648 master-0 kubenswrapper[31456]: I0312 21:24:18.563245 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Mar 12 21:24:18.665461 master-0 kubenswrapper[31456]: I0312 21:24:18.664478 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d\") " pod="openstack/openstack-cell1-galera-0" Mar 12 21:24:18.666401 master-0 kubenswrapper[31456]: I0312 21:24:18.665715 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d\") " pod="openstack/openstack-cell1-galera-0" Mar 12 21:24:18.666401 master-0 kubenswrapper[31456]: I0312 21:24:18.665755 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhbqp\" (UniqueName: \"kubernetes.io/projected/4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d-kube-api-access-zhbqp\") pod \"openstack-cell1-galera-0\" (UID: 
\"4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d\") " pod="openstack/openstack-cell1-galera-0" Mar 12 21:24:18.666401 master-0 kubenswrapper[31456]: I0312 21:24:18.665775 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d\") " pod="openstack/openstack-cell1-galera-0" Mar 12 21:24:18.666401 master-0 kubenswrapper[31456]: I0312 21:24:18.665835 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d\") " pod="openstack/openstack-cell1-galera-0" Mar 12 21:24:18.666401 master-0 kubenswrapper[31456]: I0312 21:24:18.665990 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d\") " pod="openstack/openstack-cell1-galera-0" Mar 12 21:24:18.666401 master-0 kubenswrapper[31456]: I0312 21:24:18.666054 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d\") " pod="openstack/openstack-cell1-galera-0" Mar 12 21:24:18.666401 master-0 kubenswrapper[31456]: I0312 21:24:18.666078 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-01da8e42-5520-42ff-83b9-3b8509036b21\" 
(UniqueName: \"kubernetes.io/csi/topolvm.io^e126e683-9136-4f8e-a068-752fa7e96e66\") pod \"openstack-cell1-galera-0\" (UID: \"4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d\") " pod="openstack/openstack-cell1-galera-0" Mar 12 21:24:18.770170 master-0 kubenswrapper[31456]: I0312 21:24:18.770018 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d\") " pod="openstack/openstack-cell1-galera-0" Mar 12 21:24:18.770170 master-0 kubenswrapper[31456]: I0312 21:24:18.770148 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d\") " pod="openstack/openstack-cell1-galera-0" Mar 12 21:24:18.770422 master-0 kubenswrapper[31456]: I0312 21:24:18.770276 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhbqp\" (UniqueName: \"kubernetes.io/projected/4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d-kube-api-access-zhbqp\") pod \"openstack-cell1-galera-0\" (UID: \"4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d\") " pod="openstack/openstack-cell1-galera-0" Mar 12 21:24:18.770422 master-0 kubenswrapper[31456]: I0312 21:24:18.770324 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d\") " pod="openstack/openstack-cell1-galera-0" Mar 12 21:24:18.770422 master-0 kubenswrapper[31456]: I0312 21:24:18.770412 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d\") " pod="openstack/openstack-cell1-galera-0" Mar 12 21:24:18.771640 master-0 kubenswrapper[31456]: I0312 21:24:18.770658 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d\") " pod="openstack/openstack-cell1-galera-0" Mar 12 21:24:18.772197 master-0 kubenswrapper[31456]: I0312 21:24:18.771935 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d\") " pod="openstack/openstack-cell1-galera-0" Mar 12 21:24:18.772197 master-0 kubenswrapper[31456]: I0312 21:24:18.772147 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d\") " pod="openstack/openstack-cell1-galera-0" Mar 12 21:24:18.773729 master-0 kubenswrapper[31456]: I0312 21:24:18.773655 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d\") " pod="openstack/openstack-cell1-galera-0" Mar 12 21:24:18.773729 master-0 kubenswrapper[31456]: I0312 21:24:18.772938 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d\") " pod="openstack/openstack-cell1-galera-0" Mar 12 21:24:18.773940 master-0 kubenswrapper[31456]: I0312 21:24:18.773762 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-01da8e42-5520-42ff-83b9-3b8509036b21\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e126e683-9136-4f8e-a068-752fa7e96e66\") pod \"openstack-cell1-galera-0\" (UID: \"4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d\") " pod="openstack/openstack-cell1-galera-0" Mar 12 21:24:18.775442 master-0 kubenswrapper[31456]: I0312 21:24:18.775382 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d\") " pod="openstack/openstack-cell1-galera-0" Mar 12 21:24:18.775673 master-0 kubenswrapper[31456]: I0312 21:24:18.775590 31456 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 21:24:18.775673 master-0 kubenswrapper[31456]: I0312 21:24:18.775618 31456 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-01da8e42-5520-42ff-83b9-3b8509036b21\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e126e683-9136-4f8e-a068-752fa7e96e66\") pod \"openstack-cell1-galera-0\" (UID: \"4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/65833d79fe3e497c4d6fb3b72a688116f57f111827c5013898dcd929f68f37e6/globalmount\"" pod="openstack/openstack-cell1-galera-0" Mar 12 21:24:18.775847 master-0 kubenswrapper[31456]: I0312 21:24:18.775824 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d\") " pod="openstack/openstack-cell1-galera-0" Mar 12 21:24:18.783160 master-0 kubenswrapper[31456]: I0312 21:24:18.783107 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d\") " pod="openstack/openstack-cell1-galera-0" Mar 12 21:24:18.796440 master-0 kubenswrapper[31456]: I0312 21:24:18.796353 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhbqp\" (UniqueName: \"kubernetes.io/projected/4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d-kube-api-access-zhbqp\") pod \"openstack-cell1-galera-0\" (UID: \"4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d\") " pod="openstack/openstack-cell1-galera-0" Mar 12 21:24:19.130545 master-0 kubenswrapper[31456]: I0312 21:24:19.130491 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-df2f8731-bc77-47c0-b324-da3a445a2e3c\" (UniqueName: 
\"kubernetes.io/csi/topolvm.io^597f314c-17ff-4730-abe5-aa4000d1c5ed\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:24:19.139490 master-0 kubenswrapper[31456]: I0312 21:24:19.139441 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:24:20.133697 master-0 kubenswrapper[31456]: I0312 21:24:20.133627 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-764c3f45-e53d-4caa-ad31-aa876b53af41\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8edc87be-b89b-4ee9-b1bf-276e268c68d9\") pod \"openstack-galera-0\" (UID: \"5837dd6c-30f0-4736-a8de-2ddb74041d5e\") " pod="openstack/openstack-galera-0" Mar 12 21:24:20.360856 master-0 kubenswrapper[31456]: I0312 21:24:20.355113 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 12 21:24:21.167958 master-0 kubenswrapper[31456]: I0312 21:24:21.167788 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-01da8e42-5520-42ff-83b9-3b8509036b21\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e126e683-9136-4f8e-a068-752fa7e96e66\") pod \"openstack-cell1-galera-0\" (UID: \"4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d\") " pod="openstack/openstack-cell1-galera-0" Mar 12 21:24:21.291758 master-0 kubenswrapper[31456]: I0312 21:24:21.291691 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 12 21:24:22.331951 master-0 kubenswrapper[31456]: I0312 21:24:22.331867 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-b7rpf"] Mar 12 21:24:22.333437 master-0 kubenswrapper[31456]: I0312 21:24:22.333385 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-b7rpf" Mar 12 21:24:22.345837 master-0 kubenswrapper[31456]: I0312 21:24:22.343472 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-rdl65"] Mar 12 21:24:22.354848 master-0 kubenswrapper[31456]: I0312 21:24:22.346123 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-rdl65" Mar 12 21:24:22.361885 master-0 kubenswrapper[31456]: I0312 21:24:22.360198 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-b7rpf"] Mar 12 21:24:22.361885 master-0 kubenswrapper[31456]: I0312 21:24:22.360509 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Mar 12 21:24:22.361885 master-0 kubenswrapper[31456]: I0312 21:24:22.360718 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Mar 12 21:24:22.383325 master-0 kubenswrapper[31456]: I0312 21:24:22.381903 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rdl65"] Mar 12 21:24:22.484583 master-0 kubenswrapper[31456]: I0312 21:24:22.484513 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/87ceb3d2-d1d8-42f6-9867-b5450da8a9f7-var-log\") pod \"ovn-controller-ovs-rdl65\" (UID: \"87ceb3d2-d1d8-42f6-9867-b5450da8a9f7\") " pod="openstack/ovn-controller-ovs-rdl65" Mar 12 21:24:22.484583 master-0 kubenswrapper[31456]: I0312 21:24:22.484572 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/87ceb3d2-d1d8-42f6-9867-b5450da8a9f7-var-lib\") pod \"ovn-controller-ovs-rdl65\" (UID: \"87ceb3d2-d1d8-42f6-9867-b5450da8a9f7\") " pod="openstack/ovn-controller-ovs-rdl65" Mar 12 21:24:22.484583 master-0 kubenswrapper[31456]: I0312 
21:24:22.484600 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2fb848ef-b2bf-429a-a01f-53240dc3bd0a-var-log-ovn\") pod \"ovn-controller-b7rpf\" (UID: \"2fb848ef-b2bf-429a-a01f-53240dc3bd0a\") " pod="openstack/ovn-controller-b7rpf" Mar 12 21:24:22.486902 master-0 kubenswrapper[31456]: I0312 21:24:22.486599 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2fb848ef-b2bf-429a-a01f-53240dc3bd0a-scripts\") pod \"ovn-controller-b7rpf\" (UID: \"2fb848ef-b2bf-429a-a01f-53240dc3bd0a\") " pod="openstack/ovn-controller-b7rpf" Mar 12 21:24:22.486902 master-0 kubenswrapper[31456]: I0312 21:24:22.486677 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvz6s\" (UniqueName: \"kubernetes.io/projected/2fb848ef-b2bf-429a-a01f-53240dc3bd0a-kube-api-access-lvz6s\") pod \"ovn-controller-b7rpf\" (UID: \"2fb848ef-b2bf-429a-a01f-53240dc3bd0a\") " pod="openstack/ovn-controller-b7rpf" Mar 12 21:24:22.486902 master-0 kubenswrapper[31456]: I0312 21:24:22.486776 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/87ceb3d2-d1d8-42f6-9867-b5450da8a9f7-etc-ovs\") pod \"ovn-controller-ovs-rdl65\" (UID: \"87ceb3d2-d1d8-42f6-9867-b5450da8a9f7\") " pod="openstack/ovn-controller-ovs-rdl65" Mar 12 21:24:22.487215 master-0 kubenswrapper[31456]: I0312 21:24:22.486980 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxpw5\" (UniqueName: \"kubernetes.io/projected/87ceb3d2-d1d8-42f6-9867-b5450da8a9f7-kube-api-access-rxpw5\") pod \"ovn-controller-ovs-rdl65\" (UID: \"87ceb3d2-d1d8-42f6-9867-b5450da8a9f7\") " pod="openstack/ovn-controller-ovs-rdl65" Mar 12 
21:24:22.487215 master-0 kubenswrapper[31456]: I0312 21:24:22.487015 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fb848ef-b2bf-429a-a01f-53240dc3bd0a-ovn-controller-tls-certs\") pod \"ovn-controller-b7rpf\" (UID: \"2fb848ef-b2bf-429a-a01f-53240dc3bd0a\") " pod="openstack/ovn-controller-b7rpf" Mar 12 21:24:22.487215 master-0 kubenswrapper[31456]: I0312 21:24:22.487040 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2fb848ef-b2bf-429a-a01f-53240dc3bd0a-var-run-ovn\") pod \"ovn-controller-b7rpf\" (UID: \"2fb848ef-b2bf-429a-a01f-53240dc3bd0a\") " pod="openstack/ovn-controller-b7rpf" Mar 12 21:24:22.487215 master-0 kubenswrapper[31456]: I0312 21:24:22.487097 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fb848ef-b2bf-429a-a01f-53240dc3bd0a-combined-ca-bundle\") pod \"ovn-controller-b7rpf\" (UID: \"2fb848ef-b2bf-429a-a01f-53240dc3bd0a\") " pod="openstack/ovn-controller-b7rpf" Mar 12 21:24:22.487215 master-0 kubenswrapper[31456]: I0312 21:24:22.487166 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/87ceb3d2-d1d8-42f6-9867-b5450da8a9f7-var-run\") pod \"ovn-controller-ovs-rdl65\" (UID: \"87ceb3d2-d1d8-42f6-9867-b5450da8a9f7\") " pod="openstack/ovn-controller-ovs-rdl65" Mar 12 21:24:22.487411 master-0 kubenswrapper[31456]: I0312 21:24:22.487257 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/87ceb3d2-d1d8-42f6-9867-b5450da8a9f7-scripts\") pod \"ovn-controller-ovs-rdl65\" (UID: \"87ceb3d2-d1d8-42f6-9867-b5450da8a9f7\") " 
pod="openstack/ovn-controller-ovs-rdl65" Mar 12 21:24:22.487411 master-0 kubenswrapper[31456]: I0312 21:24:22.487373 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2fb848ef-b2bf-429a-a01f-53240dc3bd0a-var-run\") pod \"ovn-controller-b7rpf\" (UID: \"2fb848ef-b2bf-429a-a01f-53240dc3bd0a\") " pod="openstack/ovn-controller-b7rpf" Mar 12 21:24:22.589008 master-0 kubenswrapper[31456]: I0312 21:24:22.588723 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2fb848ef-b2bf-429a-a01f-53240dc3bd0a-var-run\") pod \"ovn-controller-b7rpf\" (UID: \"2fb848ef-b2bf-429a-a01f-53240dc3bd0a\") " pod="openstack/ovn-controller-b7rpf" Mar 12 21:24:22.589008 master-0 kubenswrapper[31456]: I0312 21:24:22.588800 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/87ceb3d2-d1d8-42f6-9867-b5450da8a9f7-var-log\") pod \"ovn-controller-ovs-rdl65\" (UID: \"87ceb3d2-d1d8-42f6-9867-b5450da8a9f7\") " pod="openstack/ovn-controller-ovs-rdl65" Mar 12 21:24:22.589008 master-0 kubenswrapper[31456]: I0312 21:24:22.588870 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/87ceb3d2-d1d8-42f6-9867-b5450da8a9f7-var-lib\") pod \"ovn-controller-ovs-rdl65\" (UID: \"87ceb3d2-d1d8-42f6-9867-b5450da8a9f7\") " pod="openstack/ovn-controller-ovs-rdl65" Mar 12 21:24:22.589008 master-0 kubenswrapper[31456]: I0312 21:24:22.588893 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2fb848ef-b2bf-429a-a01f-53240dc3bd0a-var-log-ovn\") pod \"ovn-controller-b7rpf\" (UID: \"2fb848ef-b2bf-429a-a01f-53240dc3bd0a\") " pod="openstack/ovn-controller-b7rpf" Mar 12 21:24:22.589413 master-0 
kubenswrapper[31456]: I0312 21:24:22.589353 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/87ceb3d2-d1d8-42f6-9867-b5450da8a9f7-var-log\") pod \"ovn-controller-ovs-rdl65\" (UID: \"87ceb3d2-d1d8-42f6-9867-b5450da8a9f7\") " pod="openstack/ovn-controller-ovs-rdl65" Mar 12 21:24:22.589468 master-0 kubenswrapper[31456]: I0312 21:24:22.589402 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2fb848ef-b2bf-429a-a01f-53240dc3bd0a-var-run\") pod \"ovn-controller-b7rpf\" (UID: \"2fb848ef-b2bf-429a-a01f-53240dc3bd0a\") " pod="openstack/ovn-controller-b7rpf" Mar 12 21:24:22.589468 master-0 kubenswrapper[31456]: I0312 21:24:22.589421 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2fb848ef-b2bf-429a-a01f-53240dc3bd0a-scripts\") pod \"ovn-controller-b7rpf\" (UID: \"2fb848ef-b2bf-429a-a01f-53240dc3bd0a\") " pod="openstack/ovn-controller-b7rpf" Mar 12 21:24:22.589580 master-0 kubenswrapper[31456]: I0312 21:24:22.589512 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvz6s\" (UniqueName: \"kubernetes.io/projected/2fb848ef-b2bf-429a-a01f-53240dc3bd0a-kube-api-access-lvz6s\") pod \"ovn-controller-b7rpf\" (UID: \"2fb848ef-b2bf-429a-a01f-53240dc3bd0a\") " pod="openstack/ovn-controller-b7rpf" Mar 12 21:24:22.589580 master-0 kubenswrapper[31456]: I0312 21:24:22.589570 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/87ceb3d2-d1d8-42f6-9867-b5450da8a9f7-etc-ovs\") pod \"ovn-controller-ovs-rdl65\" (UID: \"87ceb3d2-d1d8-42f6-9867-b5450da8a9f7\") " pod="openstack/ovn-controller-ovs-rdl65" Mar 12 21:24:22.589828 master-0 kubenswrapper[31456]: I0312 21:24:22.589667 31456 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-rxpw5\" (UniqueName: \"kubernetes.io/projected/87ceb3d2-d1d8-42f6-9867-b5450da8a9f7-kube-api-access-rxpw5\") pod \"ovn-controller-ovs-rdl65\" (UID: \"87ceb3d2-d1d8-42f6-9867-b5450da8a9f7\") " pod="openstack/ovn-controller-ovs-rdl65" Mar 12 21:24:22.589895 master-0 kubenswrapper[31456]: I0312 21:24:22.589864 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fb848ef-b2bf-429a-a01f-53240dc3bd0a-ovn-controller-tls-certs\") pod \"ovn-controller-b7rpf\" (UID: \"2fb848ef-b2bf-429a-a01f-53240dc3bd0a\") " pod="openstack/ovn-controller-b7rpf" Mar 12 21:24:22.590534 master-0 kubenswrapper[31456]: I0312 21:24:22.589876 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/87ceb3d2-d1d8-42f6-9867-b5450da8a9f7-etc-ovs\") pod \"ovn-controller-ovs-rdl65\" (UID: \"87ceb3d2-d1d8-42f6-9867-b5450da8a9f7\") " pod="openstack/ovn-controller-ovs-rdl65" Mar 12 21:24:22.590534 master-0 kubenswrapper[31456]: I0312 21:24:22.589929 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2fb848ef-b2bf-429a-a01f-53240dc3bd0a-var-run-ovn\") pod \"ovn-controller-b7rpf\" (UID: \"2fb848ef-b2bf-429a-a01f-53240dc3bd0a\") " pod="openstack/ovn-controller-b7rpf" Mar 12 21:24:22.590534 master-0 kubenswrapper[31456]: I0312 21:24:22.590085 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2fb848ef-b2bf-429a-a01f-53240dc3bd0a-var-run-ovn\") pod \"ovn-controller-b7rpf\" (UID: \"2fb848ef-b2bf-429a-a01f-53240dc3bd0a\") " pod="openstack/ovn-controller-b7rpf" Mar 12 21:24:22.590534 master-0 kubenswrapper[31456]: I0312 21:24:22.590148 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2fb848ef-b2bf-429a-a01f-53240dc3bd0a-combined-ca-bundle\") pod \"ovn-controller-b7rpf\" (UID: \"2fb848ef-b2bf-429a-a01f-53240dc3bd0a\") " pod="openstack/ovn-controller-b7rpf" Mar 12 21:24:22.590534 master-0 kubenswrapper[31456]: I0312 21:24:22.590213 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/87ceb3d2-d1d8-42f6-9867-b5450da8a9f7-var-lib\") pod \"ovn-controller-ovs-rdl65\" (UID: \"87ceb3d2-d1d8-42f6-9867-b5450da8a9f7\") " pod="openstack/ovn-controller-ovs-rdl65" Mar 12 21:24:22.590534 master-0 kubenswrapper[31456]: I0312 21:24:22.590245 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/87ceb3d2-d1d8-42f6-9867-b5450da8a9f7-var-run\") pod \"ovn-controller-ovs-rdl65\" (UID: \"87ceb3d2-d1d8-42f6-9867-b5450da8a9f7\") " pod="openstack/ovn-controller-ovs-rdl65" Mar 12 21:24:22.590534 master-0 kubenswrapper[31456]: I0312 21:24:22.590365 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/87ceb3d2-d1d8-42f6-9867-b5450da8a9f7-scripts\") pod \"ovn-controller-ovs-rdl65\" (UID: \"87ceb3d2-d1d8-42f6-9867-b5450da8a9f7\") " pod="openstack/ovn-controller-ovs-rdl65" Mar 12 21:24:22.590534 master-0 kubenswrapper[31456]: I0312 21:24:22.590479 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2fb848ef-b2bf-429a-a01f-53240dc3bd0a-var-log-ovn\") pod \"ovn-controller-b7rpf\" (UID: \"2fb848ef-b2bf-429a-a01f-53240dc3bd0a\") " pod="openstack/ovn-controller-b7rpf" Mar 12 21:24:22.590534 master-0 kubenswrapper[31456]: I0312 21:24:22.590542 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/87ceb3d2-d1d8-42f6-9867-b5450da8a9f7-var-run\") pod \"ovn-controller-ovs-rdl65\" (UID: 
\"87ceb3d2-d1d8-42f6-9867-b5450da8a9f7\") " pod="openstack/ovn-controller-ovs-rdl65" Mar 12 21:24:22.591452 master-0 kubenswrapper[31456]: I0312 21:24:22.591429 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2fb848ef-b2bf-429a-a01f-53240dc3bd0a-scripts\") pod \"ovn-controller-b7rpf\" (UID: \"2fb848ef-b2bf-429a-a01f-53240dc3bd0a\") " pod="openstack/ovn-controller-b7rpf" Mar 12 21:24:22.593859 master-0 kubenswrapper[31456]: I0312 21:24:22.593182 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/87ceb3d2-d1d8-42f6-9867-b5450da8a9f7-scripts\") pod \"ovn-controller-ovs-rdl65\" (UID: \"87ceb3d2-d1d8-42f6-9867-b5450da8a9f7\") " pod="openstack/ovn-controller-ovs-rdl65" Mar 12 21:24:22.595509 master-0 kubenswrapper[31456]: I0312 21:24:22.595283 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fb848ef-b2bf-429a-a01f-53240dc3bd0a-combined-ca-bundle\") pod \"ovn-controller-b7rpf\" (UID: \"2fb848ef-b2bf-429a-a01f-53240dc3bd0a\") " pod="openstack/ovn-controller-b7rpf" Mar 12 21:24:22.596239 master-0 kubenswrapper[31456]: I0312 21:24:22.596178 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fb848ef-b2bf-429a-a01f-53240dc3bd0a-ovn-controller-tls-certs\") pod \"ovn-controller-b7rpf\" (UID: \"2fb848ef-b2bf-429a-a01f-53240dc3bd0a\") " pod="openstack/ovn-controller-b7rpf" Mar 12 21:24:22.832322 master-0 kubenswrapper[31456]: I0312 21:24:22.832262 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxpw5\" (UniqueName: \"kubernetes.io/projected/87ceb3d2-d1d8-42f6-9867-b5450da8a9f7-kube-api-access-rxpw5\") pod \"ovn-controller-ovs-rdl65\" (UID: \"87ceb3d2-d1d8-42f6-9867-b5450da8a9f7\") " 
pod="openstack/ovn-controller-ovs-rdl65" Mar 12 21:24:22.836926 master-0 kubenswrapper[31456]: I0312 21:24:22.836874 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvz6s\" (UniqueName: \"kubernetes.io/projected/2fb848ef-b2bf-429a-a01f-53240dc3bd0a-kube-api-access-lvz6s\") pod \"ovn-controller-b7rpf\" (UID: \"2fb848ef-b2bf-429a-a01f-53240dc3bd0a\") " pod="openstack/ovn-controller-b7rpf" Mar 12 21:24:22.984135 master-0 kubenswrapper[31456]: I0312 21:24:22.983991 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-b7rpf" Mar 12 21:24:23.012887 master-0 kubenswrapper[31456]: I0312 21:24:23.012837 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-rdl65" Mar 12 21:24:23.502748 master-0 kubenswrapper[31456]: I0312 21:24:23.502004 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 12 21:24:23.503840 master-0 kubenswrapper[31456]: I0312 21:24:23.503787 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 12 21:24:23.507352 master-0 kubenswrapper[31456]: I0312 21:24:23.506628 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Mar 12 21:24:23.507352 master-0 kubenswrapper[31456]: I0312 21:24:23.506777 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Mar 12 21:24:23.507352 master-0 kubenswrapper[31456]: I0312 21:24:23.506929 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Mar 12 21:24:23.507352 master-0 kubenswrapper[31456]: I0312 21:24:23.507048 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Mar 12 21:24:23.517064 master-0 kubenswrapper[31456]: I0312 21:24:23.516914 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 12 21:24:23.616081 master-0 kubenswrapper[31456]: I0312 21:24:23.616004 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/565a1656-5522-446c-95c9-b5cf8218dfef-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"565a1656-5522-446c-95c9-b5cf8218dfef\") " pod="openstack/ovsdbserver-nb-0" Mar 12 21:24:23.616081 master-0 kubenswrapper[31456]: I0312 21:24:23.616075 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/565a1656-5522-446c-95c9-b5cf8218dfef-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"565a1656-5522-446c-95c9-b5cf8218dfef\") " pod="openstack/ovsdbserver-nb-0" Mar 12 21:24:23.616354 master-0 kubenswrapper[31456]: I0312 21:24:23.616167 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9r4d\" (UniqueName: 
\"kubernetes.io/projected/565a1656-5522-446c-95c9-b5cf8218dfef-kube-api-access-z9r4d\") pod \"ovsdbserver-nb-0\" (UID: \"565a1656-5522-446c-95c9-b5cf8218dfef\") " pod="openstack/ovsdbserver-nb-0" Mar 12 21:24:23.616354 master-0 kubenswrapper[31456]: I0312 21:24:23.616196 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/565a1656-5522-446c-95c9-b5cf8218dfef-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"565a1656-5522-446c-95c9-b5cf8218dfef\") " pod="openstack/ovsdbserver-nb-0" Mar 12 21:24:23.616354 master-0 kubenswrapper[31456]: I0312 21:24:23.616257 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9ce9782f-2b42-4492-89e9-1fe61e3acf67\" (UniqueName: \"kubernetes.io/csi/topolvm.io^cd84f757-916b-4cec-a357-c1d902242cbd\") pod \"ovsdbserver-nb-0\" (UID: \"565a1656-5522-446c-95c9-b5cf8218dfef\") " pod="openstack/ovsdbserver-nb-0" Mar 12 21:24:23.616354 master-0 kubenswrapper[31456]: I0312 21:24:23.616296 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/565a1656-5522-446c-95c9-b5cf8218dfef-config\") pod \"ovsdbserver-nb-0\" (UID: \"565a1656-5522-446c-95c9-b5cf8218dfef\") " pod="openstack/ovsdbserver-nb-0" Mar 12 21:24:23.616354 master-0 kubenswrapper[31456]: I0312 21:24:23.616325 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/565a1656-5522-446c-95c9-b5cf8218dfef-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"565a1656-5522-446c-95c9-b5cf8218dfef\") " pod="openstack/ovsdbserver-nb-0" Mar 12 21:24:23.616354 master-0 kubenswrapper[31456]: I0312 21:24:23.616353 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/565a1656-5522-446c-95c9-b5cf8218dfef-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"565a1656-5522-446c-95c9-b5cf8218dfef\") " pod="openstack/ovsdbserver-nb-0" Mar 12 21:24:23.719095 master-0 kubenswrapper[31456]: I0312 21:24:23.719001 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9r4d\" (UniqueName: \"kubernetes.io/projected/565a1656-5522-446c-95c9-b5cf8218dfef-kube-api-access-z9r4d\") pod \"ovsdbserver-nb-0\" (UID: \"565a1656-5522-446c-95c9-b5cf8218dfef\") " pod="openstack/ovsdbserver-nb-0" Mar 12 21:24:23.719335 master-0 kubenswrapper[31456]: I0312 21:24:23.719157 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/565a1656-5522-446c-95c9-b5cf8218dfef-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"565a1656-5522-446c-95c9-b5cf8218dfef\") " pod="openstack/ovsdbserver-nb-0" Mar 12 21:24:23.719950 master-0 kubenswrapper[31456]: I0312 21:24:23.719396 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9ce9782f-2b42-4492-89e9-1fe61e3acf67\" (UniqueName: \"kubernetes.io/csi/topolvm.io^cd84f757-916b-4cec-a357-c1d902242cbd\") pod \"ovsdbserver-nb-0\" (UID: \"565a1656-5522-446c-95c9-b5cf8218dfef\") " pod="openstack/ovsdbserver-nb-0" Mar 12 21:24:23.719950 master-0 kubenswrapper[31456]: I0312 21:24:23.719436 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/565a1656-5522-446c-95c9-b5cf8218dfef-config\") pod \"ovsdbserver-nb-0\" (UID: \"565a1656-5522-446c-95c9-b5cf8218dfef\") " pod="openstack/ovsdbserver-nb-0" Mar 12 21:24:23.720051 master-0 kubenswrapper[31456]: I0312 21:24:23.720012 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/565a1656-5522-446c-95c9-b5cf8218dfef-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"565a1656-5522-446c-95c9-b5cf8218dfef\") " pod="openstack/ovsdbserver-nb-0" Mar 12 21:24:23.720273 master-0 kubenswrapper[31456]: I0312 21:24:23.720229 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/565a1656-5522-446c-95c9-b5cf8218dfef-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"565a1656-5522-446c-95c9-b5cf8218dfef\") " pod="openstack/ovsdbserver-nb-0" Mar 12 21:24:23.721074 master-0 kubenswrapper[31456]: I0312 21:24:23.720793 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/565a1656-5522-446c-95c9-b5cf8218dfef-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"565a1656-5522-446c-95c9-b5cf8218dfef\") " pod="openstack/ovsdbserver-nb-0" Mar 12 21:24:23.721257 master-0 kubenswrapper[31456]: I0312 21:24:23.721222 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/565a1656-5522-446c-95c9-b5cf8218dfef-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"565a1656-5522-446c-95c9-b5cf8218dfef\") " pod="openstack/ovsdbserver-nb-0" Mar 12 21:24:23.725436 master-0 kubenswrapper[31456]: I0312 21:24:23.722024 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/565a1656-5522-446c-95c9-b5cf8218dfef-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"565a1656-5522-446c-95c9-b5cf8218dfef\") " pod="openstack/ovsdbserver-nb-0" Mar 12 21:24:23.725436 master-0 kubenswrapper[31456]: I0312 21:24:23.722762 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/565a1656-5522-446c-95c9-b5cf8218dfef-scripts\") pod \"ovsdbserver-nb-0\" (UID: 
\"565a1656-5522-446c-95c9-b5cf8218dfef\") " pod="openstack/ovsdbserver-nb-0" Mar 12 21:24:23.725436 master-0 kubenswrapper[31456]: I0312 21:24:23.723848 31456 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 21:24:23.725436 master-0 kubenswrapper[31456]: I0312 21:24:23.723887 31456 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9ce9782f-2b42-4492-89e9-1fe61e3acf67\" (UniqueName: \"kubernetes.io/csi/topolvm.io^cd84f757-916b-4cec-a357-c1d902242cbd\") pod \"ovsdbserver-nb-0\" (UID: \"565a1656-5522-446c-95c9-b5cf8218dfef\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/d678c4d6b82a6023f91503bc3ea18a8cecbe1d206493999280bb0a5479d50cb8/globalmount\"" pod="openstack/ovsdbserver-nb-0" Mar 12 21:24:23.725436 master-0 kubenswrapper[31456]: I0312 21:24:23.724486 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/565a1656-5522-446c-95c9-b5cf8218dfef-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"565a1656-5522-446c-95c9-b5cf8218dfef\") " pod="openstack/ovsdbserver-nb-0" Mar 12 21:24:23.726255 master-0 kubenswrapper[31456]: I0312 21:24:23.726026 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/565a1656-5522-446c-95c9-b5cf8218dfef-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"565a1656-5522-446c-95c9-b5cf8218dfef\") " pod="openstack/ovsdbserver-nb-0" Mar 12 21:24:23.730459 master-0 kubenswrapper[31456]: I0312 21:24:23.728159 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/565a1656-5522-446c-95c9-b5cf8218dfef-config\") pod \"ovsdbserver-nb-0\" (UID: \"565a1656-5522-446c-95c9-b5cf8218dfef\") " pod="openstack/ovsdbserver-nb-0" Mar 12 21:24:23.731851 master-0 
kubenswrapper[31456]: I0312 21:24:23.731759 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/565a1656-5522-446c-95c9-b5cf8218dfef-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"565a1656-5522-446c-95c9-b5cf8218dfef\") " pod="openstack/ovsdbserver-nb-0" Mar 12 21:24:23.738664 master-0 kubenswrapper[31456]: I0312 21:24:23.738591 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9r4d\" (UniqueName: \"kubernetes.io/projected/565a1656-5522-446c-95c9-b5cf8218dfef-kube-api-access-z9r4d\") pod \"ovsdbserver-nb-0\" (UID: \"565a1656-5522-446c-95c9-b5cf8218dfef\") " pod="openstack/ovsdbserver-nb-0" Mar 12 21:24:25.163908 master-0 kubenswrapper[31456]: I0312 21:24:25.162547 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9ce9782f-2b42-4492-89e9-1fe61e3acf67\" (UniqueName: \"kubernetes.io/csi/topolvm.io^cd84f757-916b-4cec-a357-c1d902242cbd\") pod \"ovsdbserver-nb-0\" (UID: \"565a1656-5522-446c-95c9-b5cf8218dfef\") " pod="openstack/ovsdbserver-nb-0" Mar 12 21:24:25.348080 master-0 kubenswrapper[31456]: I0312 21:24:25.347220 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 12 21:24:27.152183 master-0 kubenswrapper[31456]: I0312 21:24:27.151598 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 12 21:24:27.160340 master-0 kubenswrapper[31456]: I0312 21:24:27.160287 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 12 21:24:27.166164 master-0 kubenswrapper[31456]: I0312 21:24:27.166102 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Mar 12 21:24:27.167864 master-0 kubenswrapper[31456]: I0312 21:24:27.167797 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Mar 12 21:24:27.170905 master-0 kubenswrapper[31456]: I0312 21:24:27.170863 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Mar 12 21:24:27.203138 master-0 kubenswrapper[31456]: I0312 21:24:27.203064 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 12 21:24:27.223688 master-0 kubenswrapper[31456]: I0312 21:24:27.223619 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b478fbf3-ea22-4c10-b254-6423457cc8dd-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"b478fbf3-ea22-4c10-b254-6423457cc8dd\") " pod="openstack/ovsdbserver-sb-0" Mar 12 21:24:27.224646 master-0 kubenswrapper[31456]: I0312 21:24:27.224604 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whkw9\" (UniqueName: \"kubernetes.io/projected/b478fbf3-ea22-4c10-b254-6423457cc8dd-kube-api-access-whkw9\") pod \"ovsdbserver-sb-0\" (UID: \"b478fbf3-ea22-4c10-b254-6423457cc8dd\") " pod="openstack/ovsdbserver-sb-0" Mar 12 21:24:27.224878 master-0 kubenswrapper[31456]: I0312 21:24:27.224859 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b478fbf3-ea22-4c10-b254-6423457cc8dd-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b478fbf3-ea22-4c10-b254-6423457cc8dd\") " pod="openstack/ovsdbserver-sb-0" 
Mar 12 21:24:27.225023 master-0 kubenswrapper[31456]: I0312 21:24:27.224937 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b478fbf3-ea22-4c10-b254-6423457cc8dd-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b478fbf3-ea22-4c10-b254-6423457cc8dd\") " pod="openstack/ovsdbserver-sb-0" Mar 12 21:24:27.225077 master-0 kubenswrapper[31456]: I0312 21:24:27.225040 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b478fbf3-ea22-4c10-b254-6423457cc8dd-config\") pod \"ovsdbserver-sb-0\" (UID: \"b478fbf3-ea22-4c10-b254-6423457cc8dd\") " pod="openstack/ovsdbserver-sb-0" Mar 12 21:24:27.225121 master-0 kubenswrapper[31456]: I0312 21:24:27.225081 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b478fbf3-ea22-4c10-b254-6423457cc8dd-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"b478fbf3-ea22-4c10-b254-6423457cc8dd\") " pod="openstack/ovsdbserver-sb-0" Mar 12 21:24:27.225233 master-0 kubenswrapper[31456]: I0312 21:24:27.225190 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b478fbf3-ea22-4c10-b254-6423457cc8dd-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"b478fbf3-ea22-4c10-b254-6423457cc8dd\") " pod="openstack/ovsdbserver-sb-0" Mar 12 21:24:27.225546 master-0 kubenswrapper[31456]: I0312 21:24:27.225525 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-41dbc1f4-0ad7-4af9-aa1c-02c666601a95\" (UniqueName: \"kubernetes.io/csi/topolvm.io^60371fe3-af44-4a58-a4a0-98fee5edefbe\") pod \"ovsdbserver-sb-0\" (UID: \"b478fbf3-ea22-4c10-b254-6423457cc8dd\") " pod="openstack/ovsdbserver-sb-0" Mar 
12 21:24:27.327274 master-0 kubenswrapper[31456]: I0312 21:24:27.327224 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-41dbc1f4-0ad7-4af9-aa1c-02c666601a95\" (UniqueName: \"kubernetes.io/csi/topolvm.io^60371fe3-af44-4a58-a4a0-98fee5edefbe\") pod \"ovsdbserver-sb-0\" (UID: \"b478fbf3-ea22-4c10-b254-6423457cc8dd\") " pod="openstack/ovsdbserver-sb-0" Mar 12 21:24:27.327595 master-0 kubenswrapper[31456]: I0312 21:24:27.327338 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b478fbf3-ea22-4c10-b254-6423457cc8dd-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"b478fbf3-ea22-4c10-b254-6423457cc8dd\") " pod="openstack/ovsdbserver-sb-0" Mar 12 21:24:27.327595 master-0 kubenswrapper[31456]: I0312 21:24:27.327376 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whkw9\" (UniqueName: \"kubernetes.io/projected/b478fbf3-ea22-4c10-b254-6423457cc8dd-kube-api-access-whkw9\") pod \"ovsdbserver-sb-0\" (UID: \"b478fbf3-ea22-4c10-b254-6423457cc8dd\") " pod="openstack/ovsdbserver-sb-0" Mar 12 21:24:27.327595 master-0 kubenswrapper[31456]: I0312 21:24:27.327426 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b478fbf3-ea22-4c10-b254-6423457cc8dd-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b478fbf3-ea22-4c10-b254-6423457cc8dd\") " pod="openstack/ovsdbserver-sb-0" Mar 12 21:24:27.328100 master-0 kubenswrapper[31456]: I0312 21:24:27.328059 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b478fbf3-ea22-4c10-b254-6423457cc8dd-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"b478fbf3-ea22-4c10-b254-6423457cc8dd\") " pod="openstack/ovsdbserver-sb-0" Mar 12 21:24:27.331586 master-0 kubenswrapper[31456]: I0312 21:24:27.327514 
31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b478fbf3-ea22-4c10-b254-6423457cc8dd-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b478fbf3-ea22-4c10-b254-6423457cc8dd\") " pod="openstack/ovsdbserver-sb-0" Mar 12 21:24:27.331980 master-0 kubenswrapper[31456]: I0312 21:24:27.331942 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b478fbf3-ea22-4c10-b254-6423457cc8dd-config\") pod \"ovsdbserver-sb-0\" (UID: \"b478fbf3-ea22-4c10-b254-6423457cc8dd\") " pod="openstack/ovsdbserver-sb-0" Mar 12 21:24:27.332175 master-0 kubenswrapper[31456]: I0312 21:24:27.332158 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b478fbf3-ea22-4c10-b254-6423457cc8dd-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"b478fbf3-ea22-4c10-b254-6423457cc8dd\") " pod="openstack/ovsdbserver-sb-0" Mar 12 21:24:27.332311 master-0 kubenswrapper[31456]: I0312 21:24:27.332266 31456 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 21:24:27.332401 master-0 kubenswrapper[31456]: I0312 21:24:27.332336 31456 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-41dbc1f4-0ad7-4af9-aa1c-02c666601a95\" (UniqueName: \"kubernetes.io/csi/topolvm.io^60371fe3-af44-4a58-a4a0-98fee5edefbe\") pod \"ovsdbserver-sb-0\" (UID: \"b478fbf3-ea22-4c10-b254-6423457cc8dd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/08da45dcafac3dde238de2f051407f70e99ab71f804fa0f8cc1bd9cbed356b8f/globalmount\"" pod="openstack/ovsdbserver-sb-0" Mar 12 21:24:27.332742 master-0 kubenswrapper[31456]: I0312 21:24:27.332692 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b478fbf3-ea22-4c10-b254-6423457cc8dd-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"b478fbf3-ea22-4c10-b254-6423457cc8dd\") " pod="openstack/ovsdbserver-sb-0" Mar 12 21:24:27.332856 master-0 kubenswrapper[31456]: I0312 21:24:27.332831 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b478fbf3-ea22-4c10-b254-6423457cc8dd-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b478fbf3-ea22-4c10-b254-6423457cc8dd\") " pod="openstack/ovsdbserver-sb-0" Mar 12 21:24:27.334080 master-0 kubenswrapper[31456]: I0312 21:24:27.334035 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b478fbf3-ea22-4c10-b254-6423457cc8dd-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"b478fbf3-ea22-4c10-b254-6423457cc8dd\") " pod="openstack/ovsdbserver-sb-0" Mar 12 21:24:27.334170 master-0 kubenswrapper[31456]: I0312 21:24:27.334133 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b478fbf3-ea22-4c10-b254-6423457cc8dd-config\") pod \"ovsdbserver-sb-0\" (UID: \"b478fbf3-ea22-4c10-b254-6423457cc8dd\") " 
pod="openstack/ovsdbserver-sb-0"
Mar 12 21:24:27.338536 master-0 kubenswrapper[31456]: I0312 21:24:27.338517 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b478fbf3-ea22-4c10-b254-6423457cc8dd-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"b478fbf3-ea22-4c10-b254-6423457cc8dd\") " pod="openstack/ovsdbserver-sb-0"
Mar 12 21:24:27.346130 master-0 kubenswrapper[31456]: I0312 21:24:27.346084 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b478fbf3-ea22-4c10-b254-6423457cc8dd-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b478fbf3-ea22-4c10-b254-6423457cc8dd\") " pod="openstack/ovsdbserver-sb-0"
Mar 12 21:24:27.349872 master-0 kubenswrapper[31456]: I0312 21:24:27.349741 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whkw9\" (UniqueName: \"kubernetes.io/projected/b478fbf3-ea22-4c10-b254-6423457cc8dd-kube-api-access-whkw9\") pod \"ovsdbserver-sb-0\" (UID: \"b478fbf3-ea22-4c10-b254-6423457cc8dd\") " pod="openstack/ovsdbserver-sb-0"
Mar 12 21:24:28.747992 master-0 kubenswrapper[31456]: I0312 21:24:28.747936 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-41dbc1f4-0ad7-4af9-aa1c-02c666601a95\" (UniqueName: \"kubernetes.io/csi/topolvm.io^60371fe3-af44-4a58-a4a0-98fee5edefbe\") pod \"ovsdbserver-sb-0\" (UID: \"b478fbf3-ea22-4c10-b254-6423457cc8dd\") " pod="openstack/ovsdbserver-sb-0"
Mar 12 21:24:28.991278 master-0 kubenswrapper[31456]: I0312 21:24:28.991125 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Mar 12 21:24:30.971440 master-0 kubenswrapper[31456]: I0312 21:24:30.971384 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586dbdbb8c-9gbwl" event={"ID":"3db71973-0c81-4806-b0f5-435f08829dcc","Type":"ContainerStarted","Data":"4d5641686d8e24125fff14150de2d042e3e8b57fa6de8e6dc1fdf7bb25fb3481"}
Mar 12 21:24:31.072431 master-0 kubenswrapper[31456]: I0312 21:24:31.072379 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Mar 12 21:24:31.197359 master-0 kubenswrapper[31456]: W0312 21:24:31.197285 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5837dd6c_30f0_4736_a8de_2ddb74041d5e.slice/crio-87379be5e07443f9d6f7ed4796f0aa7f41d2aa905d7ebb9c29aa49dd92ae413f WatchSource:0}: Error finding container 87379be5e07443f9d6f7ed4796f0aa7f41d2aa905d7ebb9c29aa49dd92ae413f: Status 404 returned error can't find the container with id 87379be5e07443f9d6f7ed4796f0aa7f41d2aa905d7ebb9c29aa49dd92ae413f
Mar 12 21:24:31.484135 master-0 kubenswrapper[31456]: I0312 21:24:31.484079 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 12 21:24:31.488927 master-0 kubenswrapper[31456]: I0312 21:24:31.488557 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 12 21:24:31.492838 master-0 kubenswrapper[31456]: W0312 21:24:31.492798 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bd151b8_f0b5_4fbe_8ddb_7fd540c29cbc.slice/crio-1af4c0ce0fb01e7ec98a9f96e1aa7afaf83384c858017d40e401d41a5e86d172 WatchSource:0}: Error finding container 1af4c0ce0fb01e7ec98a9f96e1aa7afaf83384c858017d40e401d41a5e86d172: Status 404 returned error can't find the container with id 1af4c0ce0fb01e7ec98a9f96e1aa7afaf83384c858017d40e401d41a5e86d172
Mar 12 21:24:31.495654 master-0 kubenswrapper[31456]: W0312 21:24:31.495147 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c43c65e_4b3a_4a3c_b0bd_b3f3f858469d.slice/crio-27cc87531165c26b8be2971a955577b5d2c73bddafbaa63a52dc8a9806cee8fb WatchSource:0}: Error finding container 27cc87531165c26b8be2971a955577b5d2c73bddafbaa63a52dc8a9806cee8fb: Status 404 returned error can't find the container with id 27cc87531165c26b8be2971a955577b5d2c73bddafbaa63a52dc8a9806cee8fb
Mar 12 21:24:31.503867 master-0 kubenswrapper[31456]: I0312 21:24:31.503786 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Mar 12 21:24:31.721762 master-0 kubenswrapper[31456]: I0312 21:24:31.721718 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Mar 12 21:24:31.734561 master-0 kubenswrapper[31456]: W0312 21:24:31.734520 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56559631_1206_49f6_8ebe_b5767087ef8e.slice/crio-9f5f8f007282e22cc700ba4063036eb75fd042040cdd79a6f5259ad600d423a4 WatchSource:0}: Error finding container 9f5f8f007282e22cc700ba4063036eb75fd042040cdd79a6f5259ad600d423a4: Status 404 returned error can't find the container with id 9f5f8f007282e22cc700ba4063036eb75fd042040cdd79a6f5259ad600d423a4
Mar 12 21:24:31.744987 master-0 kubenswrapper[31456]: I0312 21:24:31.744088 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-b7rpf"]
Mar 12 21:24:31.913420 master-0 kubenswrapper[31456]: I0312 21:24:31.913344 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Mar 12 21:24:31.985575 master-0 kubenswrapper[31456]: I0312 21:24:31.985436 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-b7rpf" event={"ID":"2fb848ef-b2bf-429a-a01f-53240dc3bd0a","Type":"ContainerStarted","Data":"25808fb4e7a738c20885432480dbc44d08b1a90849a1fb2deea36abb447588ca"}
Mar 12 21:24:31.989788 master-0 kubenswrapper[31456]: I0312 21:24:31.988656 31456 generic.go:334] "Generic (PLEG): container finished" podID="3db71973-0c81-4806-b0f5-435f08829dcc" containerID="4d5641686d8e24125fff14150de2d042e3e8b57fa6de8e6dc1fdf7bb25fb3481" exitCode=0
Mar 12 21:24:31.989788 master-0 kubenswrapper[31456]: I0312 21:24:31.988700 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586dbdbb8c-9gbwl" event={"ID":"3db71973-0c81-4806-b0f5-435f08829dcc","Type":"ContainerDied","Data":"4d5641686d8e24125fff14150de2d042e3e8b57fa6de8e6dc1fdf7bb25fb3481"}
Mar 12 21:24:31.999391 master-0 kubenswrapper[31456]: I0312 21:24:31.999270 31456 generic.go:334] "Generic (PLEG): container finished" podID="56fd9b51-3ae1-48e0-8966-0e18e5ce9b70" containerID="484d7b4b3c37c873cac6c6781c4200669d52678c41616f424b5e67549076da9d" exitCode=0
Mar 12 21:24:31.999391 master-0 kubenswrapper[31456]: I0312 21:24:31.999354 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-t8rdg" event={"ID":"56fd9b51-3ae1-48e0-8966-0e18e5ce9b70","Type":"ContainerDied","Data":"484d7b4b3c37c873cac6c6781c4200669d52678c41616f424b5e67549076da9d"}
Mar 12 21:24:32.001117 master-0 kubenswrapper[31456]: I0312 21:24:32.001002 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"56559631-1206-49f6-8ebe-b5767087ef8e","Type":"ContainerStarted","Data":"9f5f8f007282e22cc700ba4063036eb75fd042040cdd79a6f5259ad600d423a4"}
Mar 12 21:24:32.003855 master-0 kubenswrapper[31456]: I0312 21:24:32.003632 31456 generic.go:334] "Generic (PLEG): container finished" podID="d282c2c6-09bd-4fa5-a4e2-0dd250332ade" containerID="b38f8dbf3b8c8787e4530b857cc8850bae209c9e37258252a42c44a2f849a697" exitCode=0
Mar 12 21:24:32.003855 master-0 kubenswrapper[31456]: I0312 21:24:32.003677 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685c76cf85-gvpkv" event={"ID":"d282c2c6-09bd-4fa5-a4e2-0dd250332ade","Type":"ContainerDied","Data":"b38f8dbf3b8c8787e4530b857cc8850bae209c9e37258252a42c44a2f849a697"}
Mar 12 21:24:32.023114 master-0 kubenswrapper[31456]: I0312 21:24:32.023010 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8e067175-5771-473f-85a8-af63a27ee30a","Type":"ContainerStarted","Data":"efa151af1851c457f6cd6acb3c98094e9b150555c06f6c96467ab0d79945f74c"}
Mar 12 21:24:32.039111 master-0 kubenswrapper[31456]: I0312 21:24:32.037335 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5837dd6c-30f0-4736-a8de-2ddb74041d5e","Type":"ContainerStarted","Data":"87379be5e07443f9d6f7ed4796f0aa7f41d2aa905d7ebb9c29aa49dd92ae413f"}
Mar 12 21:24:32.041685 master-0 kubenswrapper[31456]: I0312 21:24:32.041641 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc","Type":"ContainerStarted","Data":"1af4c0ce0fb01e7ec98a9f96e1aa7afaf83384c858017d40e401d41a5e86d172"}
Mar 12 21:24:32.050681 master-0 kubenswrapper[31456]: I0312 21:24:32.050619 31456 generic.go:334] "Generic (PLEG): container finished" podID="d04e1418-b358-485d-9a03-ed37d0f15d96" containerID="11d8c2f0a32910b544ca1fe01b5c3e74df83b0d5a67985a2f7315dde61a17673" exitCode=0
Mar 12 21:24:32.050986 master-0 kubenswrapper[31456]: I0312 21:24:32.050726 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8476fd89bc-69h5n" event={"ID":"d04e1418-b358-485d-9a03-ed37d0f15d96","Type":"ContainerDied","Data":"11d8c2f0a32910b544ca1fe01b5c3e74df83b0d5a67985a2f7315dde61a17673"}
Mar 12 21:24:32.054967 master-0 kubenswrapper[31456]: I0312 21:24:32.054892 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"565a1656-5522-446c-95c9-b5cf8218dfef","Type":"ContainerStarted","Data":"2cb72ef25c5bdb4b14f7d7e2a457f39a73b4572b35bc340bb4fdd35933be7484"}
Mar 12 21:24:32.058464 master-0 kubenswrapper[31456]: I0312 21:24:32.058427 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d","Type":"ContainerStarted","Data":"27cc87531165c26b8be2971a955577b5d2c73bddafbaa63a52dc8a9806cee8fb"}
Mar 12 21:24:32.616053 master-0 kubenswrapper[31456]: I0312 21:24:32.615184 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rdl65"]
Mar 12 21:24:32.772247 master-0 kubenswrapper[31456]: I0312 21:24:32.772186 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685c76cf85-gvpkv"
Mar 12 21:24:32.780694 master-0 kubenswrapper[31456]: I0312 21:24:32.780572 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8476fd89bc-69h5n"
Mar 12 21:24:32.882354 master-0 kubenswrapper[31456]: I0312 21:24:32.882120 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Mar 12 21:24:32.886513 master-0 kubenswrapper[31456]: I0312 21:24:32.886476 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmzk4\" (UniqueName: \"kubernetes.io/projected/d282c2c6-09bd-4fa5-a4e2-0dd250332ade-kube-api-access-tmzk4\") pod \"d282c2c6-09bd-4fa5-a4e2-0dd250332ade\" (UID: \"d282c2c6-09bd-4fa5-a4e2-0dd250332ade\") "
Mar 12 21:24:32.886830 master-0 kubenswrapper[31456]: I0312 21:24:32.886815 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d04e1418-b358-485d-9a03-ed37d0f15d96-dns-svc\") pod \"d04e1418-b358-485d-9a03-ed37d0f15d96\" (UID: \"d04e1418-b358-485d-9a03-ed37d0f15d96\") "
Mar 12 21:24:32.886963 master-0 kubenswrapper[31456]: I0312 21:24:32.886950 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d282c2c6-09bd-4fa5-a4e2-0dd250332ade-config\") pod \"d282c2c6-09bd-4fa5-a4e2-0dd250332ade\" (UID: \"d282c2c6-09bd-4fa5-a4e2-0dd250332ade\") "
Mar 12 21:24:32.887071 master-0 kubenswrapper[31456]: I0312 21:24:32.887060 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d04e1418-b358-485d-9a03-ed37d0f15d96-config\") pod \"d04e1418-b358-485d-9a03-ed37d0f15d96\" (UID: \"d04e1418-b358-485d-9a03-ed37d0f15d96\") "
Mar 12 21:24:32.887355 master-0 kubenswrapper[31456]: I0312 21:24:32.887340 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8drfs\" (UniqueName: \"kubernetes.io/projected/d04e1418-b358-485d-9a03-ed37d0f15d96-kube-api-access-8drfs\") pod \"d04e1418-b358-485d-9a03-ed37d0f15d96\" (UID: \"d04e1418-b358-485d-9a03-ed37d0f15d96\") "
Mar 12 21:24:32.908392 master-0 kubenswrapper[31456]: I0312 21:24:32.908206 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d04e1418-b358-485d-9a03-ed37d0f15d96-kube-api-access-8drfs" (OuterVolumeSpecName: "kube-api-access-8drfs") pod "d04e1418-b358-485d-9a03-ed37d0f15d96" (UID: "d04e1418-b358-485d-9a03-ed37d0f15d96"). InnerVolumeSpecName "kube-api-access-8drfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 21:24:32.908392 master-0 kubenswrapper[31456]: I0312 21:24:32.908326 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d282c2c6-09bd-4fa5-a4e2-0dd250332ade-kube-api-access-tmzk4" (OuterVolumeSpecName: "kube-api-access-tmzk4") pod "d282c2c6-09bd-4fa5-a4e2-0dd250332ade" (UID: "d282c2c6-09bd-4fa5-a4e2-0dd250332ade"). InnerVolumeSpecName "kube-api-access-tmzk4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 21:24:32.909127 master-0 kubenswrapper[31456]: I0312 21:24:32.909079 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d04e1418-b358-485d-9a03-ed37d0f15d96-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d04e1418-b358-485d-9a03-ed37d0f15d96" (UID: "d04e1418-b358-485d-9a03-ed37d0f15d96"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:24:32.924902 master-0 kubenswrapper[31456]: I0312 21:24:32.924860 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d282c2c6-09bd-4fa5-a4e2-0dd250332ade-config" (OuterVolumeSpecName: "config") pod "d282c2c6-09bd-4fa5-a4e2-0dd250332ade" (UID: "d282c2c6-09bd-4fa5-a4e2-0dd250332ade"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:24:32.931599 master-0 kubenswrapper[31456]: I0312 21:24:32.931543 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d04e1418-b358-485d-9a03-ed37d0f15d96-config" (OuterVolumeSpecName: "config") pod "d04e1418-b358-485d-9a03-ed37d0f15d96" (UID: "d04e1418-b358-485d-9a03-ed37d0f15d96"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:24:32.989512 master-0 kubenswrapper[31456]: I0312 21:24:32.989454 31456 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d04e1418-b358-485d-9a03-ed37d0f15d96-dns-svc\") on node \"master-0\" DevicePath \"\""
Mar 12 21:24:32.989512 master-0 kubenswrapper[31456]: I0312 21:24:32.989500 31456 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d282c2c6-09bd-4fa5-a4e2-0dd250332ade-config\") on node \"master-0\" DevicePath \"\""
Mar 12 21:24:32.989512 master-0 kubenswrapper[31456]: I0312 21:24:32.989531 31456 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d04e1418-b358-485d-9a03-ed37d0f15d96-config\") on node \"master-0\" DevicePath \"\""
Mar 12 21:24:32.989512 master-0 kubenswrapper[31456]: I0312 21:24:32.989543 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8drfs\" (UniqueName: \"kubernetes.io/projected/d04e1418-b358-485d-9a03-ed37d0f15d96-kube-api-access-8drfs\") on node \"master-0\" DevicePath \"\""
Mar 12 21:24:32.989512 master-0 kubenswrapper[31456]: I0312 21:24:32.989554 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmzk4\" (UniqueName: \"kubernetes.io/projected/d282c2c6-09bd-4fa5-a4e2-0dd250332ade-kube-api-access-tmzk4\") on node \"master-0\" DevicePath \"\""
Mar 12 21:24:33.072595 master-0 kubenswrapper[31456]: I0312 21:24:33.072537 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rdl65" event={"ID":"87ceb3d2-d1d8-42f6-9867-b5450da8a9f7","Type":"ContainerStarted","Data":"0cc06108e6faf7b8ef459a6190da615bd95cbc261148f2863f00b8c126a69046"}
Mar 12 21:24:33.076923 master-0 kubenswrapper[31456]: I0312 21:24:33.076867 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586dbdbb8c-9gbwl" event={"ID":"3db71973-0c81-4806-b0f5-435f08829dcc","Type":"ContainerStarted","Data":"7cbd5d02d59e05063a3b6ebe2c5e04ca5fd3696004ec9728231bf0e5a332b4c5"}
Mar 12 21:24:33.077870 master-0 kubenswrapper[31456]: I0312 21:24:33.077681 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-586dbdbb8c-9gbwl"
Mar 12 21:24:33.083340 master-0 kubenswrapper[31456]: I0312 21:24:33.083303 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-t8rdg" event={"ID":"56fd9b51-3ae1-48e0-8966-0e18e5ce9b70","Type":"ContainerStarted","Data":"8d568c73550fd753b9852eb8b9af7a83bf61470823e5d8f67643a9a57bd482d1"}
Mar 12 21:24:33.083533 master-0 kubenswrapper[31456]: I0312 21:24:33.083511 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6ff8fd9d5c-t8rdg"
Mar 12 21:24:33.091608 master-0 kubenswrapper[31456]: I0312 21:24:33.091531 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8476fd89bc-69h5n" event={"ID":"d04e1418-b358-485d-9a03-ed37d0f15d96","Type":"ContainerDied","Data":"eca9d83043cb77e624bcffcf8479a2ef45621efcdda3bed02738a902a1a133cd"}
Mar 12 21:24:33.093497 master-0 kubenswrapper[31456]: I0312 21:24:33.091849 31456 scope.go:117] "RemoveContainer" containerID="11d8c2f0a32910b544ca1fe01b5c3e74df83b0d5a67985a2f7315dde61a17673"
Mar 12 21:24:33.093497 master-0 kubenswrapper[31456]: I0312 21:24:33.091984 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8476fd89bc-69h5n"
Mar 12 21:24:33.098575 master-0 kubenswrapper[31456]: I0312 21:24:33.098508 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685c76cf85-gvpkv" event={"ID":"d282c2c6-09bd-4fa5-a4e2-0dd250332ade","Type":"ContainerDied","Data":"2f80e98b6599157f9792a50a92dae11e8324b1de1d5043176efe65cb588b83f6"}
Mar 12 21:24:33.098665 master-0 kubenswrapper[31456]: I0312 21:24:33.098582 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685c76cf85-gvpkv"
Mar 12 21:24:33.102867 master-0 kubenswrapper[31456]: I0312 21:24:33.102222 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b478fbf3-ea22-4c10-b254-6423457cc8dd","Type":"ContainerStarted","Data":"b716a8885885358e63ef63fa91bc76906da4d642ef189a22e9fdd7a6b4fe3dad"}
Mar 12 21:24:33.120103 master-0 kubenswrapper[31456]: I0312 21:24:33.117876 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-586dbdbb8c-9gbwl" podStartSLOduration=4.215001563 podStartE2EDuration="22.117852132s" podCreationTimestamp="2026-03-12 21:24:11 +0000 UTC" firstStartedPulling="2026-03-12 21:24:12.769007543 +0000 UTC m=+913.843612871" lastFinishedPulling="2026-03-12 21:24:30.671858112 +0000 UTC m=+931.746463440" observedRunningTime="2026-03-12 21:24:33.100728988 +0000 UTC m=+934.175334316" watchObservedRunningTime="2026-03-12 21:24:33.117852132 +0000 UTC m=+934.192457460"
Mar 12 21:24:33.123509 master-0 kubenswrapper[31456]: I0312 21:24:33.121495 31456 scope.go:117] "RemoveContainer" containerID="b38f8dbf3b8c8787e4530b857cc8850bae209c9e37258252a42c44a2f849a697"
Mar 12 21:24:33.148627 master-0 kubenswrapper[31456]: I0312 21:24:33.147559 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6ff8fd9d5c-t8rdg" podStartSLOduration=3.827962693 podStartE2EDuration="21.14753643s" podCreationTimestamp="2026-03-12 21:24:12 +0000 UTC" firstStartedPulling="2026-03-12 21:24:13.31941411 +0000 UTC m=+914.394019438" lastFinishedPulling="2026-03-12 21:24:30.638987847 +0000 UTC m=+931.713593175" observedRunningTime="2026-03-12 21:24:33.137384475 +0000 UTC m=+934.211989823" watchObservedRunningTime="2026-03-12 21:24:33.14753643 +0000 UTC m=+934.222141758"
Mar 12 21:24:33.220608 master-0 kubenswrapper[31456]: I0312 21:24:33.220522 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-69h5n"]
Mar 12 21:24:33.227847 master-0 kubenswrapper[31456]: I0312 21:24:33.223115 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-69h5n"]
Mar 12 21:24:33.283689 master-0 kubenswrapper[31456]: I0312 21:24:33.282228 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-gvpkv"]
Mar 12 21:24:33.307549 master-0 kubenswrapper[31456]: I0312 21:24:33.307494 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-gvpkv"]
Mar 12 21:24:35.183458 master-0 kubenswrapper[31456]: I0312 21:24:35.183411 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d04e1418-b358-485d-9a03-ed37d0f15d96" path="/var/lib/kubelet/pods/d04e1418-b358-485d-9a03-ed37d0f15d96/volumes"
Mar 12 21:24:35.184766 master-0 kubenswrapper[31456]: I0312 21:24:35.184748 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d282c2c6-09bd-4fa5-a4e2-0dd250332ade" path="/var/lib/kubelet/pods/d282c2c6-09bd-4fa5-a4e2-0dd250332ade/volumes"
Mar 12 21:24:37.061606 master-0 kubenswrapper[31456]: I0312 21:24:37.061506 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-586dbdbb8c-9gbwl"
Mar 12 21:24:37.797136 master-0 kubenswrapper[31456]: I0312 21:24:37.797024 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6ff8fd9d5c-t8rdg"
Mar 12 21:24:37.865209 master-0 kubenswrapper[31456]: I0312 21:24:37.863457 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586dbdbb8c-9gbwl"]
Mar 12 21:24:37.865209 master-0 kubenswrapper[31456]: I0312 21:24:37.863687 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-586dbdbb8c-9gbwl" podUID="3db71973-0c81-4806-b0f5-435f08829dcc" containerName="dnsmasq-dns" containerID="cri-o://7cbd5d02d59e05063a3b6ebe2c5e04ca5fd3696004ec9728231bf0e5a332b4c5" gracePeriod=10
Mar 12 21:24:41.183286 master-0 kubenswrapper[31456]: I0312 21:24:41.183154 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586dbdbb8c-9gbwl"
Mar 12 21:24:41.248578 master-0 kubenswrapper[31456]: I0312 21:24:41.248508 31456 generic.go:334] "Generic (PLEG): container finished" podID="3db71973-0c81-4806-b0f5-435f08829dcc" containerID="7cbd5d02d59e05063a3b6ebe2c5e04ca5fd3696004ec9728231bf0e5a332b4c5" exitCode=0
Mar 12 21:24:41.248758 master-0 kubenswrapper[31456]: I0312 21:24:41.248593 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586dbdbb8c-9gbwl" event={"ID":"3db71973-0c81-4806-b0f5-435f08829dcc","Type":"ContainerDied","Data":"7cbd5d02d59e05063a3b6ebe2c5e04ca5fd3696004ec9728231bf0e5a332b4c5"}
Mar 12 21:24:41.248758 master-0 kubenswrapper[31456]: I0312 21:24:41.248641 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586dbdbb8c-9gbwl" event={"ID":"3db71973-0c81-4806-b0f5-435f08829dcc","Type":"ContainerDied","Data":"7ce94fb33ccb49a46e3298493b35fe3f090add45fc471287ba4778204dc70d5f"}
Mar 12 21:24:41.248758 master-0 kubenswrapper[31456]: I0312 21:24:41.248670 31456 scope.go:117] "RemoveContainer" containerID="7cbd5d02d59e05063a3b6ebe2c5e04ca5fd3696004ec9728231bf0e5a332b4c5"
Mar 12 21:24:41.249041 master-0 kubenswrapper[31456]: I0312 21:24:41.249016 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586dbdbb8c-9gbwl"
Mar 12 21:24:41.328511 master-0 kubenswrapper[31456]: I0312 21:24:41.325003 31456 scope.go:117] "RemoveContainer" containerID="4d5641686d8e24125fff14150de2d042e3e8b57fa6de8e6dc1fdf7bb25fb3481"
Mar 12 21:24:41.328511 master-0 kubenswrapper[31456]: I0312 21:24:41.328067 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3db71973-0c81-4806-b0f5-435f08829dcc-config\") pod \"3db71973-0c81-4806-b0f5-435f08829dcc\" (UID: \"3db71973-0c81-4806-b0f5-435f08829dcc\") "
Mar 12 21:24:41.328511 master-0 kubenswrapper[31456]: I0312 21:24:41.328247 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf49k\" (UniqueName: \"kubernetes.io/projected/3db71973-0c81-4806-b0f5-435f08829dcc-kube-api-access-cf49k\") pod \"3db71973-0c81-4806-b0f5-435f08829dcc\" (UID: \"3db71973-0c81-4806-b0f5-435f08829dcc\") "
Mar 12 21:24:41.328511 master-0 kubenswrapper[31456]: I0312 21:24:41.328381 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3db71973-0c81-4806-b0f5-435f08829dcc-dns-svc\") pod \"3db71973-0c81-4806-b0f5-435f08829dcc\" (UID: \"3db71973-0c81-4806-b0f5-435f08829dcc\") "
Mar 12 21:24:41.343754 master-0 kubenswrapper[31456]: I0312 21:24:41.343674 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3db71973-0c81-4806-b0f5-435f08829dcc-kube-api-access-cf49k" (OuterVolumeSpecName: "kube-api-access-cf49k") pod "3db71973-0c81-4806-b0f5-435f08829dcc" (UID: "3db71973-0c81-4806-b0f5-435f08829dcc"). InnerVolumeSpecName "kube-api-access-cf49k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 21:24:41.438152 master-0 kubenswrapper[31456]: I0312 21:24:41.438104 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cf49k\" (UniqueName: \"kubernetes.io/projected/3db71973-0c81-4806-b0f5-435f08829dcc-kube-api-access-cf49k\") on node \"master-0\" DevicePath \"\""
Mar 12 21:24:41.462133 master-0 kubenswrapper[31456]: I0312 21:24:41.462078 31456 scope.go:117] "RemoveContainer" containerID="7cbd5d02d59e05063a3b6ebe2c5e04ca5fd3696004ec9728231bf0e5a332b4c5"
Mar 12 21:24:41.463714 master-0 kubenswrapper[31456]: E0312 21:24:41.463640 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cbd5d02d59e05063a3b6ebe2c5e04ca5fd3696004ec9728231bf0e5a332b4c5\": container with ID starting with 7cbd5d02d59e05063a3b6ebe2c5e04ca5fd3696004ec9728231bf0e5a332b4c5 not found: ID does not exist" containerID="7cbd5d02d59e05063a3b6ebe2c5e04ca5fd3696004ec9728231bf0e5a332b4c5"
Mar 12 21:24:41.463783 master-0 kubenswrapper[31456]: I0312 21:24:41.463731 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cbd5d02d59e05063a3b6ebe2c5e04ca5fd3696004ec9728231bf0e5a332b4c5"} err="failed to get container status \"7cbd5d02d59e05063a3b6ebe2c5e04ca5fd3696004ec9728231bf0e5a332b4c5\": rpc error: code = NotFound desc = could not find container \"7cbd5d02d59e05063a3b6ebe2c5e04ca5fd3696004ec9728231bf0e5a332b4c5\": container with ID starting with 7cbd5d02d59e05063a3b6ebe2c5e04ca5fd3696004ec9728231bf0e5a332b4c5 not found: ID does not exist"
Mar 12 21:24:41.463856 master-0 kubenswrapper[31456]: I0312 21:24:41.463778 31456 scope.go:117] "RemoveContainer" containerID="4d5641686d8e24125fff14150de2d042e3e8b57fa6de8e6dc1fdf7bb25fb3481"
Mar 12 21:24:41.464648 master-0 kubenswrapper[31456]: E0312 21:24:41.464599 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d5641686d8e24125fff14150de2d042e3e8b57fa6de8e6dc1fdf7bb25fb3481\": container with ID starting with 4d5641686d8e24125fff14150de2d042e3e8b57fa6de8e6dc1fdf7bb25fb3481 not found: ID does not exist" containerID="4d5641686d8e24125fff14150de2d042e3e8b57fa6de8e6dc1fdf7bb25fb3481"
Mar 12 21:24:41.464736 master-0 kubenswrapper[31456]: I0312 21:24:41.464689 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d5641686d8e24125fff14150de2d042e3e8b57fa6de8e6dc1fdf7bb25fb3481"} err="failed to get container status \"4d5641686d8e24125fff14150de2d042e3e8b57fa6de8e6dc1fdf7bb25fb3481\": rpc error: code = NotFound desc = could not find container \"4d5641686d8e24125fff14150de2d042e3e8b57fa6de8e6dc1fdf7bb25fb3481\": container with ID starting with 4d5641686d8e24125fff14150de2d042e3e8b57fa6de8e6dc1fdf7bb25fb3481 not found: ID does not exist"
Mar 12 21:24:41.529197 master-0 kubenswrapper[31456]: I0312 21:24:41.528138 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3db71973-0c81-4806-b0f5-435f08829dcc-config" (OuterVolumeSpecName: "config") pod "3db71973-0c81-4806-b0f5-435f08829dcc" (UID: "3db71973-0c81-4806-b0f5-435f08829dcc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:24:41.531723 master-0 kubenswrapper[31456]: I0312 21:24:41.531611 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3db71973-0c81-4806-b0f5-435f08829dcc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3db71973-0c81-4806-b0f5-435f08829dcc" (UID: "3db71973-0c81-4806-b0f5-435f08829dcc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:24:41.539682 master-0 kubenswrapper[31456]: I0312 21:24:41.539627 31456 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3db71973-0c81-4806-b0f5-435f08829dcc-dns-svc\") on node \"master-0\" DevicePath \"\""
Mar 12 21:24:41.539682 master-0 kubenswrapper[31456]: I0312 21:24:41.539679 31456 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3db71973-0c81-4806-b0f5-435f08829dcc-config\") on node \"master-0\" DevicePath \"\""
Mar 12 21:24:41.625577 master-0 kubenswrapper[31456]: I0312 21:24:41.625543 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586dbdbb8c-9gbwl"]
Mar 12 21:24:41.641335 master-0 kubenswrapper[31456]: I0312 21:24:41.639724 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-586dbdbb8c-9gbwl"]
Mar 12 21:24:42.277450 master-0 kubenswrapper[31456]: I0312 21:24:42.277244 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"56559631-1206-49f6-8ebe-b5767087ef8e","Type":"ContainerStarted","Data":"764ccfaf0d427b7048fc2c8528dbc01f8017be0a950314b5c525998812213d76"}
Mar 12 21:24:42.280341 master-0 kubenswrapper[31456]: I0312 21:24:42.278480 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Mar 12 21:24:42.293427 master-0 kubenswrapper[31456]: I0312 21:24:42.293337 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"565a1656-5522-446c-95c9-b5cf8218dfef","Type":"ContainerStarted","Data":"66ea827a54d41a6f631abec659a450342fac0af1442787131a6a4c21dca50c53"}
Mar 12 21:24:42.299327 master-0 kubenswrapper[31456]: I0312 21:24:42.297035 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5837dd6c-30f0-4736-a8de-2ddb74041d5e","Type":"ContainerStarted","Data":"1e4e8e0b0773b433c3ab064133499e7bbcf01b030320abe3f2692d4724ba573f"}
Mar 12 21:24:42.312893 master-0 kubenswrapper[31456]: I0312 21:24:42.311730 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d","Type":"ContainerStarted","Data":"8c56ab034e30af7548994f0794502fec3ec841bbf2ffe950f697c6ee87bd706e"}
Mar 12 21:24:42.323905 master-0 kubenswrapper[31456]: I0312 21:24:42.319787 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-b7rpf" event={"ID":"2fb848ef-b2bf-429a-a01f-53240dc3bd0a","Type":"ContainerStarted","Data":"2d6d610ca8ccd809ab5be23605c54a580f785acd43062c669c625ab789648095"}
Mar 12 21:24:42.323905 master-0 kubenswrapper[31456]: I0312 21:24:42.319986 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-b7rpf"
Mar 12 21:24:42.324134 master-0 kubenswrapper[31456]: I0312 21:24:42.324000 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b478fbf3-ea22-4c10-b254-6423457cc8dd","Type":"ContainerStarted","Data":"4f24f657696bde6911534e78b899e63199cd8d00c1ed6d3d890f2bbe59fa86d8"}
Mar 12 21:24:42.328018 master-0 kubenswrapper[31456]: I0312 21:24:42.327776 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rdl65" event={"ID":"87ceb3d2-d1d8-42f6-9867-b5450da8a9f7","Type":"ContainerStarted","Data":"c3df717242ce3cee19c464dea6acae24251e739cf8149a8cef668c143c2d0865"}
Mar 12 21:24:42.339407 master-0 kubenswrapper[31456]: I0312 21:24:42.339248 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=17.155742399 podStartE2EDuration="26.339226637s" podCreationTimestamp="2026-03-12 21:24:16 +0000 UTC" firstStartedPulling="2026-03-12 21:24:31.754343471 +0000 UTC m=+932.828948799" lastFinishedPulling="2026-03-12 21:24:40.937827709 +0000 UTC m=+942.012433037" observedRunningTime="2026-03-12 21:24:42.30751364 +0000 UTC m=+943.382118968" watchObservedRunningTime="2026-03-12 21:24:42.339226637 +0000 UTC m=+943.413831965"
Mar 12 21:24:42.392574 master-0 kubenswrapper[31456]: I0312 21:24:42.392438 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-b7rpf" podStartSLOduration=11.133939252 podStartE2EDuration="20.392420034s" podCreationTimestamp="2026-03-12 21:24:22 +0000 UTC" firstStartedPulling="2026-03-12 21:24:31.757754224 +0000 UTC m=+932.832359552" lastFinishedPulling="2026-03-12 21:24:41.016235006 +0000 UTC m=+942.090840334" observedRunningTime="2026-03-12 21:24:42.391883472 +0000 UTC m=+943.466488800" watchObservedRunningTime="2026-03-12 21:24:42.392420034 +0000 UTC m=+943.467025372"
Mar 12 21:24:43.187209 master-0 kubenswrapper[31456]: I0312 21:24:43.187076 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3db71973-0c81-4806-b0f5-435f08829dcc" path="/var/lib/kubelet/pods/3db71973-0c81-4806-b0f5-435f08829dcc/volumes"
Mar 12 21:24:43.343368 master-0 kubenswrapper[31456]: I0312 21:24:43.343315 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc","Type":"ContainerStarted","Data":"138616615e61013d25931cad9e2a90c68377bb0c69c117792e8205ee9678e246"}
Mar 12 21:24:43.346960 master-0 kubenswrapper[31456]: I0312 21:24:43.346922 31456 generic.go:334] "Generic (PLEG): container finished" podID="87ceb3d2-d1d8-42f6-9867-b5450da8a9f7" containerID="c3df717242ce3cee19c464dea6acae24251e739cf8149a8cef668c143c2d0865" exitCode=0
Mar 12 21:24:43.347114 master-0 kubenswrapper[31456]: I0312 21:24:43.347093 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rdl65" event={"ID":"87ceb3d2-d1d8-42f6-9867-b5450da8a9f7","Type":"ContainerDied","Data":"c3df717242ce3cee19c464dea6acae24251e739cf8149a8cef668c143c2d0865"}
Mar 12 21:24:43.352727 master-0 kubenswrapper[31456]: I0312 21:24:43.352704 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8e067175-5771-473f-85a8-af63a27ee30a","Type":"ContainerStarted","Data":"9d805d9cfa171ac267ac91c92953f65d67a09b02c36ab5bd6e12b268be8b9570"}
Mar 12 21:24:44.368869 master-0 kubenswrapper[31456]: I0312 21:24:44.368796 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rdl65" event={"ID":"87ceb3d2-d1d8-42f6-9867-b5450da8a9f7","Type":"ContainerStarted","Data":"8ff20d3339593ceec6ab4b9ca7924963e4d336caeb2d51f757c9d5d10d7b1675"}
Mar 12 21:24:46.195862 master-0 kubenswrapper[31456]: I0312 21:24:46.195765 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-ggf7j"]
Mar 12 21:24:46.196594 master-0 kubenswrapper[31456]: E0312 21:24:46.196518 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db71973-0c81-4806-b0f5-435f08829dcc" containerName="init"
Mar 12 21:24:46.196594 master-0 kubenswrapper[31456]: I0312 21:24:46.196534 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db71973-0c81-4806-b0f5-435f08829dcc" containerName="init"
Mar 12 21:24:46.196594 master-0 kubenswrapper[31456]: E0312 21:24:46.196552 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d04e1418-b358-485d-9a03-ed37d0f15d96" containerName="init"
Mar 12 21:24:46.196774 master-0 kubenswrapper[31456]: I0312 21:24:46.196609 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="d04e1418-b358-485d-9a03-ed37d0f15d96" containerName="init"
Mar 12 21:24:46.196774 master-0 kubenswrapper[31456]: E0312 21:24:46.196625 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d282c2c6-09bd-4fa5-a4e2-0dd250332ade" containerName="init"
Mar 12 21:24:46.196774 master-0 kubenswrapper[31456]: I0312 21:24:46.196632 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="d282c2c6-09bd-4fa5-a4e2-0dd250332ade" containerName="init"
Mar 12 21:24:46.196774 master-0 kubenswrapper[31456]: E0312 21:24:46.196648 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db71973-0c81-4806-b0f5-435f08829dcc" containerName="dnsmasq-dns"
Mar 12 21:24:46.196774 master-0 kubenswrapper[31456]: I0312 21:24:46.196654 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db71973-0c81-4806-b0f5-435f08829dcc" containerName="dnsmasq-dns"
Mar 12 21:24:46.198250 master-0 kubenswrapper[31456]: I0312 21:24:46.198227 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="3db71973-0c81-4806-b0f5-435f08829dcc" containerName="dnsmasq-dns"
Mar 12 21:24:46.198341 master-0 kubenswrapper[31456]: I0312 21:24:46.198252 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="d282c2c6-09bd-4fa5-a4e2-0dd250332ade" containerName="init"
Mar 12 21:24:46.198341 master-0 kubenswrapper[31456]: I0312 21:24:46.198288 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="d04e1418-b358-485d-9a03-ed37d0f15d96" containerName="init"
Mar 12 21:24:46.199018 master-0 kubenswrapper[31456]: I0312 21:24:46.198999 31456 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/ovn-controller-metrics-ggf7j" Mar 12 21:24:46.202281 master-0 kubenswrapper[31456]: I0312 21:24:46.202140 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Mar 12 21:24:46.251461 master-0 kubenswrapper[31456]: I0312 21:24:46.251034 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stwf4\" (UniqueName: \"kubernetes.io/projected/4c2247af-3efc-43dd-b06b-4ee98d3073c4-kube-api-access-stwf4\") pod \"ovn-controller-metrics-ggf7j\" (UID: \"4c2247af-3efc-43dd-b06b-4ee98d3073c4\") " pod="openstack/ovn-controller-metrics-ggf7j" Mar 12 21:24:46.251461 master-0 kubenswrapper[31456]: I0312 21:24:46.251166 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c2247af-3efc-43dd-b06b-4ee98d3073c4-config\") pod \"ovn-controller-metrics-ggf7j\" (UID: \"4c2247af-3efc-43dd-b06b-4ee98d3073c4\") " pod="openstack/ovn-controller-metrics-ggf7j" Mar 12 21:24:46.251461 master-0 kubenswrapper[31456]: I0312 21:24:46.251274 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c2247af-3efc-43dd-b06b-4ee98d3073c4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-ggf7j\" (UID: \"4c2247af-3efc-43dd-b06b-4ee98d3073c4\") " pod="openstack/ovn-controller-metrics-ggf7j" Mar 12 21:24:46.251461 master-0 kubenswrapper[31456]: I0312 21:24:46.251299 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c2247af-3efc-43dd-b06b-4ee98d3073c4-combined-ca-bundle\") pod \"ovn-controller-metrics-ggf7j\" (UID: \"4c2247af-3efc-43dd-b06b-4ee98d3073c4\") " pod="openstack/ovn-controller-metrics-ggf7j" Mar 12 21:24:46.251868 master-0 
kubenswrapper[31456]: I0312 21:24:46.251527 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/4c2247af-3efc-43dd-b06b-4ee98d3073c4-ovs-rundir\") pod \"ovn-controller-metrics-ggf7j\" (UID: \"4c2247af-3efc-43dd-b06b-4ee98d3073c4\") " pod="openstack/ovn-controller-metrics-ggf7j" Mar 12 21:24:46.251868 master-0 kubenswrapper[31456]: I0312 21:24:46.251561 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/4c2247af-3efc-43dd-b06b-4ee98d3073c4-ovn-rundir\") pod \"ovn-controller-metrics-ggf7j\" (UID: \"4c2247af-3efc-43dd-b06b-4ee98d3073c4\") " pod="openstack/ovn-controller-metrics-ggf7j" Mar 12 21:24:46.315954 master-0 kubenswrapper[31456]: I0312 21:24:46.315800 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-ggf7j"] Mar 12 21:24:46.355796 master-0 kubenswrapper[31456]: I0312 21:24:46.354572 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c2247af-3efc-43dd-b06b-4ee98d3073c4-config\") pod \"ovn-controller-metrics-ggf7j\" (UID: \"4c2247af-3efc-43dd-b06b-4ee98d3073c4\") " pod="openstack/ovn-controller-metrics-ggf7j" Mar 12 21:24:46.355796 master-0 kubenswrapper[31456]: I0312 21:24:46.354702 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c2247af-3efc-43dd-b06b-4ee98d3073c4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-ggf7j\" (UID: \"4c2247af-3efc-43dd-b06b-4ee98d3073c4\") " pod="openstack/ovn-controller-metrics-ggf7j" Mar 12 21:24:46.355796 master-0 kubenswrapper[31456]: I0312 21:24:46.354732 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4c2247af-3efc-43dd-b06b-4ee98d3073c4-combined-ca-bundle\") pod \"ovn-controller-metrics-ggf7j\" (UID: \"4c2247af-3efc-43dd-b06b-4ee98d3073c4\") " pod="openstack/ovn-controller-metrics-ggf7j" Mar 12 21:24:46.355796 master-0 kubenswrapper[31456]: I0312 21:24:46.354848 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/4c2247af-3efc-43dd-b06b-4ee98d3073c4-ovs-rundir\") pod \"ovn-controller-metrics-ggf7j\" (UID: \"4c2247af-3efc-43dd-b06b-4ee98d3073c4\") " pod="openstack/ovn-controller-metrics-ggf7j" Mar 12 21:24:46.355796 master-0 kubenswrapper[31456]: I0312 21:24:46.355051 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/4c2247af-3efc-43dd-b06b-4ee98d3073c4-ovn-rundir\") pod \"ovn-controller-metrics-ggf7j\" (UID: \"4c2247af-3efc-43dd-b06b-4ee98d3073c4\") " pod="openstack/ovn-controller-metrics-ggf7j" Mar 12 21:24:46.355796 master-0 kubenswrapper[31456]: I0312 21:24:46.355120 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stwf4\" (UniqueName: \"kubernetes.io/projected/4c2247af-3efc-43dd-b06b-4ee98d3073c4-kube-api-access-stwf4\") pod \"ovn-controller-metrics-ggf7j\" (UID: \"4c2247af-3efc-43dd-b06b-4ee98d3073c4\") " pod="openstack/ovn-controller-metrics-ggf7j" Mar 12 21:24:46.355796 master-0 kubenswrapper[31456]: I0312 21:24:46.355126 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/4c2247af-3efc-43dd-b06b-4ee98d3073c4-ovs-rundir\") pod \"ovn-controller-metrics-ggf7j\" (UID: \"4c2247af-3efc-43dd-b06b-4ee98d3073c4\") " pod="openstack/ovn-controller-metrics-ggf7j" Mar 12 21:24:46.355796 master-0 kubenswrapper[31456]: I0312 21:24:46.355320 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/host-path/4c2247af-3efc-43dd-b06b-4ee98d3073c4-ovn-rundir\") pod \"ovn-controller-metrics-ggf7j\" (UID: \"4c2247af-3efc-43dd-b06b-4ee98d3073c4\") " pod="openstack/ovn-controller-metrics-ggf7j" Mar 12 21:24:46.358064 master-0 kubenswrapper[31456]: I0312 21:24:46.358013 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c2247af-3efc-43dd-b06b-4ee98d3073c4-config\") pod \"ovn-controller-metrics-ggf7j\" (UID: \"4c2247af-3efc-43dd-b06b-4ee98d3073c4\") " pod="openstack/ovn-controller-metrics-ggf7j" Mar 12 21:24:46.359242 master-0 kubenswrapper[31456]: I0312 21:24:46.359180 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c2247af-3efc-43dd-b06b-4ee98d3073c4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-ggf7j\" (UID: \"4c2247af-3efc-43dd-b06b-4ee98d3073c4\") " pod="openstack/ovn-controller-metrics-ggf7j" Mar 12 21:24:46.362207 master-0 kubenswrapper[31456]: I0312 21:24:46.362170 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c2247af-3efc-43dd-b06b-4ee98d3073c4-combined-ca-bundle\") pod \"ovn-controller-metrics-ggf7j\" (UID: \"4c2247af-3efc-43dd-b06b-4ee98d3073c4\") " pod="openstack/ovn-controller-metrics-ggf7j" Mar 12 21:24:46.401342 master-0 kubenswrapper[31456]: I0312 21:24:46.401268 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rdl65" event={"ID":"87ceb3d2-d1d8-42f6-9867-b5450da8a9f7","Type":"ContainerStarted","Data":"10e3e9f47aa45b4d97d19d474bfead036a19150e0cc8be8f3ec86fd2fd5aa4b4"} Mar 12 21:24:46.402672 master-0 kubenswrapper[31456]: I0312 21:24:46.402633 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-rdl65" Mar 12 21:24:46.402672 master-0 kubenswrapper[31456]: I0312 21:24:46.402665 31456 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-rdl65" Mar 12 21:24:46.466980 master-0 kubenswrapper[31456]: I0312 21:24:46.466917 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stwf4\" (UniqueName: \"kubernetes.io/projected/4c2247af-3efc-43dd-b06b-4ee98d3073c4-kube-api-access-stwf4\") pod \"ovn-controller-metrics-ggf7j\" (UID: \"4c2247af-3efc-43dd-b06b-4ee98d3073c4\") " pod="openstack/ovn-controller-metrics-ggf7j" Mar 12 21:24:46.519198 master-0 kubenswrapper[31456]: I0312 21:24:46.519061 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-ggf7j" Mar 12 21:24:46.580184 master-0 kubenswrapper[31456]: I0312 21:24:46.578425 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-rdl65" podStartSLOduration=16.188400846 podStartE2EDuration="24.578400618s" podCreationTimestamp="2026-03-12 21:24:22 +0000 UTC" firstStartedPulling="2026-03-12 21:24:32.626190173 +0000 UTC m=+933.700795501" lastFinishedPulling="2026-03-12 21:24:41.016189945 +0000 UTC m=+942.090795273" observedRunningTime="2026-03-12 21:24:46.562933663 +0000 UTC m=+947.637538991" watchObservedRunningTime="2026-03-12 21:24:46.578400618 +0000 UTC m=+947.653005966" Mar 12 21:24:46.897618 master-0 kubenswrapper[31456]: I0312 21:24:46.897569 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Mar 12 21:24:47.313224 master-0 kubenswrapper[31456]: I0312 21:24:47.312629 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79d6ccc4b7-9444l"] Mar 12 21:24:47.314776 master-0 kubenswrapper[31456]: I0312 21:24:47.314750 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79d6ccc4b7-9444l" Mar 12 21:24:47.318939 master-0 kubenswrapper[31456]: I0312 21:24:47.318304 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Mar 12 21:24:47.413719 master-0 kubenswrapper[31456]: I0312 21:24:47.413617 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79d6ccc4b7-9444l"] Mar 12 21:24:47.459078 master-0 kubenswrapper[31456]: I0312 21:24:47.459027 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rbvk\" (UniqueName: \"kubernetes.io/projected/d31af2fe-bfcd-46ad-b38a-3bc62f43c600-kube-api-access-9rbvk\") pod \"dnsmasq-dns-79d6ccc4b7-9444l\" (UID: \"d31af2fe-bfcd-46ad-b38a-3bc62f43c600\") " pod="openstack/dnsmasq-dns-79d6ccc4b7-9444l" Mar 12 21:24:47.459309 master-0 kubenswrapper[31456]: I0312 21:24:47.459151 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d31af2fe-bfcd-46ad-b38a-3bc62f43c600-config\") pod \"dnsmasq-dns-79d6ccc4b7-9444l\" (UID: \"d31af2fe-bfcd-46ad-b38a-3bc62f43c600\") " pod="openstack/dnsmasq-dns-79d6ccc4b7-9444l" Mar 12 21:24:47.459309 master-0 kubenswrapper[31456]: I0312 21:24:47.459230 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d31af2fe-bfcd-46ad-b38a-3bc62f43c600-dns-svc\") pod \"dnsmasq-dns-79d6ccc4b7-9444l\" (UID: \"d31af2fe-bfcd-46ad-b38a-3bc62f43c600\") " pod="openstack/dnsmasq-dns-79d6ccc4b7-9444l" Mar 12 21:24:47.459309 master-0 kubenswrapper[31456]: I0312 21:24:47.459290 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d31af2fe-bfcd-46ad-b38a-3bc62f43c600-ovsdbserver-nb\") pod 
\"dnsmasq-dns-79d6ccc4b7-9444l\" (UID: \"d31af2fe-bfcd-46ad-b38a-3bc62f43c600\") " pod="openstack/dnsmasq-dns-79d6ccc4b7-9444l" Mar 12 21:24:47.568971 master-0 kubenswrapper[31456]: I0312 21:24:47.566415 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d31af2fe-bfcd-46ad-b38a-3bc62f43c600-dns-svc\") pod \"dnsmasq-dns-79d6ccc4b7-9444l\" (UID: \"d31af2fe-bfcd-46ad-b38a-3bc62f43c600\") " pod="openstack/dnsmasq-dns-79d6ccc4b7-9444l" Mar 12 21:24:47.568971 master-0 kubenswrapper[31456]: I0312 21:24:47.566505 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d31af2fe-bfcd-46ad-b38a-3bc62f43c600-ovsdbserver-nb\") pod \"dnsmasq-dns-79d6ccc4b7-9444l\" (UID: \"d31af2fe-bfcd-46ad-b38a-3bc62f43c600\") " pod="openstack/dnsmasq-dns-79d6ccc4b7-9444l" Mar 12 21:24:47.568971 master-0 kubenswrapper[31456]: I0312 21:24:47.566646 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rbvk\" (UniqueName: \"kubernetes.io/projected/d31af2fe-bfcd-46ad-b38a-3bc62f43c600-kube-api-access-9rbvk\") pod \"dnsmasq-dns-79d6ccc4b7-9444l\" (UID: \"d31af2fe-bfcd-46ad-b38a-3bc62f43c600\") " pod="openstack/dnsmasq-dns-79d6ccc4b7-9444l" Mar 12 21:24:47.568971 master-0 kubenswrapper[31456]: I0312 21:24:47.567530 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d31af2fe-bfcd-46ad-b38a-3bc62f43c600-dns-svc\") pod \"dnsmasq-dns-79d6ccc4b7-9444l\" (UID: \"d31af2fe-bfcd-46ad-b38a-3bc62f43c600\") " pod="openstack/dnsmasq-dns-79d6ccc4b7-9444l" Mar 12 21:24:47.568971 master-0 kubenswrapper[31456]: I0312 21:24:47.567755 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d31af2fe-bfcd-46ad-b38a-3bc62f43c600-config\") pod 
\"dnsmasq-dns-79d6ccc4b7-9444l\" (UID: \"d31af2fe-bfcd-46ad-b38a-3bc62f43c600\") " pod="openstack/dnsmasq-dns-79d6ccc4b7-9444l" Mar 12 21:24:47.568971 master-0 kubenswrapper[31456]: I0312 21:24:47.568610 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d31af2fe-bfcd-46ad-b38a-3bc62f43c600-config\") pod \"dnsmasq-dns-79d6ccc4b7-9444l\" (UID: \"d31af2fe-bfcd-46ad-b38a-3bc62f43c600\") " pod="openstack/dnsmasq-dns-79d6ccc4b7-9444l" Mar 12 21:24:47.569529 master-0 kubenswrapper[31456]: I0312 21:24:47.569466 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d31af2fe-bfcd-46ad-b38a-3bc62f43c600-ovsdbserver-nb\") pod \"dnsmasq-dns-79d6ccc4b7-9444l\" (UID: \"d31af2fe-bfcd-46ad-b38a-3bc62f43c600\") " pod="openstack/dnsmasq-dns-79d6ccc4b7-9444l" Mar 12 21:24:47.878947 master-0 kubenswrapper[31456]: I0312 21:24:47.878839 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rbvk\" (UniqueName: \"kubernetes.io/projected/d31af2fe-bfcd-46ad-b38a-3bc62f43c600-kube-api-access-9rbvk\") pod \"dnsmasq-dns-79d6ccc4b7-9444l\" (UID: \"d31af2fe-bfcd-46ad-b38a-3bc62f43c600\") " pod="openstack/dnsmasq-dns-79d6ccc4b7-9444l" Mar 12 21:24:47.950878 master-0 kubenswrapper[31456]: I0312 21:24:47.950730 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79d6ccc4b7-9444l" Mar 12 21:24:48.809944 master-0 kubenswrapper[31456]: I0312 21:24:48.808687 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79d6ccc4b7-9444l"] Mar 12 21:24:49.278778 master-0 kubenswrapper[31456]: I0312 21:24:49.278701 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76f498f559-4zjpr"] Mar 12 21:24:49.281757 master-0 kubenswrapper[31456]: I0312 21:24:49.281338 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76f498f559-4zjpr" Mar 12 21:24:49.288245 master-0 kubenswrapper[31456]: I0312 21:24:49.288189 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Mar 12 21:24:49.407984 master-0 kubenswrapper[31456]: I0312 21:24:49.406819 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b58811ef-40fc-4ced-a940-d236f5ef5677-ovsdbserver-sb\") pod \"dnsmasq-dns-76f498f559-4zjpr\" (UID: \"b58811ef-40fc-4ced-a940-d236f5ef5677\") " pod="openstack/dnsmasq-dns-76f498f559-4zjpr" Mar 12 21:24:49.407984 master-0 kubenswrapper[31456]: I0312 21:24:49.406895 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b58811ef-40fc-4ced-a940-d236f5ef5677-config\") pod \"dnsmasq-dns-76f498f559-4zjpr\" (UID: \"b58811ef-40fc-4ced-a940-d236f5ef5677\") " pod="openstack/dnsmasq-dns-76f498f559-4zjpr" Mar 12 21:24:49.407984 master-0 kubenswrapper[31456]: I0312 21:24:49.407057 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b58811ef-40fc-4ced-a940-d236f5ef5677-ovsdbserver-nb\") pod \"dnsmasq-dns-76f498f559-4zjpr\" (UID: \"b58811ef-40fc-4ced-a940-d236f5ef5677\") " pod="openstack/dnsmasq-dns-76f498f559-4zjpr" Mar 12 21:24:49.407984 master-0 kubenswrapper[31456]: I0312 21:24:49.407186 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfcvj\" (UniqueName: \"kubernetes.io/projected/b58811ef-40fc-4ced-a940-d236f5ef5677-kube-api-access-lfcvj\") pod \"dnsmasq-dns-76f498f559-4zjpr\" (UID: \"b58811ef-40fc-4ced-a940-d236f5ef5677\") " pod="openstack/dnsmasq-dns-76f498f559-4zjpr" Mar 12 21:24:49.407984 master-0 kubenswrapper[31456]: I0312 
21:24:49.407206 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b58811ef-40fc-4ced-a940-d236f5ef5677-dns-svc\") pod \"dnsmasq-dns-76f498f559-4zjpr\" (UID: \"b58811ef-40fc-4ced-a940-d236f5ef5677\") " pod="openstack/dnsmasq-dns-76f498f559-4zjpr" Mar 12 21:24:49.468406 master-0 kubenswrapper[31456]: I0312 21:24:49.468344 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76f498f559-4zjpr"] Mar 12 21:24:49.485852 master-0 kubenswrapper[31456]: I0312 21:24:49.483334 31456 generic.go:334] "Generic (PLEG): container finished" podID="4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d" containerID="8c56ab034e30af7548994f0794502fec3ec841bbf2ffe950f697c6ee87bd706e" exitCode=0 Mar 12 21:24:49.485852 master-0 kubenswrapper[31456]: I0312 21:24:49.483401 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d","Type":"ContainerDied","Data":"8c56ab034e30af7548994f0794502fec3ec841bbf2ffe950f697c6ee87bd706e"} Mar 12 21:24:49.491569 master-0 kubenswrapper[31456]: I0312 21:24:49.491464 31456 generic.go:334] "Generic (PLEG): container finished" podID="5837dd6c-30f0-4736-a8de-2ddb74041d5e" containerID="1e4e8e0b0773b433c3ab064133499e7bbcf01b030320abe3f2692d4724ba573f" exitCode=0 Mar 12 21:24:49.491569 master-0 kubenswrapper[31456]: I0312 21:24:49.491508 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5837dd6c-30f0-4736-a8de-2ddb74041d5e","Type":"ContainerDied","Data":"1e4e8e0b0773b433c3ab064133499e7bbcf01b030320abe3f2692d4724ba573f"} Mar 12 21:24:49.510228 master-0 kubenswrapper[31456]: I0312 21:24:49.510150 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b58811ef-40fc-4ced-a940-d236f5ef5677-ovsdbserver-nb\") pod 
\"dnsmasq-dns-76f498f559-4zjpr\" (UID: \"b58811ef-40fc-4ced-a940-d236f5ef5677\") " pod="openstack/dnsmasq-dns-76f498f559-4zjpr" Mar 12 21:24:49.510410 master-0 kubenswrapper[31456]: I0312 21:24:49.510291 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfcvj\" (UniqueName: \"kubernetes.io/projected/b58811ef-40fc-4ced-a940-d236f5ef5677-kube-api-access-lfcvj\") pod \"dnsmasq-dns-76f498f559-4zjpr\" (UID: \"b58811ef-40fc-4ced-a940-d236f5ef5677\") " pod="openstack/dnsmasq-dns-76f498f559-4zjpr" Mar 12 21:24:49.510410 master-0 kubenswrapper[31456]: I0312 21:24:49.510324 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b58811ef-40fc-4ced-a940-d236f5ef5677-dns-svc\") pod \"dnsmasq-dns-76f498f559-4zjpr\" (UID: \"b58811ef-40fc-4ced-a940-d236f5ef5677\") " pod="openstack/dnsmasq-dns-76f498f559-4zjpr" Mar 12 21:24:49.510410 master-0 kubenswrapper[31456]: I0312 21:24:49.510390 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b58811ef-40fc-4ced-a940-d236f5ef5677-ovsdbserver-sb\") pod \"dnsmasq-dns-76f498f559-4zjpr\" (UID: \"b58811ef-40fc-4ced-a940-d236f5ef5677\") " pod="openstack/dnsmasq-dns-76f498f559-4zjpr" Mar 12 21:24:49.510518 master-0 kubenswrapper[31456]: I0312 21:24:49.510424 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b58811ef-40fc-4ced-a940-d236f5ef5677-config\") pod \"dnsmasq-dns-76f498f559-4zjpr\" (UID: \"b58811ef-40fc-4ced-a940-d236f5ef5677\") " pod="openstack/dnsmasq-dns-76f498f559-4zjpr" Mar 12 21:24:49.511676 master-0 kubenswrapper[31456]: I0312 21:24:49.511632 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b58811ef-40fc-4ced-a940-d236f5ef5677-config\") pod 
\"dnsmasq-dns-76f498f559-4zjpr\" (UID: \"b58811ef-40fc-4ced-a940-d236f5ef5677\") " pod="openstack/dnsmasq-dns-76f498f559-4zjpr" Mar 12 21:24:49.512223 master-0 kubenswrapper[31456]: I0312 21:24:49.512188 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b58811ef-40fc-4ced-a940-d236f5ef5677-ovsdbserver-nb\") pod \"dnsmasq-dns-76f498f559-4zjpr\" (UID: \"b58811ef-40fc-4ced-a940-d236f5ef5677\") " pod="openstack/dnsmasq-dns-76f498f559-4zjpr" Mar 12 21:24:49.512513 master-0 kubenswrapper[31456]: I0312 21:24:49.512475 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b58811ef-40fc-4ced-a940-d236f5ef5677-dns-svc\") pod \"dnsmasq-dns-76f498f559-4zjpr\" (UID: \"b58811ef-40fc-4ced-a940-d236f5ef5677\") " pod="openstack/dnsmasq-dns-76f498f559-4zjpr" Mar 12 21:24:49.515924 master-0 kubenswrapper[31456]: I0312 21:24:49.515715 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b58811ef-40fc-4ced-a940-d236f5ef5677-ovsdbserver-sb\") pod \"dnsmasq-dns-76f498f559-4zjpr\" (UID: \"b58811ef-40fc-4ced-a940-d236f5ef5677\") " pod="openstack/dnsmasq-dns-76f498f559-4zjpr" Mar 12 21:24:49.664578 master-0 kubenswrapper[31456]: I0312 21:24:49.664525 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfcvj\" (UniqueName: \"kubernetes.io/projected/b58811ef-40fc-4ced-a940-d236f5ef5677-kube-api-access-lfcvj\") pod \"dnsmasq-dns-76f498f559-4zjpr\" (UID: \"b58811ef-40fc-4ced-a940-d236f5ef5677\") " pod="openstack/dnsmasq-dns-76f498f559-4zjpr" Mar 12 21:24:49.953732 master-0 kubenswrapper[31456]: I0312 21:24:49.953681 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76f498f559-4zjpr" Mar 12 21:24:50.415877 master-0 kubenswrapper[31456]: I0312 21:24:50.415770 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-ggf7j"] Mar 12 21:24:50.430896 master-0 kubenswrapper[31456]: I0312 21:24:50.430718 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79d6ccc4b7-9444l"] Mar 12 21:24:50.515520 master-0 kubenswrapper[31456]: I0312 21:24:50.513861 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5837dd6c-30f0-4736-a8de-2ddb74041d5e","Type":"ContainerStarted","Data":"a33b89e1fdacf9e4fd6925b00886e1fed08edb976809049b2af774e850e8050b"} Mar 12 21:24:50.520989 master-0 kubenswrapper[31456]: I0312 21:24:50.519666 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4c43c65e-4b3a-4a3c-b0bd-b3f3f858469d","Type":"ContainerStarted","Data":"6eab0ffc81556b8c39ed28b1f83145724b966d78343151000ca25b268af649a1"} Mar 12 21:24:50.710358 master-0 kubenswrapper[31456]: W0312 21:24:50.709239 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd31af2fe_bfcd_46ad_b38a_3bc62f43c600.slice/crio-b9658413c566b45ea6d7ea6d3e0ab816ef7ea90aacc6c7d2b53258e8ff8607cd WatchSource:0}: Error finding container b9658413c566b45ea6d7ea6d3e0ab816ef7ea90aacc6c7d2b53258e8ff8607cd: Status 404 returned error can't find the container with id b9658413c566b45ea6d7ea6d3e0ab816ef7ea90aacc6c7d2b53258e8ff8607cd Mar 12 21:24:50.710358 master-0 kubenswrapper[31456]: W0312 21:24:50.709898 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c2247af_3efc_43dd_b06b_4ee98d3073c4.slice/crio-89b1f1a0daa48c7ee6cf08ca80cc45e88a9776dd675aa4a18e8bc13d31f85ecf WatchSource:0}: Error finding container 
89b1f1a0daa48c7ee6cf08ca80cc45e88a9776dd675aa4a18e8bc13d31f85ecf: Status 404 returned error can't find the container with id 89b1f1a0daa48c7ee6cf08ca80cc45e88a9776dd675aa4a18e8bc13d31f85ecf Mar 12 21:24:50.737559 master-0 kubenswrapper[31456]: I0312 21:24:50.737087 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76f498f559-4zjpr"] Mar 12 21:24:50.860925 master-0 kubenswrapper[31456]: I0312 21:24:50.842026 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=26.32628795 podStartE2EDuration="35.842005849s" podCreationTimestamp="2026-03-12 21:24:15 +0000 UTC" firstStartedPulling="2026-03-12 21:24:31.496995583 +0000 UTC m=+932.571600911" lastFinishedPulling="2026-03-12 21:24:41.012713492 +0000 UTC m=+942.087318810" observedRunningTime="2026-03-12 21:24:50.824695411 +0000 UTC m=+951.899300739" watchObservedRunningTime="2026-03-12 21:24:50.842005849 +0000 UTC m=+951.916611227" Mar 12 21:24:50.969827 master-0 kubenswrapper[31456]: I0312 21:24:50.963913 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=28.098206461 podStartE2EDuration="37.963887459s" podCreationTimestamp="2026-03-12 21:24:13 +0000 UTC" firstStartedPulling="2026-03-12 21:24:31.202189238 +0000 UTC m=+932.276794576" lastFinishedPulling="2026-03-12 21:24:41.067870246 +0000 UTC m=+942.142475574" observedRunningTime="2026-03-12 21:24:50.893943466 +0000 UTC m=+951.968548804" watchObservedRunningTime="2026-03-12 21:24:50.963887459 +0000 UTC m=+952.038492787" Mar 12 21:24:50.995250 master-0 kubenswrapper[31456]: I0312 21:24:50.990740 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76f498f559-4zjpr"] Mar 12 21:24:51.029697 master-0 kubenswrapper[31456]: I0312 21:24:51.029636 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf8b865dc-2xtgl"] Mar 12 
21:24:51.033453 master-0 kubenswrapper[31456]: I0312 21:24:51.032167 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf8b865dc-2xtgl" Mar 12 21:24:51.087131 master-0 kubenswrapper[31456]: I0312 21:24:51.083883 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf8b865dc-2xtgl"] Mar 12 21:24:51.087131 master-0 kubenswrapper[31456]: I0312 21:24:51.085083 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fntvg\" (UniqueName: \"kubernetes.io/projected/9353def4-ea82-4589-9503-c32939b3ff21-kube-api-access-fntvg\") pod \"dnsmasq-dns-5bf8b865dc-2xtgl\" (UID: \"9353def4-ea82-4589-9503-c32939b3ff21\") " pod="openstack/dnsmasq-dns-5bf8b865dc-2xtgl" Mar 12 21:24:51.087131 master-0 kubenswrapper[31456]: I0312 21:24:51.085156 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9353def4-ea82-4589-9503-c32939b3ff21-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf8b865dc-2xtgl\" (UID: \"9353def4-ea82-4589-9503-c32939b3ff21\") " pod="openstack/dnsmasq-dns-5bf8b865dc-2xtgl" Mar 12 21:24:51.087131 master-0 kubenswrapper[31456]: I0312 21:24:51.085227 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9353def4-ea82-4589-9503-c32939b3ff21-ovsdbserver-sb\") pod \"dnsmasq-dns-5bf8b865dc-2xtgl\" (UID: \"9353def4-ea82-4589-9503-c32939b3ff21\") " pod="openstack/dnsmasq-dns-5bf8b865dc-2xtgl" Mar 12 21:24:51.087131 master-0 kubenswrapper[31456]: I0312 21:24:51.085253 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9353def4-ea82-4589-9503-c32939b3ff21-dns-svc\") pod \"dnsmasq-dns-5bf8b865dc-2xtgl\" (UID: 
\"9353def4-ea82-4589-9503-c32939b3ff21\") " pod="openstack/dnsmasq-dns-5bf8b865dc-2xtgl" Mar 12 21:24:51.087131 master-0 kubenswrapper[31456]: I0312 21:24:51.085285 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9353def4-ea82-4589-9503-c32939b3ff21-config\") pod \"dnsmasq-dns-5bf8b865dc-2xtgl\" (UID: \"9353def4-ea82-4589-9503-c32939b3ff21\") " pod="openstack/dnsmasq-dns-5bf8b865dc-2xtgl" Mar 12 21:24:51.189961 master-0 kubenswrapper[31456]: I0312 21:24:51.189741 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fntvg\" (UniqueName: \"kubernetes.io/projected/9353def4-ea82-4589-9503-c32939b3ff21-kube-api-access-fntvg\") pod \"dnsmasq-dns-5bf8b865dc-2xtgl\" (UID: \"9353def4-ea82-4589-9503-c32939b3ff21\") " pod="openstack/dnsmasq-dns-5bf8b865dc-2xtgl" Mar 12 21:24:51.189961 master-0 kubenswrapper[31456]: I0312 21:24:51.189843 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9353def4-ea82-4589-9503-c32939b3ff21-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf8b865dc-2xtgl\" (UID: \"9353def4-ea82-4589-9503-c32939b3ff21\") " pod="openstack/dnsmasq-dns-5bf8b865dc-2xtgl" Mar 12 21:24:51.191865 master-0 kubenswrapper[31456]: I0312 21:24:51.190158 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9353def4-ea82-4589-9503-c32939b3ff21-ovsdbserver-sb\") pod \"dnsmasq-dns-5bf8b865dc-2xtgl\" (UID: \"9353def4-ea82-4589-9503-c32939b3ff21\") " pod="openstack/dnsmasq-dns-5bf8b865dc-2xtgl" Mar 12 21:24:51.191865 master-0 kubenswrapper[31456]: I0312 21:24:51.190275 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9353def4-ea82-4589-9503-c32939b3ff21-dns-svc\") pod 
\"dnsmasq-dns-5bf8b865dc-2xtgl\" (UID: \"9353def4-ea82-4589-9503-c32939b3ff21\") " pod="openstack/dnsmasq-dns-5bf8b865dc-2xtgl" Mar 12 21:24:51.191865 master-0 kubenswrapper[31456]: I0312 21:24:51.190383 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9353def4-ea82-4589-9503-c32939b3ff21-config\") pod \"dnsmasq-dns-5bf8b865dc-2xtgl\" (UID: \"9353def4-ea82-4589-9503-c32939b3ff21\") " pod="openstack/dnsmasq-dns-5bf8b865dc-2xtgl" Mar 12 21:24:51.191865 master-0 kubenswrapper[31456]: I0312 21:24:51.191418 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9353def4-ea82-4589-9503-c32939b3ff21-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf8b865dc-2xtgl\" (UID: \"9353def4-ea82-4589-9503-c32939b3ff21\") " pod="openstack/dnsmasq-dns-5bf8b865dc-2xtgl" Mar 12 21:24:51.191865 master-0 kubenswrapper[31456]: I0312 21:24:51.191458 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9353def4-ea82-4589-9503-c32939b3ff21-ovsdbserver-sb\") pod \"dnsmasq-dns-5bf8b865dc-2xtgl\" (UID: \"9353def4-ea82-4589-9503-c32939b3ff21\") " pod="openstack/dnsmasq-dns-5bf8b865dc-2xtgl" Mar 12 21:24:51.191865 master-0 kubenswrapper[31456]: I0312 21:24:51.191762 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9353def4-ea82-4589-9503-c32939b3ff21-dns-svc\") pod \"dnsmasq-dns-5bf8b865dc-2xtgl\" (UID: \"9353def4-ea82-4589-9503-c32939b3ff21\") " pod="openstack/dnsmasq-dns-5bf8b865dc-2xtgl" Mar 12 21:24:51.193976 master-0 kubenswrapper[31456]: I0312 21:24:51.193623 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9353def4-ea82-4589-9503-c32939b3ff21-config\") pod \"dnsmasq-dns-5bf8b865dc-2xtgl\" (UID: 
\"9353def4-ea82-4589-9503-c32939b3ff21\") " pod="openstack/dnsmasq-dns-5bf8b865dc-2xtgl" Mar 12 21:24:51.215432 master-0 kubenswrapper[31456]: I0312 21:24:51.215197 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fntvg\" (UniqueName: \"kubernetes.io/projected/9353def4-ea82-4589-9503-c32939b3ff21-kube-api-access-fntvg\") pod \"dnsmasq-dns-5bf8b865dc-2xtgl\" (UID: \"9353def4-ea82-4589-9503-c32939b3ff21\") " pod="openstack/dnsmasq-dns-5bf8b865dc-2xtgl" Mar 12 21:24:51.291860 master-0 kubenswrapper[31456]: I0312 21:24:51.291799 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 12 21:24:51.292023 master-0 kubenswrapper[31456]: I0312 21:24:51.291868 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 12 21:24:51.427577 master-0 kubenswrapper[31456]: I0312 21:24:51.427505 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf8b865dc-2xtgl" Mar 12 21:24:51.531954 master-0 kubenswrapper[31456]: I0312 21:24:51.529962 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-ggf7j" event={"ID":"4c2247af-3efc-43dd-b06b-4ee98d3073c4","Type":"ContainerStarted","Data":"89b1f1a0daa48c7ee6cf08ca80cc45e88a9776dd675aa4a18e8bc13d31f85ecf"} Mar 12 21:24:51.531954 master-0 kubenswrapper[31456]: I0312 21:24:51.531501 31456 generic.go:334] "Generic (PLEG): container finished" podID="b58811ef-40fc-4ced-a940-d236f5ef5677" containerID="9dbf7ec7c0b8ab8bb359bd8c329e6a1db56638f092858192b8ad9710a15a9123" exitCode=0 Mar 12 21:24:51.531954 master-0 kubenswrapper[31456]: I0312 21:24:51.531555 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76f498f559-4zjpr" event={"ID":"b58811ef-40fc-4ced-a940-d236f5ef5677","Type":"ContainerDied","Data":"9dbf7ec7c0b8ab8bb359bd8c329e6a1db56638f092858192b8ad9710a15a9123"} Mar 12 21:24:51.531954 master-0 kubenswrapper[31456]: I0312 21:24:51.531588 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76f498f559-4zjpr" event={"ID":"b58811ef-40fc-4ced-a940-d236f5ef5677","Type":"ContainerStarted","Data":"c8cd7934efe010b98670fdb02598e8c5c923eb012d17330174db61b707ff5ece"} Mar 12 21:24:51.553882 master-0 kubenswrapper[31456]: I0312 21:24:51.553632 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"565a1656-5522-446c-95c9-b5cf8218dfef","Type":"ContainerStarted","Data":"4bab492dda0720389735c2f6bcf5197654da001427d14a538871b7ca607b0b5b"} Mar 12 21:24:51.564576 master-0 kubenswrapper[31456]: I0312 21:24:51.564528 31456 generic.go:334] "Generic (PLEG): container finished" podID="d31af2fe-bfcd-46ad-b38a-3bc62f43c600" containerID="ef7816988f6ddc4e2793023c8fc8f9270ed207940fec226af0b6cb13f8335e91" exitCode=0 Mar 12 21:24:51.564778 master-0 kubenswrapper[31456]: I0312 21:24:51.564591 31456 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79d6ccc4b7-9444l" event={"ID":"d31af2fe-bfcd-46ad-b38a-3bc62f43c600","Type":"ContainerDied","Data":"ef7816988f6ddc4e2793023c8fc8f9270ed207940fec226af0b6cb13f8335e91"} Mar 12 21:24:51.564778 master-0 kubenswrapper[31456]: I0312 21:24:51.564614 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79d6ccc4b7-9444l" event={"ID":"d31af2fe-bfcd-46ad-b38a-3bc62f43c600","Type":"ContainerStarted","Data":"b9658413c566b45ea6d7ea6d3e0ab816ef7ea90aacc6c7d2b53258e8ff8607cd"} Mar 12 21:24:51.588755 master-0 kubenswrapper[31456]: I0312 21:24:51.575519 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b478fbf3-ea22-4c10-b254-6423457cc8dd","Type":"ContainerStarted","Data":"4e673ad4f4364328cbda3f09faec102587448f83c4e184f007a04b2d72a95af5"} Mar 12 21:24:51.706565 master-0 kubenswrapper[31456]: I0312 21:24:51.706480 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=11.794117467 podStartE2EDuration="30.706448892s" podCreationTimestamp="2026-03-12 21:24:21 +0000 UTC" firstStartedPulling="2026-03-12 21:24:31.926433396 +0000 UTC m=+933.001038724" lastFinishedPulling="2026-03-12 21:24:50.838764821 +0000 UTC m=+951.913370149" observedRunningTime="2026-03-12 21:24:51.623109324 +0000 UTC m=+952.697714662" watchObservedRunningTime="2026-03-12 21:24:51.706448892 +0000 UTC m=+952.781054220" Mar 12 21:24:51.725623 master-0 kubenswrapper[31456]: I0312 21:24:51.716421 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=8.792277967 podStartE2EDuration="26.716396843s" podCreationTimestamp="2026-03-12 21:24:25 +0000 UTC" firstStartedPulling="2026-03-12 21:24:32.901186379 +0000 UTC m=+933.975791707" lastFinishedPulling="2026-03-12 21:24:50.825305255 +0000 UTC m=+951.899910583" 
observedRunningTime="2026-03-12 21:24:51.692077613 +0000 UTC m=+952.766682941" watchObservedRunningTime="2026-03-12 21:24:51.716396843 +0000 UTC m=+952.791002171" Mar 12 21:24:52.342436 master-0 kubenswrapper[31456]: I0312 21:24:52.342346 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf8b865dc-2xtgl"] Mar 12 21:24:52.345516 master-0 kubenswrapper[31456]: W0312 21:24:52.345481 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9353def4_ea82_4589_9503_c32939b3ff21.slice/crio-bca8f9297364b9de1b80a0f9240b80111913068278344eb8c8396cde388386a0 WatchSource:0}: Error finding container bca8f9297364b9de1b80a0f9240b80111913068278344eb8c8396cde388386a0: Status 404 returned error can't find the container with id bca8f9297364b9de1b80a0f9240b80111913068278344eb8c8396cde388386a0 Mar 12 21:24:52.349845 master-0 kubenswrapper[31456]: I0312 21:24:52.349183 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Mar 12 21:24:52.408299 master-0 kubenswrapper[31456]: I0312 21:24:52.407953 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Mar 12 21:24:52.521772 master-0 kubenswrapper[31456]: I0312 21:24:52.521736 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76f498f559-4zjpr" Mar 12 21:24:52.531652 master-0 kubenswrapper[31456]: I0312 21:24:52.529966 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79d6ccc4b7-9444l" Mar 12 21:24:52.604982 master-0 kubenswrapper[31456]: I0312 21:24:52.602682 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf8b865dc-2xtgl" event={"ID":"9353def4-ea82-4589-9503-c32939b3ff21","Type":"ContainerStarted","Data":"bca8f9297364b9de1b80a0f9240b80111913068278344eb8c8396cde388386a0"} Mar 12 21:24:52.604982 master-0 kubenswrapper[31456]: I0312 21:24:52.604440 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-ggf7j" event={"ID":"4c2247af-3efc-43dd-b06b-4ee98d3073c4","Type":"ContainerStarted","Data":"113a72d4b5a7ccd5770c1256b3321aa523833661d9c8400aa364a583493768ba"} Mar 12 21:24:52.607852 master-0 kubenswrapper[31456]: I0312 21:24:52.606963 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76f498f559-4zjpr" Mar 12 21:24:52.607852 master-0 kubenswrapper[31456]: I0312 21:24:52.607000 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76f498f559-4zjpr" event={"ID":"b58811ef-40fc-4ced-a940-d236f5ef5677","Type":"ContainerDied","Data":"c8cd7934efe010b98670fdb02598e8c5c923eb012d17330174db61b707ff5ece"} Mar 12 21:24:52.607852 master-0 kubenswrapper[31456]: I0312 21:24:52.607182 31456 scope.go:117] "RemoveContainer" containerID="9dbf7ec7c0b8ab8bb359bd8c329e6a1db56638f092858192b8ad9710a15a9123" Mar 12 21:24:52.610159 master-0 kubenswrapper[31456]: I0312 21:24:52.609272 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79d6ccc4b7-9444l" event={"ID":"d31af2fe-bfcd-46ad-b38a-3bc62f43c600","Type":"ContainerDied","Data":"b9658413c566b45ea6d7ea6d3e0ab816ef7ea90aacc6c7d2b53258e8ff8607cd"} Mar 12 21:24:52.610159 master-0 kubenswrapper[31456]: I0312 21:24:52.609387 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79d6ccc4b7-9444l" Mar 12 21:24:52.612396 master-0 kubenswrapper[31456]: I0312 21:24:52.612327 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Mar 12 21:24:52.637003 master-0 kubenswrapper[31456]: I0312 21:24:52.636730 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-ggf7j" podStartSLOduration=6.636701736 podStartE2EDuration="6.636701736s" podCreationTimestamp="2026-03-12 21:24:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:24:52.623923627 +0000 UTC m=+953.698528955" watchObservedRunningTime="2026-03-12 21:24:52.636701736 +0000 UTC m=+953.711307064" Mar 12 21:24:52.641461 master-0 kubenswrapper[31456]: I0312 21:24:52.641392 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfcvj\" (UniqueName: \"kubernetes.io/projected/b58811ef-40fc-4ced-a940-d236f5ef5677-kube-api-access-lfcvj\") pod \"b58811ef-40fc-4ced-a940-d236f5ef5677\" (UID: \"b58811ef-40fc-4ced-a940-d236f5ef5677\") " Mar 12 21:24:52.641698 master-0 kubenswrapper[31456]: I0312 21:24:52.641683 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d31af2fe-bfcd-46ad-b38a-3bc62f43c600-config\") pod \"d31af2fe-bfcd-46ad-b38a-3bc62f43c600\" (UID: \"d31af2fe-bfcd-46ad-b38a-3bc62f43c600\") " Mar 12 21:24:52.641908 master-0 kubenswrapper[31456]: I0312 21:24:52.641894 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b58811ef-40fc-4ced-a940-d236f5ef5677-ovsdbserver-sb\") pod \"b58811ef-40fc-4ced-a940-d236f5ef5677\" (UID: \"b58811ef-40fc-4ced-a940-d236f5ef5677\") " Mar 12 21:24:52.642025 master-0 kubenswrapper[31456]: I0312 
21:24:52.642013 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rbvk\" (UniqueName: \"kubernetes.io/projected/d31af2fe-bfcd-46ad-b38a-3bc62f43c600-kube-api-access-9rbvk\") pod \"d31af2fe-bfcd-46ad-b38a-3bc62f43c600\" (UID: \"d31af2fe-bfcd-46ad-b38a-3bc62f43c600\") " Mar 12 21:24:52.642132 master-0 kubenswrapper[31456]: I0312 21:24:52.642121 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d31af2fe-bfcd-46ad-b38a-3bc62f43c600-dns-svc\") pod \"d31af2fe-bfcd-46ad-b38a-3bc62f43c600\" (UID: \"d31af2fe-bfcd-46ad-b38a-3bc62f43c600\") " Mar 12 21:24:52.642220 master-0 kubenswrapper[31456]: I0312 21:24:52.642208 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b58811ef-40fc-4ced-a940-d236f5ef5677-config\") pod \"b58811ef-40fc-4ced-a940-d236f5ef5677\" (UID: \"b58811ef-40fc-4ced-a940-d236f5ef5677\") " Mar 12 21:24:52.642357 master-0 kubenswrapper[31456]: I0312 21:24:52.642344 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b58811ef-40fc-4ced-a940-d236f5ef5677-dns-svc\") pod \"b58811ef-40fc-4ced-a940-d236f5ef5677\" (UID: \"b58811ef-40fc-4ced-a940-d236f5ef5677\") " Mar 12 21:24:52.642431 master-0 kubenswrapper[31456]: I0312 21:24:52.642421 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b58811ef-40fc-4ced-a940-d236f5ef5677-ovsdbserver-nb\") pod \"b58811ef-40fc-4ced-a940-d236f5ef5677\" (UID: \"b58811ef-40fc-4ced-a940-d236f5ef5677\") " Mar 12 21:24:52.642532 master-0 kubenswrapper[31456]: I0312 21:24:52.642521 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/d31af2fe-bfcd-46ad-b38a-3bc62f43c600-ovsdbserver-nb\") pod \"d31af2fe-bfcd-46ad-b38a-3bc62f43c600\" (UID: \"d31af2fe-bfcd-46ad-b38a-3bc62f43c600\") " Mar 12 21:24:52.664548 master-0 kubenswrapper[31456]: I0312 21:24:52.662278 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d31af2fe-bfcd-46ad-b38a-3bc62f43c600-kube-api-access-9rbvk" (OuterVolumeSpecName: "kube-api-access-9rbvk") pod "d31af2fe-bfcd-46ad-b38a-3bc62f43c600" (UID: "d31af2fe-bfcd-46ad-b38a-3bc62f43c600"). InnerVolumeSpecName "kube-api-access-9rbvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:24:52.664548 master-0 kubenswrapper[31456]: I0312 21:24:52.662463 31456 scope.go:117] "RemoveContainer" containerID="ef7816988f6ddc4e2793023c8fc8f9270ed207940fec226af0b6cb13f8335e91" Mar 12 21:24:52.667601 master-0 kubenswrapper[31456]: I0312 21:24:52.667546 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d31af2fe-bfcd-46ad-b38a-3bc62f43c600-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d31af2fe-bfcd-46ad-b38a-3bc62f43c600" (UID: "d31af2fe-bfcd-46ad-b38a-3bc62f43c600"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:24:52.667794 master-0 kubenswrapper[31456]: I0312 21:24:52.667761 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b58811ef-40fc-4ced-a940-d236f5ef5677-kube-api-access-lfcvj" (OuterVolumeSpecName: "kube-api-access-lfcvj") pod "b58811ef-40fc-4ced-a940-d236f5ef5677" (UID: "b58811ef-40fc-4ced-a940-d236f5ef5677"). InnerVolumeSpecName "kube-api-access-lfcvj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:24:52.668668 master-0 kubenswrapper[31456]: I0312 21:24:52.668629 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Mar 12 21:24:52.682779 master-0 kubenswrapper[31456]: I0312 21:24:52.682716 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d31af2fe-bfcd-46ad-b38a-3bc62f43c600-config" (OuterVolumeSpecName: "config") pod "d31af2fe-bfcd-46ad-b38a-3bc62f43c600" (UID: "d31af2fe-bfcd-46ad-b38a-3bc62f43c600"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:24:52.685390 master-0 kubenswrapper[31456]: I0312 21:24:52.685346 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b58811ef-40fc-4ced-a940-d236f5ef5677-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b58811ef-40fc-4ced-a940-d236f5ef5677" (UID: "b58811ef-40fc-4ced-a940-d236f5ef5677"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:24:52.690385 master-0 kubenswrapper[31456]: I0312 21:24:52.690295 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b58811ef-40fc-4ced-a940-d236f5ef5677-config" (OuterVolumeSpecName: "config") pod "b58811ef-40fc-4ced-a940-d236f5ef5677" (UID: "b58811ef-40fc-4ced-a940-d236f5ef5677"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:24:52.715621 master-0 kubenswrapper[31456]: I0312 21:24:52.715516 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b58811ef-40fc-4ced-a940-d236f5ef5677-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b58811ef-40fc-4ced-a940-d236f5ef5677" (UID: "b58811ef-40fc-4ced-a940-d236f5ef5677"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:24:52.736472 master-0 kubenswrapper[31456]: I0312 21:24:52.736413 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d31af2fe-bfcd-46ad-b38a-3bc62f43c600-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d31af2fe-bfcd-46ad-b38a-3bc62f43c600" (UID: "d31af2fe-bfcd-46ad-b38a-3bc62f43c600"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:24:52.736784 master-0 kubenswrapper[31456]: I0312 21:24:52.736741 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b58811ef-40fc-4ced-a940-d236f5ef5677-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b58811ef-40fc-4ced-a940-d236f5ef5677" (UID: "b58811ef-40fc-4ced-a940-d236f5ef5677"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:24:52.745528 master-0 kubenswrapper[31456]: I0312 21:24:52.745453 31456 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d31af2fe-bfcd-46ad-b38a-3bc62f43c600-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 12 21:24:52.745528 master-0 kubenswrapper[31456]: I0312 21:24:52.745486 31456 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b58811ef-40fc-4ced-a940-d236f5ef5677-config\") on node \"master-0\" DevicePath \"\"" Mar 12 21:24:52.745528 master-0 kubenswrapper[31456]: I0312 21:24:52.745495 31456 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b58811ef-40fc-4ced-a940-d236f5ef5677-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 12 21:24:52.746026 master-0 kubenswrapper[31456]: I0312 21:24:52.745504 31456 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/b58811ef-40fc-4ced-a940-d236f5ef5677-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 12 21:24:52.746381 master-0 kubenswrapper[31456]: I0312 21:24:52.746087 31456 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d31af2fe-bfcd-46ad-b38a-3bc62f43c600-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 12 21:24:52.746381 master-0 kubenswrapper[31456]: I0312 21:24:52.746103 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfcvj\" (UniqueName: \"kubernetes.io/projected/b58811ef-40fc-4ced-a940-d236f5ef5677-kube-api-access-lfcvj\") on node \"master-0\" DevicePath \"\"" Mar 12 21:24:52.746381 master-0 kubenswrapper[31456]: I0312 21:24:52.746113 31456 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d31af2fe-bfcd-46ad-b38a-3bc62f43c600-config\") on node \"master-0\" DevicePath \"\"" Mar 12 21:24:52.746381 master-0 kubenswrapper[31456]: I0312 21:24:52.746121 31456 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b58811ef-40fc-4ced-a940-d236f5ef5677-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 12 21:24:52.746381 master-0 kubenswrapper[31456]: I0312 21:24:52.746131 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rbvk\" (UniqueName: \"kubernetes.io/projected/d31af2fe-bfcd-46ad-b38a-3bc62f43c600-kube-api-access-9rbvk\") on node \"master-0\" DevicePath \"\"" Mar 12 21:24:52.931487 master-0 kubenswrapper[31456]: I0312 21:24:52.931357 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Mar 12 21:24:52.931820 master-0 kubenswrapper[31456]: E0312 21:24:52.931736 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b58811ef-40fc-4ced-a940-d236f5ef5677" containerName="init" Mar 12 21:24:52.931820 master-0 kubenswrapper[31456]: I0312 
21:24:52.931755 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="b58811ef-40fc-4ced-a940-d236f5ef5677" containerName="init" Mar 12 21:24:52.931820 master-0 kubenswrapper[31456]: E0312 21:24:52.931790 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d31af2fe-bfcd-46ad-b38a-3bc62f43c600" containerName="init" Mar 12 21:24:52.931820 master-0 kubenswrapper[31456]: I0312 21:24:52.931798 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="d31af2fe-bfcd-46ad-b38a-3bc62f43c600" containerName="init" Mar 12 21:24:52.933708 master-0 kubenswrapper[31456]: I0312 21:24:52.931986 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="b58811ef-40fc-4ced-a940-d236f5ef5677" containerName="init" Mar 12 21:24:52.933708 master-0 kubenswrapper[31456]: I0312 21:24:52.932027 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="d31af2fe-bfcd-46ad-b38a-3bc62f43c600" containerName="init" Mar 12 21:24:52.956634 master-0 kubenswrapper[31456]: I0312 21:24:52.946599 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 12 21:24:52.956634 master-0 kubenswrapper[31456]: I0312 21:24:52.952229 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Mar 12 21:24:52.956634 master-0 kubenswrapper[31456]: I0312 21:24:52.954005 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Mar 12 21:24:52.956634 master-0 kubenswrapper[31456]: I0312 21:24:52.954132 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Mar 12 21:24:52.964228 master-0 kubenswrapper[31456]: I0312 21:24:52.964048 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 12 21:24:52.996836 master-0 kubenswrapper[31456]: I0312 21:24:52.992065 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Mar 12 21:24:53.052303 master-0 kubenswrapper[31456]: I0312 21:24:53.045949 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Mar 12 21:24:53.060097 master-0 kubenswrapper[31456]: I0312 21:24:53.060047 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7478f62f-dba4-43cb-9a5b-556b235bb13f-cache\") pod \"swift-storage-0\" (UID: \"7478f62f-dba4-43cb-9a5b-556b235bb13f\") " pod="openstack/swift-storage-0" Mar 12 21:24:53.060217 master-0 kubenswrapper[31456]: I0312 21:24:53.060131 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcxg4\" (UniqueName: \"kubernetes.io/projected/7478f62f-dba4-43cb-9a5b-556b235bb13f-kube-api-access-vcxg4\") pod \"swift-storage-0\" (UID: \"7478f62f-dba4-43cb-9a5b-556b235bb13f\") " pod="openstack/swift-storage-0" Mar 12 21:24:53.060217 master-0 kubenswrapper[31456]: I0312 21:24:53.060173 31456 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7478f62f-dba4-43cb-9a5b-556b235bb13f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"7478f62f-dba4-43cb-9a5b-556b235bb13f\") " pod="openstack/swift-storage-0" Mar 12 21:24:53.060288 master-0 kubenswrapper[31456]: I0312 21:24:53.060217 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7478f62f-dba4-43cb-9a5b-556b235bb13f-etc-swift\") pod \"swift-storage-0\" (UID: \"7478f62f-dba4-43cb-9a5b-556b235bb13f\") " pod="openstack/swift-storage-0" Mar 12 21:24:53.060288 master-0 kubenswrapper[31456]: I0312 21:24:53.060266 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7478f62f-dba4-43cb-9a5b-556b235bb13f-lock\") pod \"swift-storage-0\" (UID: \"7478f62f-dba4-43cb-9a5b-556b235bb13f\") " pod="openstack/swift-storage-0" Mar 12 21:24:53.060359 master-0 kubenswrapper[31456]: I0312 21:24:53.060314 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9d57f5a0-de36-45e3-a7af-bc4130ba70d8\" (UniqueName: \"kubernetes.io/csi/topolvm.io^223859d7-ef07-4fb9-883c-f8a7843443fd\") pod \"swift-storage-0\" (UID: \"7478f62f-dba4-43cb-9a5b-556b235bb13f\") " pod="openstack/swift-storage-0" Mar 12 21:24:53.071843 master-0 kubenswrapper[31456]: I0312 21:24:53.071744 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79d6ccc4b7-9444l"] Mar 12 21:24:53.083556 master-0 kubenswrapper[31456]: I0312 21:24:53.083256 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79d6ccc4b7-9444l"] Mar 12 21:24:53.186911 master-0 kubenswrapper[31456]: I0312 21:24:53.174190 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7478f62f-dba4-43cb-9a5b-556b235bb13f-cache\") pod \"swift-storage-0\" (UID: \"7478f62f-dba4-43cb-9a5b-556b235bb13f\") " pod="openstack/swift-storage-0" Mar 12 21:24:53.186911 master-0 kubenswrapper[31456]: I0312 21:24:53.174297 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcxg4\" (UniqueName: \"kubernetes.io/projected/7478f62f-dba4-43cb-9a5b-556b235bb13f-kube-api-access-vcxg4\") pod \"swift-storage-0\" (UID: \"7478f62f-dba4-43cb-9a5b-556b235bb13f\") " pod="openstack/swift-storage-0" Mar 12 21:24:53.186911 master-0 kubenswrapper[31456]: I0312 21:24:53.174341 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7478f62f-dba4-43cb-9a5b-556b235bb13f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"7478f62f-dba4-43cb-9a5b-556b235bb13f\") " pod="openstack/swift-storage-0" Mar 12 21:24:53.186911 master-0 kubenswrapper[31456]: I0312 21:24:53.174386 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7478f62f-dba4-43cb-9a5b-556b235bb13f-etc-swift\") pod \"swift-storage-0\" (UID: \"7478f62f-dba4-43cb-9a5b-556b235bb13f\") " pod="openstack/swift-storage-0" Mar 12 21:24:53.186911 master-0 kubenswrapper[31456]: I0312 21:24:53.174438 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7478f62f-dba4-43cb-9a5b-556b235bb13f-lock\") pod \"swift-storage-0\" (UID: \"7478f62f-dba4-43cb-9a5b-556b235bb13f\") " pod="openstack/swift-storage-0" Mar 12 21:24:53.186911 master-0 kubenswrapper[31456]: I0312 21:24:53.174486 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9d57f5a0-de36-45e3-a7af-bc4130ba70d8\" (UniqueName: \"kubernetes.io/csi/topolvm.io^223859d7-ef07-4fb9-883c-f8a7843443fd\") pod 
\"swift-storage-0\" (UID: \"7478f62f-dba4-43cb-9a5b-556b235bb13f\") " pod="openstack/swift-storage-0" Mar 12 21:24:53.186911 master-0 kubenswrapper[31456]: I0312 21:24:53.175292 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7478f62f-dba4-43cb-9a5b-556b235bb13f-cache\") pod \"swift-storage-0\" (UID: \"7478f62f-dba4-43cb-9a5b-556b235bb13f\") " pod="openstack/swift-storage-0" Mar 12 21:24:53.186911 master-0 kubenswrapper[31456]: I0312 21:24:53.177297 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7478f62f-dba4-43cb-9a5b-556b235bb13f-lock\") pod \"swift-storage-0\" (UID: \"7478f62f-dba4-43cb-9a5b-556b235bb13f\") " pod="openstack/swift-storage-0" Mar 12 21:24:53.186911 master-0 kubenswrapper[31456]: E0312 21:24:53.177417 31456 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 12 21:24:53.186911 master-0 kubenswrapper[31456]: E0312 21:24:53.177430 31456 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 12 21:24:53.186911 master-0 kubenswrapper[31456]: E0312 21:24:53.177473 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7478f62f-dba4-43cb-9a5b-556b235bb13f-etc-swift podName:7478f62f-dba4-43cb-9a5b-556b235bb13f nodeName:}" failed. No retries permitted until 2026-03-12 21:24:53.677457404 +0000 UTC m=+954.752062732 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7478f62f-dba4-43cb-9a5b-556b235bb13f-etc-swift") pod "swift-storage-0" (UID: "7478f62f-dba4-43cb-9a5b-556b235bb13f") : configmap "swift-ring-files" not found Mar 12 21:24:53.200453 master-0 kubenswrapper[31456]: I0312 21:24:53.197494 31456 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 21:24:53.200453 master-0 kubenswrapper[31456]: I0312 21:24:53.197545 31456 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9d57f5a0-de36-45e3-a7af-bc4130ba70d8\" (UniqueName: \"kubernetes.io/csi/topolvm.io^223859d7-ef07-4fb9-883c-f8a7843443fd\") pod \"swift-storage-0\" (UID: \"7478f62f-dba4-43cb-9a5b-556b235bb13f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/9879e28d8248d11241d8d6f7df0c6ef46a9dc16c5a4355af379fc34469f3a7e6/globalmount\"" pod="openstack/swift-storage-0" Mar 12 21:24:53.208783 master-0 kubenswrapper[31456]: I0312 21:24:53.205708 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7478f62f-dba4-43cb-9a5b-556b235bb13f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"7478f62f-dba4-43cb-9a5b-556b235bb13f\") " pod="openstack/swift-storage-0" Mar 12 21:24:53.210577 master-0 kubenswrapper[31456]: I0312 21:24:53.209993 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcxg4\" (UniqueName: \"kubernetes.io/projected/7478f62f-dba4-43cb-9a5b-556b235bb13f-kube-api-access-vcxg4\") pod \"swift-storage-0\" (UID: \"7478f62f-dba4-43cb-9a5b-556b235bb13f\") " pod="openstack/swift-storage-0" Mar 12 21:24:53.222653 master-0 kubenswrapper[31456]: I0312 21:24:53.222595 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d31af2fe-bfcd-46ad-b38a-3bc62f43c600" 
path="/var/lib/kubelet/pods/d31af2fe-bfcd-46ad-b38a-3bc62f43c600/volumes" Mar 12 21:24:53.223376 master-0 kubenswrapper[31456]: I0312 21:24:53.223342 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76f498f559-4zjpr"] Mar 12 21:24:53.236563 master-0 kubenswrapper[31456]: I0312 21:24:53.235906 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76f498f559-4zjpr"] Mar 12 21:24:53.623954 master-0 kubenswrapper[31456]: I0312 21:24:53.623885 31456 generic.go:334] "Generic (PLEG): container finished" podID="9353def4-ea82-4589-9503-c32939b3ff21" containerID="9786d542ec77ccbe0ae779a57e603460b05761696ca09aa45bb28b4573fa50ea" exitCode=0 Mar 12 21:24:53.624624 master-0 kubenswrapper[31456]: I0312 21:24:53.624078 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf8b865dc-2xtgl" event={"ID":"9353def4-ea82-4589-9503-c32939b3ff21","Type":"ContainerDied","Data":"9786d542ec77ccbe0ae779a57e603460b05761696ca09aa45bb28b4573fa50ea"} Mar 12 21:24:53.624624 master-0 kubenswrapper[31456]: I0312 21:24:53.624220 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Mar 12 21:24:53.688178 master-0 kubenswrapper[31456]: I0312 21:24:53.688125 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7478f62f-dba4-43cb-9a5b-556b235bb13f-etc-swift\") pod \"swift-storage-0\" (UID: \"7478f62f-dba4-43cb-9a5b-556b235bb13f\") " pod="openstack/swift-storage-0" Mar 12 21:24:53.688397 master-0 kubenswrapper[31456]: E0312 21:24:53.688281 31456 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 12 21:24:53.688397 master-0 kubenswrapper[31456]: E0312 21:24:53.688387 31456 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 12 
21:24:53.688494 master-0 kubenswrapper[31456]: E0312 21:24:53.688447 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7478f62f-dba4-43cb-9a5b-556b235bb13f-etc-swift podName:7478f62f-dba4-43cb-9a5b-556b235bb13f nodeName:}" failed. No retries permitted until 2026-03-12 21:24:54.688424721 +0000 UTC m=+955.763030049 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7478f62f-dba4-43cb-9a5b-556b235bb13f-etc-swift") pod "swift-storage-0" (UID: "7478f62f-dba4-43cb-9a5b-556b235bb13f") : configmap "swift-ring-files" not found Mar 12 21:24:53.689603 master-0 kubenswrapper[31456]: I0312 21:24:53.689546 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Mar 12 21:24:53.819658 master-0 kubenswrapper[31456]: I0312 21:24:53.819576 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-bmrkm"] Mar 12 21:24:53.823968 master-0 kubenswrapper[31456]: I0312 21:24:53.823454 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-bmrkm" Mar 12 21:24:53.826829 master-0 kubenswrapper[31456]: I0312 21:24:53.826770 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Mar 12 21:24:53.826963 master-0 kubenswrapper[31456]: I0312 21:24:53.826770 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Mar 12 21:24:53.827904 master-0 kubenswrapper[31456]: I0312 21:24:53.827705 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 12 21:24:53.837787 master-0 kubenswrapper[31456]: I0312 21:24:53.837113 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-bmrkm"] Mar 12 21:24:54.052949 master-0 kubenswrapper[31456]: I0312 21:24:54.052853 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a795afb6-d746-400b-82ef-35cca567821f-etc-swift\") pod \"swift-ring-rebalance-bmrkm\" (UID: \"a795afb6-d746-400b-82ef-35cca567821f\") " pod="openstack/swift-ring-rebalance-bmrkm" Mar 12 21:24:54.052949 master-0 kubenswrapper[31456]: I0312 21:24:54.052946 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2pj9\" (UniqueName: \"kubernetes.io/projected/a795afb6-d746-400b-82ef-35cca567821f-kube-api-access-p2pj9\") pod \"swift-ring-rebalance-bmrkm\" (UID: \"a795afb6-d746-400b-82ef-35cca567821f\") " pod="openstack/swift-ring-rebalance-bmrkm" Mar 12 21:24:54.056101 master-0 kubenswrapper[31456]: I0312 21:24:54.053485 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a795afb6-d746-400b-82ef-35cca567821f-combined-ca-bundle\") pod \"swift-ring-rebalance-bmrkm\" (UID: 
\"a795afb6-d746-400b-82ef-35cca567821f\") " pod="openstack/swift-ring-rebalance-bmrkm" Mar 12 21:24:54.056101 master-0 kubenswrapper[31456]: I0312 21:24:54.053617 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a795afb6-d746-400b-82ef-35cca567821f-dispersionconf\") pod \"swift-ring-rebalance-bmrkm\" (UID: \"a795afb6-d746-400b-82ef-35cca567821f\") " pod="openstack/swift-ring-rebalance-bmrkm" Mar 12 21:24:54.056101 master-0 kubenswrapper[31456]: I0312 21:24:54.053715 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a795afb6-d746-400b-82ef-35cca567821f-scripts\") pod \"swift-ring-rebalance-bmrkm\" (UID: \"a795afb6-d746-400b-82ef-35cca567821f\") " pod="openstack/swift-ring-rebalance-bmrkm" Mar 12 21:24:54.056101 master-0 kubenswrapper[31456]: I0312 21:24:54.053929 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a795afb6-d746-400b-82ef-35cca567821f-swiftconf\") pod \"swift-ring-rebalance-bmrkm\" (UID: \"a795afb6-d746-400b-82ef-35cca567821f\") " pod="openstack/swift-ring-rebalance-bmrkm" Mar 12 21:24:54.056101 master-0 kubenswrapper[31456]: I0312 21:24:54.054062 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a795afb6-d746-400b-82ef-35cca567821f-ring-data-devices\") pod \"swift-ring-rebalance-bmrkm\" (UID: \"a795afb6-d746-400b-82ef-35cca567821f\") " pod="openstack/swift-ring-rebalance-bmrkm" Mar 12 21:24:54.106851 master-0 kubenswrapper[31456]: I0312 21:24:54.102511 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Mar 12 21:24:54.106851 master-0 kubenswrapper[31456]: I0312 21:24:54.106288 31456 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 12 21:24:54.111361 master-0 kubenswrapper[31456]: I0312 21:24:54.111315 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Mar 12 21:24:54.112118 master-0 kubenswrapper[31456]: I0312 21:24:54.111515 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Mar 12 21:24:54.129893 master-0 kubenswrapper[31456]: I0312 21:24:54.120044 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 12 21:24:54.154039 master-0 kubenswrapper[31456]: I0312 21:24:54.142425 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Mar 12 21:24:54.157322 master-0 kubenswrapper[31456]: I0312 21:24:54.157258 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a795afb6-d746-400b-82ef-35cca567821f-swiftconf\") pod \"swift-ring-rebalance-bmrkm\" (UID: \"a795afb6-d746-400b-82ef-35cca567821f\") " pod="openstack/swift-ring-rebalance-bmrkm" Mar 12 21:24:54.170949 master-0 kubenswrapper[31456]: I0312 21:24:54.170867 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a795afb6-d746-400b-82ef-35cca567821f-ring-data-devices\") pod \"swift-ring-rebalance-bmrkm\" (UID: \"a795afb6-d746-400b-82ef-35cca567821f\") " pod="openstack/swift-ring-rebalance-bmrkm" Mar 12 21:24:54.172987 master-0 kubenswrapper[31456]: I0312 21:24:54.171402 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a795afb6-d746-400b-82ef-35cca567821f-etc-swift\") pod \"swift-ring-rebalance-bmrkm\" (UID: \"a795afb6-d746-400b-82ef-35cca567821f\") " pod="openstack/swift-ring-rebalance-bmrkm" Mar 12 21:24:54.172987 master-0 
kubenswrapper[31456]: I0312 21:24:54.172390 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2pj9\" (UniqueName: \"kubernetes.io/projected/a795afb6-d746-400b-82ef-35cca567821f-kube-api-access-p2pj9\") pod \"swift-ring-rebalance-bmrkm\" (UID: \"a795afb6-d746-400b-82ef-35cca567821f\") " pod="openstack/swift-ring-rebalance-bmrkm" Mar 12 21:24:54.172987 master-0 kubenswrapper[31456]: I0312 21:24:54.172722 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a795afb6-d746-400b-82ef-35cca567821f-combined-ca-bundle\") pod \"swift-ring-rebalance-bmrkm\" (UID: \"a795afb6-d746-400b-82ef-35cca567821f\") " pod="openstack/swift-ring-rebalance-bmrkm" Mar 12 21:24:54.172987 master-0 kubenswrapper[31456]: I0312 21:24:54.172836 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a795afb6-d746-400b-82ef-35cca567821f-dispersionconf\") pod \"swift-ring-rebalance-bmrkm\" (UID: \"a795afb6-d746-400b-82ef-35cca567821f\") " pod="openstack/swift-ring-rebalance-bmrkm" Mar 12 21:24:54.172987 master-0 kubenswrapper[31456]: I0312 21:24:54.172926 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a795afb6-d746-400b-82ef-35cca567821f-scripts\") pod \"swift-ring-rebalance-bmrkm\" (UID: \"a795afb6-d746-400b-82ef-35cca567821f\") " pod="openstack/swift-ring-rebalance-bmrkm" Mar 12 21:24:54.177869 master-0 kubenswrapper[31456]: I0312 21:24:54.174281 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a795afb6-d746-400b-82ef-35cca567821f-scripts\") pod \"swift-ring-rebalance-bmrkm\" (UID: \"a795afb6-d746-400b-82ef-35cca567821f\") " pod="openstack/swift-ring-rebalance-bmrkm" Mar 12 21:24:54.177869 master-0 kubenswrapper[31456]: 
I0312 21:24:54.176225 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a795afb6-d746-400b-82ef-35cca567821f-etc-swift\") pod \"swift-ring-rebalance-bmrkm\" (UID: \"a795afb6-d746-400b-82ef-35cca567821f\") " pod="openstack/swift-ring-rebalance-bmrkm" Mar 12 21:24:54.177869 master-0 kubenswrapper[31456]: I0312 21:24:54.176735 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a795afb6-d746-400b-82ef-35cca567821f-ring-data-devices\") pod \"swift-ring-rebalance-bmrkm\" (UID: \"a795afb6-d746-400b-82ef-35cca567821f\") " pod="openstack/swift-ring-rebalance-bmrkm" Mar 12 21:24:54.181489 master-0 kubenswrapper[31456]: I0312 21:24:54.180976 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a795afb6-d746-400b-82ef-35cca567821f-dispersionconf\") pod \"swift-ring-rebalance-bmrkm\" (UID: \"a795afb6-d746-400b-82ef-35cca567821f\") " pod="openstack/swift-ring-rebalance-bmrkm" Mar 12 21:24:54.232836 master-0 kubenswrapper[31456]: I0312 21:24:54.212174 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a795afb6-d746-400b-82ef-35cca567821f-swiftconf\") pod \"swift-ring-rebalance-bmrkm\" (UID: \"a795afb6-d746-400b-82ef-35cca567821f\") " pod="openstack/swift-ring-rebalance-bmrkm" Mar 12 21:24:54.232836 master-0 kubenswrapper[31456]: I0312 21:24:54.218517 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a795afb6-d746-400b-82ef-35cca567821f-combined-ca-bundle\") pod \"swift-ring-rebalance-bmrkm\" (UID: \"a795afb6-d746-400b-82ef-35cca567821f\") " pod="openstack/swift-ring-rebalance-bmrkm" Mar 12 21:24:54.232836 master-0 kubenswrapper[31456]: I0312 21:24:54.219114 31456 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-p2pj9\" (UniqueName: \"kubernetes.io/projected/a795afb6-d746-400b-82ef-35cca567821f-kube-api-access-p2pj9\") pod \"swift-ring-rebalance-bmrkm\" (UID: \"a795afb6-d746-400b-82ef-35cca567821f\") " pod="openstack/swift-ring-rebalance-bmrkm" Mar 12 21:24:54.283833 master-0 kubenswrapper[31456]: I0312 21:24:54.282866 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b3d9706-d6e2-4af6-87de-0bf2a20b9438-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7b3d9706-d6e2-4af6-87de-0bf2a20b9438\") " pod="openstack/ovn-northd-0" Mar 12 21:24:54.283833 master-0 kubenswrapper[31456]: I0312 21:24:54.282960 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b3d9706-d6e2-4af6-87de-0bf2a20b9438-config\") pod \"ovn-northd-0\" (UID: \"7b3d9706-d6e2-4af6-87de-0bf2a20b9438\") " pod="openstack/ovn-northd-0" Mar 12 21:24:54.283833 master-0 kubenswrapper[31456]: I0312 21:24:54.282999 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b3d9706-d6e2-4af6-87de-0bf2a20b9438-scripts\") pod \"ovn-northd-0\" (UID: \"7b3d9706-d6e2-4af6-87de-0bf2a20b9438\") " pod="openstack/ovn-northd-0" Mar 12 21:24:54.283833 master-0 kubenswrapper[31456]: I0312 21:24:54.283038 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b3d9706-d6e2-4af6-87de-0bf2a20b9438-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7b3d9706-d6e2-4af6-87de-0bf2a20b9438\") " pod="openstack/ovn-northd-0" Mar 12 21:24:54.283833 master-0 kubenswrapper[31456]: I0312 21:24:54.283111 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-flkpf\" (UniqueName: \"kubernetes.io/projected/7b3d9706-d6e2-4af6-87de-0bf2a20b9438-kube-api-access-flkpf\") pod \"ovn-northd-0\" (UID: \"7b3d9706-d6e2-4af6-87de-0bf2a20b9438\") " pod="openstack/ovn-northd-0" Mar 12 21:24:54.283833 master-0 kubenswrapper[31456]: I0312 21:24:54.283138 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7b3d9706-d6e2-4af6-87de-0bf2a20b9438-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7b3d9706-d6e2-4af6-87de-0bf2a20b9438\") " pod="openstack/ovn-northd-0" Mar 12 21:24:54.283833 master-0 kubenswrapper[31456]: I0312 21:24:54.283200 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b3d9706-d6e2-4af6-87de-0bf2a20b9438-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7b3d9706-d6e2-4af6-87de-0bf2a20b9438\") " pod="openstack/ovn-northd-0" Mar 12 21:24:54.387902 master-0 kubenswrapper[31456]: I0312 21:24:54.385303 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flkpf\" (UniqueName: \"kubernetes.io/projected/7b3d9706-d6e2-4af6-87de-0bf2a20b9438-kube-api-access-flkpf\") pod \"ovn-northd-0\" (UID: \"7b3d9706-d6e2-4af6-87de-0bf2a20b9438\") " pod="openstack/ovn-northd-0" Mar 12 21:24:54.387902 master-0 kubenswrapper[31456]: I0312 21:24:54.385375 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7b3d9706-d6e2-4af6-87de-0bf2a20b9438-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7b3d9706-d6e2-4af6-87de-0bf2a20b9438\") " pod="openstack/ovn-northd-0" Mar 12 21:24:54.387902 master-0 kubenswrapper[31456]: I0312 21:24:54.385430 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7b3d9706-d6e2-4af6-87de-0bf2a20b9438-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7b3d9706-d6e2-4af6-87de-0bf2a20b9438\") " pod="openstack/ovn-northd-0" Mar 12 21:24:54.387902 master-0 kubenswrapper[31456]: I0312 21:24:54.385556 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b3d9706-d6e2-4af6-87de-0bf2a20b9438-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7b3d9706-d6e2-4af6-87de-0bf2a20b9438\") " pod="openstack/ovn-northd-0" Mar 12 21:24:54.387902 master-0 kubenswrapper[31456]: I0312 21:24:54.385587 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b3d9706-d6e2-4af6-87de-0bf2a20b9438-config\") pod \"ovn-northd-0\" (UID: \"7b3d9706-d6e2-4af6-87de-0bf2a20b9438\") " pod="openstack/ovn-northd-0" Mar 12 21:24:54.387902 master-0 kubenswrapper[31456]: I0312 21:24:54.385632 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b3d9706-d6e2-4af6-87de-0bf2a20b9438-scripts\") pod \"ovn-northd-0\" (UID: \"7b3d9706-d6e2-4af6-87de-0bf2a20b9438\") " pod="openstack/ovn-northd-0" Mar 12 21:24:54.387902 master-0 kubenswrapper[31456]: I0312 21:24:54.385653 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b3d9706-d6e2-4af6-87de-0bf2a20b9438-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7b3d9706-d6e2-4af6-87de-0bf2a20b9438\") " pod="openstack/ovn-northd-0" Mar 12 21:24:54.396825 master-0 kubenswrapper[31456]: I0312 21:24:54.390012 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b3d9706-d6e2-4af6-87de-0bf2a20b9438-config\") pod \"ovn-northd-0\" (UID: \"7b3d9706-d6e2-4af6-87de-0bf2a20b9438\") " 
pod="openstack/ovn-northd-0" Mar 12 21:24:54.396825 master-0 kubenswrapper[31456]: I0312 21:24:54.392898 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b3d9706-d6e2-4af6-87de-0bf2a20b9438-scripts\") pod \"ovn-northd-0\" (UID: \"7b3d9706-d6e2-4af6-87de-0bf2a20b9438\") " pod="openstack/ovn-northd-0" Mar 12 21:24:54.396825 master-0 kubenswrapper[31456]: I0312 21:24:54.393231 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b3d9706-d6e2-4af6-87de-0bf2a20b9438-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7b3d9706-d6e2-4af6-87de-0bf2a20b9438\") " pod="openstack/ovn-northd-0" Mar 12 21:24:54.396825 master-0 kubenswrapper[31456]: I0312 21:24:54.393495 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7b3d9706-d6e2-4af6-87de-0bf2a20b9438-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7b3d9706-d6e2-4af6-87de-0bf2a20b9438\") " pod="openstack/ovn-northd-0" Mar 12 21:24:54.409830 master-0 kubenswrapper[31456]: I0312 21:24:54.404239 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b3d9706-d6e2-4af6-87de-0bf2a20b9438-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7b3d9706-d6e2-4af6-87de-0bf2a20b9438\") " pod="openstack/ovn-northd-0" Mar 12 21:24:54.419373 master-0 kubenswrapper[31456]: I0312 21:24:54.419326 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b3d9706-d6e2-4af6-87de-0bf2a20b9438-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7b3d9706-d6e2-4af6-87de-0bf2a20b9438\") " pod="openstack/ovn-northd-0" Mar 12 21:24:54.425605 master-0 kubenswrapper[31456]: I0312 21:24:54.425557 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-flkpf\" (UniqueName: \"kubernetes.io/projected/7b3d9706-d6e2-4af6-87de-0bf2a20b9438-kube-api-access-flkpf\") pod \"ovn-northd-0\" (UID: \"7b3d9706-d6e2-4af6-87de-0bf2a20b9438\") " pod="openstack/ovn-northd-0" Mar 12 21:24:54.460615 master-0 kubenswrapper[31456]: I0312 21:24:54.460542 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 12 21:24:54.482419 master-0 kubenswrapper[31456]: I0312 21:24:54.482373 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-bmrkm" Mar 12 21:24:54.637729 master-0 kubenswrapper[31456]: I0312 21:24:54.635995 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9d57f5a0-de36-45e3-a7af-bc4130ba70d8\" (UniqueName: \"kubernetes.io/csi/topolvm.io^223859d7-ef07-4fb9-883c-f8a7843443fd\") pod \"swift-storage-0\" (UID: \"7478f62f-dba4-43cb-9a5b-556b235bb13f\") " pod="openstack/swift-storage-0" Mar 12 21:24:54.697030 master-0 kubenswrapper[31456]: E0312 21:24:54.692853 31456 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 12 21:24:54.697030 master-0 kubenswrapper[31456]: E0312 21:24:54.692898 31456 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 12 21:24:54.697030 master-0 kubenswrapper[31456]: E0312 21:24:54.692953 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7478f62f-dba4-43cb-9a5b-556b235bb13f-etc-swift podName:7478f62f-dba4-43cb-9a5b-556b235bb13f nodeName:}" failed. No retries permitted until 2026-03-12 21:24:56.692935954 +0000 UTC m=+957.767541282 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7478f62f-dba4-43cb-9a5b-556b235bb13f-etc-swift") pod "swift-storage-0" (UID: "7478f62f-dba4-43cb-9a5b-556b235bb13f") : configmap "swift-ring-files" not found Mar 12 21:24:54.697030 master-0 kubenswrapper[31456]: I0312 21:24:54.695251 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7478f62f-dba4-43cb-9a5b-556b235bb13f-etc-swift\") pod \"swift-storage-0\" (UID: \"7478f62f-dba4-43cb-9a5b-556b235bb13f\") " pod="openstack/swift-storage-0" Mar 12 21:24:54.708232 master-0 kubenswrapper[31456]: I0312 21:24:54.707972 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf8b865dc-2xtgl" event={"ID":"9353def4-ea82-4589-9503-c32939b3ff21","Type":"ContainerStarted","Data":"e3c02c7f977ef12e294b9f8c95375dfc6794cf6a587ae7eabb3b43cd7a4bb755"} Mar 12 21:24:54.709874 master-0 kubenswrapper[31456]: I0312 21:24:54.709623 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf8b865dc-2xtgl" Mar 12 21:24:54.740849 master-0 kubenswrapper[31456]: I0312 21:24:54.740724 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf8b865dc-2xtgl" podStartSLOduration=4.7407032000000005 podStartE2EDuration="4.7407032s" podCreationTimestamp="2026-03-12 21:24:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:24:54.731581249 +0000 UTC m=+955.806186597" watchObservedRunningTime="2026-03-12 21:24:54.7407032 +0000 UTC m=+955.815308528" Mar 12 21:24:54.969602 master-0 kubenswrapper[31456]: I0312 21:24:54.968981 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 12 21:24:54.972247 master-0 kubenswrapper[31456]: W0312 21:24:54.972213 31456 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b3d9706_d6e2_4af6_87de_0bf2a20b9438.slice/crio-96da774478e1e19855a778da801ac259e6a471d3c22ca48ac4632a48207e0230 WatchSource:0}: Error finding container 96da774478e1e19855a778da801ac259e6a471d3c22ca48ac4632a48207e0230: Status 404 returned error can't find the container with id 96da774478e1e19855a778da801ac259e6a471d3c22ca48ac4632a48207e0230 Mar 12 21:24:55.151994 master-0 kubenswrapper[31456]: W0312 21:24:55.151551 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda795afb6_d746_400b_82ef_35cca567821f.slice/crio-443b1ea0fad05e406d2bcf7d777d93e65745cedb19fb128e200177c09a850898 WatchSource:0}: Error finding container 443b1ea0fad05e406d2bcf7d777d93e65745cedb19fb128e200177c09a850898: Status 404 returned error can't find the container with id 443b1ea0fad05e406d2bcf7d777d93e65745cedb19fb128e200177c09a850898 Mar 12 21:24:55.160473 master-0 kubenswrapper[31456]: I0312 21:24:55.160377 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-bmrkm"] Mar 12 21:24:55.182315 master-0 kubenswrapper[31456]: I0312 21:24:55.182251 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b58811ef-40fc-4ced-a940-d236f5ef5677" path="/var/lib/kubelet/pods/b58811ef-40fc-4ced-a940-d236f5ef5677/volumes" Mar 12 21:24:55.723565 master-0 kubenswrapper[31456]: I0312 21:24:55.723455 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bmrkm" event={"ID":"a795afb6-d746-400b-82ef-35cca567821f","Type":"ContainerStarted","Data":"443b1ea0fad05e406d2bcf7d777d93e65745cedb19fb128e200177c09a850898"} Mar 12 21:24:55.725182 master-0 kubenswrapper[31456]: I0312 21:24:55.725127 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"7b3d9706-d6e2-4af6-87de-0bf2a20b9438","Type":"ContainerStarted","Data":"96da774478e1e19855a778da801ac259e6a471d3c22ca48ac4632a48207e0230"} Mar 12 21:24:56.740293 master-0 kubenswrapper[31456]: I0312 21:24:56.740151 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7b3d9706-d6e2-4af6-87de-0bf2a20b9438","Type":"ContainerStarted","Data":"083bac5c45b933afa71be4df0099c04d9b39d28d9e18066eaa6b844328ad074c"} Mar 12 21:24:56.740293 master-0 kubenswrapper[31456]: I0312 21:24:56.740210 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7b3d9706-d6e2-4af6-87de-0bf2a20b9438","Type":"ContainerStarted","Data":"c1cfd53a537a7952e63be6fc714836c5f4f48263f20c89982860d79a8c6e661f"} Mar 12 21:24:56.743882 master-0 kubenswrapper[31456]: I0312 21:24:56.743801 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7478f62f-dba4-43cb-9a5b-556b235bb13f-etc-swift\") pod \"swift-storage-0\" (UID: \"7478f62f-dba4-43cb-9a5b-556b235bb13f\") " pod="openstack/swift-storage-0" Mar 12 21:24:56.744026 master-0 kubenswrapper[31456]: E0312 21:24:56.743969 31456 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 12 21:24:56.744026 master-0 kubenswrapper[31456]: E0312 21:24:56.744023 31456 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 12 21:24:56.744138 master-0 kubenswrapper[31456]: E0312 21:24:56.744083 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7478f62f-dba4-43cb-9a5b-556b235bb13f-etc-swift podName:7478f62f-dba4-43cb-9a5b-556b235bb13f nodeName:}" failed. No retries permitted until 2026-03-12 21:25:00.744066556 +0000 UTC m=+961.818671874 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7478f62f-dba4-43cb-9a5b-556b235bb13f-etc-swift") pod "swift-storage-0" (UID: "7478f62f-dba4-43cb-9a5b-556b235bb13f") : configmap "swift-ring-files" not found Mar 12 21:24:56.790618 master-0 kubenswrapper[31456]: I0312 21:24:56.790445 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.572996295 podStartE2EDuration="2.790403139s" podCreationTimestamp="2026-03-12 21:24:54 +0000 UTC" firstStartedPulling="2026-03-12 21:24:54.974611801 +0000 UTC m=+956.049217129" lastFinishedPulling="2026-03-12 21:24:56.192018645 +0000 UTC m=+957.266623973" observedRunningTime="2026-03-12 21:24:56.778274415 +0000 UTC m=+957.852879743" watchObservedRunningTime="2026-03-12 21:24:56.790403139 +0000 UTC m=+957.865008467" Mar 12 21:24:57.441512 master-0 kubenswrapper[31456]: I0312 21:24:57.440399 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Mar 12 21:24:57.580276 master-0 kubenswrapper[31456]: I0312 21:24:57.580199 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Mar 12 21:24:57.754211 master-0 kubenswrapper[31456]: I0312 21:24:57.754099 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Mar 12 21:24:59.774976 master-0 kubenswrapper[31456]: I0312 21:24:59.774910 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bmrkm" event={"ID":"a795afb6-d746-400b-82ef-35cca567821f","Type":"ContainerStarted","Data":"183906af6aa75096ad94c8d403c382af389ded094af602f51caa0257c7664446"} Mar 12 21:24:59.794259 master-0 kubenswrapper[31456]: I0312 21:24:59.794184 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-bmrkm" podStartSLOduration=3.111437345 
podStartE2EDuration="6.794165268s" podCreationTimestamp="2026-03-12 21:24:53 +0000 UTC" firstStartedPulling="2026-03-12 21:24:55.154267969 +0000 UTC m=+956.228873297" lastFinishedPulling="2026-03-12 21:24:58.836995882 +0000 UTC m=+959.911601220" observedRunningTime="2026-03-12 21:24:59.792154359 +0000 UTC m=+960.866759697" watchObservedRunningTime="2026-03-12 21:24:59.794165268 +0000 UTC m=+960.868770606" Mar 12 21:25:00.266044 master-0 kubenswrapper[31456]: I0312 21:25:00.265967 31456 trace.go:236] Trace[1938154868]: "Calculate volume metrics of mysql-db for pod openstack/openstack-galera-0" (12-Mar-2026 21:24:59.169) (total time: 1096ms): Mar 12 21:25:00.266044 master-0 kubenswrapper[31456]: Trace[1938154868]: [1.096672583s] [1.096672583s] END Mar 12 21:25:00.356111 master-0 kubenswrapper[31456]: I0312 21:25:00.356046 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Mar 12 21:25:00.356478 master-0 kubenswrapper[31456]: I0312 21:25:00.356452 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 12 21:25:00.508860 master-0 kubenswrapper[31456]: I0312 21:25:00.508576 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Mar 12 21:25:00.751186 master-0 kubenswrapper[31456]: I0312 21:25:00.751112 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7478f62f-dba4-43cb-9a5b-556b235bb13f-etc-swift\") pod \"swift-storage-0\" (UID: \"7478f62f-dba4-43cb-9a5b-556b235bb13f\") " pod="openstack/swift-storage-0" Mar 12 21:25:00.751866 master-0 kubenswrapper[31456]: E0312 21:25:00.751785 31456 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 12 21:25:00.752024 master-0 kubenswrapper[31456]: E0312 21:25:00.752003 31456 projected.go:194] Error preparing data for 
projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 12 21:25:00.752291 master-0 kubenswrapper[31456]: E0312 21:25:00.752267 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7478f62f-dba4-43cb-9a5b-556b235bb13f-etc-swift podName:7478f62f-dba4-43cb-9a5b-556b235bb13f nodeName:}" failed. No retries permitted until 2026-03-12 21:25:08.752189605 +0000 UTC m=+969.826794973 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7478f62f-dba4-43cb-9a5b-556b235bb13f-etc-swift") pod "swift-storage-0" (UID: "7478f62f-dba4-43cb-9a5b-556b235bb13f") : configmap "swift-ring-files" not found Mar 12 21:25:00.883662 master-0 kubenswrapper[31456]: I0312 21:25:00.883607 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Mar 12 21:25:01.429905 master-0 kubenswrapper[31456]: I0312 21:25:01.429845 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bf8b865dc-2xtgl" Mar 12 21:25:01.603961 master-0 kubenswrapper[31456]: I0312 21:25:01.602324 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-t8rdg"] Mar 12 21:25:01.603961 master-0 kubenswrapper[31456]: I0312 21:25:01.602565 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6ff8fd9d5c-t8rdg" podUID="56fd9b51-3ae1-48e0-8966-0e18e5ce9b70" containerName="dnsmasq-dns" containerID="cri-o://8d568c73550fd753b9852eb8b9af7a83bf61470823e5d8f67643a9a57bd482d1" gracePeriod=10 Mar 12 21:25:01.800113 master-0 kubenswrapper[31456]: I0312 21:25:01.799753 31456 generic.go:334] "Generic (PLEG): container finished" podID="56fd9b51-3ae1-48e0-8966-0e18e5ce9b70" containerID="8d568c73550fd753b9852eb8b9af7a83bf61470823e5d8f67643a9a57bd482d1" exitCode=0 Mar 12 21:25:01.800882 master-0 kubenswrapper[31456]: 
I0312 21:25:01.800851 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-t8rdg" event={"ID":"56fd9b51-3ae1-48e0-8966-0e18e5ce9b70","Type":"ContainerDied","Data":"8d568c73550fd753b9852eb8b9af7a83bf61470823e5d8f67643a9a57bd482d1"} Mar 12 21:25:02.223639 master-0 kubenswrapper[31456]: I0312 21:25:02.223556 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ff8fd9d5c-t8rdg" Mar 12 21:25:02.384915 master-0 kubenswrapper[31456]: I0312 21:25:02.384868 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k78h4\" (UniqueName: \"kubernetes.io/projected/56fd9b51-3ae1-48e0-8966-0e18e5ce9b70-kube-api-access-k78h4\") pod \"56fd9b51-3ae1-48e0-8966-0e18e5ce9b70\" (UID: \"56fd9b51-3ae1-48e0-8966-0e18e5ce9b70\") " Mar 12 21:25:02.385119 master-0 kubenswrapper[31456]: I0312 21:25:02.385061 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56fd9b51-3ae1-48e0-8966-0e18e5ce9b70-config\") pod \"56fd9b51-3ae1-48e0-8966-0e18e5ce9b70\" (UID: \"56fd9b51-3ae1-48e0-8966-0e18e5ce9b70\") " Mar 12 21:25:02.385168 master-0 kubenswrapper[31456]: I0312 21:25:02.385154 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56fd9b51-3ae1-48e0-8966-0e18e5ce9b70-dns-svc\") pod \"56fd9b51-3ae1-48e0-8966-0e18e5ce9b70\" (UID: \"56fd9b51-3ae1-48e0-8966-0e18e5ce9b70\") " Mar 12 21:25:02.392170 master-0 kubenswrapper[31456]: I0312 21:25:02.392023 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56fd9b51-3ae1-48e0-8966-0e18e5ce9b70-kube-api-access-k78h4" (OuterVolumeSpecName: "kube-api-access-k78h4") pod "56fd9b51-3ae1-48e0-8966-0e18e5ce9b70" (UID: "56fd9b51-3ae1-48e0-8966-0e18e5ce9b70"). InnerVolumeSpecName "kube-api-access-k78h4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:25:02.438397 master-0 kubenswrapper[31456]: I0312 21:25:02.438324 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56fd9b51-3ae1-48e0-8966-0e18e5ce9b70-config" (OuterVolumeSpecName: "config") pod "56fd9b51-3ae1-48e0-8966-0e18e5ce9b70" (UID: "56fd9b51-3ae1-48e0-8966-0e18e5ce9b70"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:25:02.438846 master-0 kubenswrapper[31456]: I0312 21:25:02.438757 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56fd9b51-3ae1-48e0-8966-0e18e5ce9b70-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "56fd9b51-3ae1-48e0-8966-0e18e5ce9b70" (UID: "56fd9b51-3ae1-48e0-8966-0e18e5ce9b70"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:25:02.488248 master-0 kubenswrapper[31456]: I0312 21:25:02.488157 31456 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56fd9b51-3ae1-48e0-8966-0e18e5ce9b70-config\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:02.488248 master-0 kubenswrapper[31456]: I0312 21:25:02.488222 31456 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56fd9b51-3ae1-48e0-8966-0e18e5ce9b70-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:02.488248 master-0 kubenswrapper[31456]: I0312 21:25:02.488237 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k78h4\" (UniqueName: \"kubernetes.io/projected/56fd9b51-3ae1-48e0-8966-0e18e5ce9b70-kube-api-access-k78h4\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:02.815053 master-0 kubenswrapper[31456]: I0312 21:25:02.814716 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ff8fd9d5c-t8rdg" Mar 12 21:25:02.815865 master-0 kubenswrapper[31456]: I0312 21:25:02.815566 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-t8rdg" event={"ID":"56fd9b51-3ae1-48e0-8966-0e18e5ce9b70","Type":"ContainerDied","Data":"5bf08e0ea69de1e64801266ffaf044c31b5694932859dc1cf44a81242f31638a"} Mar 12 21:25:02.815865 master-0 kubenswrapper[31456]: I0312 21:25:02.815617 31456 scope.go:117] "RemoveContainer" containerID="8d568c73550fd753b9852eb8b9af7a83bf61470823e5d8f67643a9a57bd482d1" Mar 12 21:25:02.877087 master-0 kubenswrapper[31456]: I0312 21:25:02.877043 31456 scope.go:117] "RemoveContainer" containerID="484d7b4b3c37c873cac6c6781c4200669d52678c41616f424b5e67549076da9d" Mar 12 21:25:02.925039 master-0 kubenswrapper[31456]: I0312 21:25:02.924966 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-t8rdg"] Mar 12 21:25:03.057553 master-0 kubenswrapper[31456]: I0312 21:25:03.057455 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-t8rdg"] Mar 12 21:25:03.186577 master-0 kubenswrapper[31456]: I0312 21:25:03.186399 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56fd9b51-3ae1-48e0-8966-0e18e5ce9b70" path="/var/lib/kubelet/pods/56fd9b51-3ae1-48e0-8966-0e18e5ce9b70/volumes" Mar 12 21:25:04.174887 master-0 kubenswrapper[31456]: I0312 21:25:04.174797 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-gwk6j"] Mar 12 21:25:04.175544 master-0 kubenswrapper[31456]: E0312 21:25:04.175249 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56fd9b51-3ae1-48e0-8966-0e18e5ce9b70" containerName="dnsmasq-dns" Mar 12 21:25:04.175544 master-0 kubenswrapper[31456]: I0312 21:25:04.175262 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="56fd9b51-3ae1-48e0-8966-0e18e5ce9b70" containerName="dnsmasq-dns" 
Mar 12 21:25:04.175544 master-0 kubenswrapper[31456]: E0312 21:25:04.175287 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56fd9b51-3ae1-48e0-8966-0e18e5ce9b70" containerName="init" Mar 12 21:25:04.175544 master-0 kubenswrapper[31456]: I0312 21:25:04.175293 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="56fd9b51-3ae1-48e0-8966-0e18e5ce9b70" containerName="init" Mar 12 21:25:04.175544 master-0 kubenswrapper[31456]: I0312 21:25:04.175479 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="56fd9b51-3ae1-48e0-8966-0e18e5ce9b70" containerName="dnsmasq-dns" Mar 12 21:25:04.176131 master-0 kubenswrapper[31456]: I0312 21:25:04.176094 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gwk6j" Mar 12 21:25:04.178170 master-0 kubenswrapper[31456]: I0312 21:25:04.178132 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 12 21:25:04.192304 master-0 kubenswrapper[31456]: I0312 21:25:04.192255 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-gwk6j"] Mar 12 21:25:04.333632 master-0 kubenswrapper[31456]: I0312 21:25:04.333342 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7727\" (UniqueName: \"kubernetes.io/projected/007bf3d3-2855-42e4-b137-0eaef917bf0b-kube-api-access-n7727\") pod \"root-account-create-update-gwk6j\" (UID: \"007bf3d3-2855-42e4-b137-0eaef917bf0b\") " pod="openstack/root-account-create-update-gwk6j" Mar 12 21:25:04.333632 master-0 kubenswrapper[31456]: I0312 21:25:04.333453 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/007bf3d3-2855-42e4-b137-0eaef917bf0b-operator-scripts\") pod \"root-account-create-update-gwk6j\" (UID: 
\"007bf3d3-2855-42e4-b137-0eaef917bf0b\") " pod="openstack/root-account-create-update-gwk6j" Mar 12 21:25:04.436596 master-0 kubenswrapper[31456]: I0312 21:25:04.436449 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7727\" (UniqueName: \"kubernetes.io/projected/007bf3d3-2855-42e4-b137-0eaef917bf0b-kube-api-access-n7727\") pod \"root-account-create-update-gwk6j\" (UID: \"007bf3d3-2855-42e4-b137-0eaef917bf0b\") " pod="openstack/root-account-create-update-gwk6j" Mar 12 21:25:04.437138 master-0 kubenswrapper[31456]: I0312 21:25:04.437099 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/007bf3d3-2855-42e4-b137-0eaef917bf0b-operator-scripts\") pod \"root-account-create-update-gwk6j\" (UID: \"007bf3d3-2855-42e4-b137-0eaef917bf0b\") " pod="openstack/root-account-create-update-gwk6j" Mar 12 21:25:04.438132 master-0 kubenswrapper[31456]: I0312 21:25:04.438081 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/007bf3d3-2855-42e4-b137-0eaef917bf0b-operator-scripts\") pod \"root-account-create-update-gwk6j\" (UID: \"007bf3d3-2855-42e4-b137-0eaef917bf0b\") " pod="openstack/root-account-create-update-gwk6j" Mar 12 21:25:04.469852 master-0 kubenswrapper[31456]: I0312 21:25:04.469765 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7727\" (UniqueName: \"kubernetes.io/projected/007bf3d3-2855-42e4-b137-0eaef917bf0b-kube-api-access-n7727\") pod \"root-account-create-update-gwk6j\" (UID: \"007bf3d3-2855-42e4-b137-0eaef917bf0b\") " pod="openstack/root-account-create-update-gwk6j" Mar 12 21:25:04.491232 master-0 kubenswrapper[31456]: I0312 21:25:04.491112 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-gwk6j" Mar 12 21:25:05.073901 master-0 kubenswrapper[31456]: I0312 21:25:05.072861 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-gwk6j"] Mar 12 21:25:05.087539 master-0 kubenswrapper[31456]: W0312 21:25:05.087452 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod007bf3d3_2855_42e4_b137_0eaef917bf0b.slice/crio-06ceb82d6394520e026c44ad87a95652af9714d87c361204c9d4a3ed375d22ed WatchSource:0}: Error finding container 06ceb82d6394520e026c44ad87a95652af9714d87c361204c9d4a3ed375d22ed: Status 404 returned error can't find the container with id 06ceb82d6394520e026c44ad87a95652af9714d87c361204c9d4a3ed375d22ed Mar 12 21:25:05.866033 master-0 kubenswrapper[31456]: I0312 21:25:05.865935 31456 generic.go:334] "Generic (PLEG): container finished" podID="007bf3d3-2855-42e4-b137-0eaef917bf0b" containerID="a35ebfcc2709827b1180ef73b6afd5b353b8cdf853d06cdb7a17e961e08a7eac" exitCode=0 Mar 12 21:25:05.866033 master-0 kubenswrapper[31456]: I0312 21:25:05.866010 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gwk6j" event={"ID":"007bf3d3-2855-42e4-b137-0eaef917bf0b","Type":"ContainerDied","Data":"a35ebfcc2709827b1180ef73b6afd5b353b8cdf853d06cdb7a17e961e08a7eac"} Mar 12 21:25:05.866033 master-0 kubenswrapper[31456]: I0312 21:25:05.866037 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gwk6j" event={"ID":"007bf3d3-2855-42e4-b137-0eaef917bf0b","Type":"ContainerStarted","Data":"06ceb82d6394520e026c44ad87a95652af9714d87c361204c9d4a3ed375d22ed"} Mar 12 21:25:05.868921 master-0 kubenswrapper[31456]: I0312 21:25:05.868281 31456 generic.go:334] "Generic (PLEG): container finished" podID="a795afb6-d746-400b-82ef-35cca567821f" 
containerID="183906af6aa75096ad94c8d403c382af389ded094af602f51caa0257c7664446" exitCode=0 Mar 12 21:25:05.868921 master-0 kubenswrapper[31456]: I0312 21:25:05.868328 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bmrkm" event={"ID":"a795afb6-d746-400b-82ef-35cca567821f","Type":"ContainerDied","Data":"183906af6aa75096ad94c8d403c382af389ded094af602f51caa0257c7664446"} Mar 12 21:25:07.614033 master-0 kubenswrapper[31456]: I0312 21:25:07.613640 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gwk6j" Mar 12 21:25:07.703225 master-0 kubenswrapper[31456]: I0312 21:25:07.703168 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-bmrkm" Mar 12 21:25:07.734543 master-0 kubenswrapper[31456]: I0312 21:25:07.734469 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7727\" (UniqueName: \"kubernetes.io/projected/007bf3d3-2855-42e4-b137-0eaef917bf0b-kube-api-access-n7727\") pod \"007bf3d3-2855-42e4-b137-0eaef917bf0b\" (UID: \"007bf3d3-2855-42e4-b137-0eaef917bf0b\") " Mar 12 21:25:07.734848 master-0 kubenswrapper[31456]: I0312 21:25:07.734801 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/007bf3d3-2855-42e4-b137-0eaef917bf0b-operator-scripts\") pod \"007bf3d3-2855-42e4-b137-0eaef917bf0b\" (UID: \"007bf3d3-2855-42e4-b137-0eaef917bf0b\") " Mar 12 21:25:07.736472 master-0 kubenswrapper[31456]: I0312 21:25:07.736421 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/007bf3d3-2855-42e4-b137-0eaef917bf0b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "007bf3d3-2855-42e4-b137-0eaef917bf0b" (UID: "007bf3d3-2855-42e4-b137-0eaef917bf0b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:25:07.755660 master-0 kubenswrapper[31456]: I0312 21:25:07.755464 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/007bf3d3-2855-42e4-b137-0eaef917bf0b-kube-api-access-n7727" (OuterVolumeSpecName: "kube-api-access-n7727") pod "007bf3d3-2855-42e4-b137-0eaef917bf0b" (UID: "007bf3d3-2855-42e4-b137-0eaef917bf0b"). InnerVolumeSpecName "kube-api-access-n7727". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:25:07.837584 master-0 kubenswrapper[31456]: I0312 21:25:07.837526 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a795afb6-d746-400b-82ef-35cca567821f-etc-swift\") pod \"a795afb6-d746-400b-82ef-35cca567821f\" (UID: \"a795afb6-d746-400b-82ef-35cca567821f\") " Mar 12 21:25:07.837584 master-0 kubenswrapper[31456]: I0312 21:25:07.837575 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a795afb6-d746-400b-82ef-35cca567821f-swiftconf\") pod \"a795afb6-d746-400b-82ef-35cca567821f\" (UID: \"a795afb6-d746-400b-82ef-35cca567821f\") " Mar 12 21:25:07.837904 master-0 kubenswrapper[31456]: I0312 21:25:07.837640 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a795afb6-d746-400b-82ef-35cca567821f-combined-ca-bundle\") pod \"a795afb6-d746-400b-82ef-35cca567821f\" (UID: \"a795afb6-d746-400b-82ef-35cca567821f\") " Mar 12 21:25:07.837904 master-0 kubenswrapper[31456]: I0312 21:25:07.837701 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2pj9\" (UniqueName: \"kubernetes.io/projected/a795afb6-d746-400b-82ef-35cca567821f-kube-api-access-p2pj9\") pod \"a795afb6-d746-400b-82ef-35cca567821f\" (UID: \"a795afb6-d746-400b-82ef-35cca567821f\") 
" Mar 12 21:25:07.837904 master-0 kubenswrapper[31456]: I0312 21:25:07.837740 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a795afb6-d746-400b-82ef-35cca567821f-dispersionconf\") pod \"a795afb6-d746-400b-82ef-35cca567821f\" (UID: \"a795afb6-d746-400b-82ef-35cca567821f\") " Mar 12 21:25:07.837904 master-0 kubenswrapper[31456]: I0312 21:25:07.837759 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a795afb6-d746-400b-82ef-35cca567821f-scripts\") pod \"a795afb6-d746-400b-82ef-35cca567821f\" (UID: \"a795afb6-d746-400b-82ef-35cca567821f\") " Mar 12 21:25:07.837904 master-0 kubenswrapper[31456]: I0312 21:25:07.837783 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a795afb6-d746-400b-82ef-35cca567821f-ring-data-devices\") pod \"a795afb6-d746-400b-82ef-35cca567821f\" (UID: \"a795afb6-d746-400b-82ef-35cca567821f\") " Mar 12 21:25:07.838295 master-0 kubenswrapper[31456]: I0312 21:25:07.838270 31456 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/007bf3d3-2855-42e4-b137-0eaef917bf0b-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:07.838295 master-0 kubenswrapper[31456]: I0312 21:25:07.838291 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7727\" (UniqueName: \"kubernetes.io/projected/007bf3d3-2855-42e4-b137-0eaef917bf0b-kube-api-access-n7727\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:07.838587 master-0 kubenswrapper[31456]: I0312 21:25:07.838526 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a795afb6-d746-400b-82ef-35cca567821f-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod 
"a795afb6-d746-400b-82ef-35cca567821f" (UID: "a795afb6-d746-400b-82ef-35cca567821f"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:25:07.839127 master-0 kubenswrapper[31456]: I0312 21:25:07.839054 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a795afb6-d746-400b-82ef-35cca567821f-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "a795afb6-d746-400b-82ef-35cca567821f" (UID: "a795afb6-d746-400b-82ef-35cca567821f"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 21:25:07.843563 master-0 kubenswrapper[31456]: I0312 21:25:07.843519 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a795afb6-d746-400b-82ef-35cca567821f-kube-api-access-p2pj9" (OuterVolumeSpecName: "kube-api-access-p2pj9") pod "a795afb6-d746-400b-82ef-35cca567821f" (UID: "a795afb6-d746-400b-82ef-35cca567821f"). InnerVolumeSpecName "kube-api-access-p2pj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:25:07.843963 master-0 kubenswrapper[31456]: I0312 21:25:07.843917 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a795afb6-d746-400b-82ef-35cca567821f-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "a795afb6-d746-400b-82ef-35cca567821f" (UID: "a795afb6-d746-400b-82ef-35cca567821f"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:25:07.859574 master-0 kubenswrapper[31456]: I0312 21:25:07.859412 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a795afb6-d746-400b-82ef-35cca567821f-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "a795afb6-d746-400b-82ef-35cca567821f" (UID: "a795afb6-d746-400b-82ef-35cca567821f"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:25:07.867123 master-0 kubenswrapper[31456]: E0312 21:25:07.867038 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a795afb6-d746-400b-82ef-35cca567821f-combined-ca-bundle podName:a795afb6-d746-400b-82ef-35cca567821f nodeName:}" failed. No retries permitted until 2026-03-12 21:25:08.367012965 +0000 UTC m=+969.441618293 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/a795afb6-d746-400b-82ef-35cca567821f-combined-ca-bundle") pod "a795afb6-d746-400b-82ef-35cca567821f" (UID: "a795afb6-d746-400b-82ef-35cca567821f") : error deleting /var/lib/kubelet/pods/a795afb6-d746-400b-82ef-35cca567821f/volume-subpaths: remove /var/lib/kubelet/pods/a795afb6-d746-400b-82ef-35cca567821f/volume-subpaths: no such file or directory Mar 12 21:25:07.867374 master-0 kubenswrapper[31456]: I0312 21:25:07.867337 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a795afb6-d746-400b-82ef-35cca567821f-scripts" (OuterVolumeSpecName: "scripts") pod "a795afb6-d746-400b-82ef-35cca567821f" (UID: "a795afb6-d746-400b-82ef-35cca567821f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:25:07.892321 master-0 kubenswrapper[31456]: I0312 21:25:07.892255 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bmrkm" event={"ID":"a795afb6-d746-400b-82ef-35cca567821f","Type":"ContainerDied","Data":"443b1ea0fad05e406d2bcf7d777d93e65745cedb19fb128e200177c09a850898"} Mar 12 21:25:07.892427 master-0 kubenswrapper[31456]: I0312 21:25:07.892323 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="443b1ea0fad05e406d2bcf7d777d93e65745cedb19fb128e200177c09a850898" Mar 12 21:25:07.892427 master-0 kubenswrapper[31456]: I0312 21:25:07.892272 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-bmrkm" Mar 12 21:25:07.895307 master-0 kubenswrapper[31456]: I0312 21:25:07.895223 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gwk6j" event={"ID":"007bf3d3-2855-42e4-b137-0eaef917bf0b","Type":"ContainerDied","Data":"06ceb82d6394520e026c44ad87a95652af9714d87c361204c9d4a3ed375d22ed"} Mar 12 21:25:07.895307 master-0 kubenswrapper[31456]: I0312 21:25:07.895271 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06ceb82d6394520e026c44ad87a95652af9714d87c361204c9d4a3ed375d22ed" Mar 12 21:25:07.895496 master-0 kubenswrapper[31456]: I0312 21:25:07.895368 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-gwk6j" Mar 12 21:25:07.944889 master-0 kubenswrapper[31456]: I0312 21:25:07.941300 31456 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a795afb6-d746-400b-82ef-35cca567821f-etc-swift\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:07.944889 master-0 kubenswrapper[31456]: I0312 21:25:07.941492 31456 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a795afb6-d746-400b-82ef-35cca567821f-swiftconf\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:07.944889 master-0 kubenswrapper[31456]: I0312 21:25:07.941518 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2pj9\" (UniqueName: \"kubernetes.io/projected/a795afb6-d746-400b-82ef-35cca567821f-kube-api-access-p2pj9\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:07.944889 master-0 kubenswrapper[31456]: I0312 21:25:07.941534 31456 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a795afb6-d746-400b-82ef-35cca567821f-dispersionconf\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:07.944889 master-0 kubenswrapper[31456]: I0312 21:25:07.941549 31456 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a795afb6-d746-400b-82ef-35cca567821f-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:07.944889 master-0 kubenswrapper[31456]: I0312 21:25:07.941562 31456 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a795afb6-d746-400b-82ef-35cca567821f-ring-data-devices\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:08.008459 master-0 kubenswrapper[31456]: I0312 21:25:08.008394 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-8xlhq"] Mar 12 21:25:08.009107 master-0 
kubenswrapper[31456]: E0312 21:25:08.009078 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="007bf3d3-2855-42e4-b137-0eaef917bf0b" containerName="mariadb-account-create-update" Mar 12 21:25:08.009107 master-0 kubenswrapper[31456]: I0312 21:25:08.009106 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="007bf3d3-2855-42e4-b137-0eaef917bf0b" containerName="mariadb-account-create-update" Mar 12 21:25:08.009238 master-0 kubenswrapper[31456]: E0312 21:25:08.009139 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a795afb6-d746-400b-82ef-35cca567821f" containerName="swift-ring-rebalance" Mar 12 21:25:08.009238 master-0 kubenswrapper[31456]: I0312 21:25:08.009149 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="a795afb6-d746-400b-82ef-35cca567821f" containerName="swift-ring-rebalance" Mar 12 21:25:08.010472 master-0 kubenswrapper[31456]: I0312 21:25:08.009466 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="a795afb6-d746-400b-82ef-35cca567821f" containerName="swift-ring-rebalance" Mar 12 21:25:08.010472 master-0 kubenswrapper[31456]: I0312 21:25:08.009499 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="007bf3d3-2855-42e4-b137-0eaef917bf0b" containerName="mariadb-account-create-update" Mar 12 21:25:08.010472 master-0 kubenswrapper[31456]: I0312 21:25:08.010398 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8xlhq" Mar 12 21:25:08.022735 master-0 kubenswrapper[31456]: I0312 21:25:08.022666 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-8xlhq"] Mar 12 21:25:08.109580 master-0 kubenswrapper[31456]: I0312 21:25:08.109311 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-98d2-account-create-update-9vmzj"] Mar 12 21:25:08.111708 master-0 kubenswrapper[31456]: I0312 21:25:08.111445 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-98d2-account-create-update-9vmzj" Mar 12 21:25:08.113997 master-0 kubenswrapper[31456]: I0312 21:25:08.113941 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 12 21:25:08.123473 master-0 kubenswrapper[31456]: I0312 21:25:08.121916 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-98d2-account-create-update-9vmzj"] Mar 12 21:25:08.156936 master-0 kubenswrapper[31456]: I0312 21:25:08.154522 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnk9g\" (UniqueName: \"kubernetes.io/projected/622a9f92-1155-4b36-899c-965b404e7137-kube-api-access-vnk9g\") pod \"keystone-db-create-8xlhq\" (UID: \"622a9f92-1155-4b36-899c-965b404e7137\") " pod="openstack/keystone-db-create-8xlhq" Mar 12 21:25:08.156936 master-0 kubenswrapper[31456]: I0312 21:25:08.155051 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/622a9f92-1155-4b36-899c-965b404e7137-operator-scripts\") pod \"keystone-db-create-8xlhq\" (UID: \"622a9f92-1155-4b36-899c-965b404e7137\") " pod="openstack/keystone-db-create-8xlhq" Mar 12 21:25:08.256458 master-0 kubenswrapper[31456]: I0312 21:25:08.256390 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/622a9f92-1155-4b36-899c-965b404e7137-operator-scripts\") pod \"keystone-db-create-8xlhq\" (UID: \"622a9f92-1155-4b36-899c-965b404e7137\") " pod="openstack/keystone-db-create-8xlhq" Mar 12 21:25:08.256458 master-0 kubenswrapper[31456]: I0312 21:25:08.256459 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnk9g\" (UniqueName: \"kubernetes.io/projected/622a9f92-1155-4b36-899c-965b404e7137-kube-api-access-vnk9g\") pod 
\"keystone-db-create-8xlhq\" (UID: \"622a9f92-1155-4b36-899c-965b404e7137\") " pod="openstack/keystone-db-create-8xlhq" Mar 12 21:25:08.256723 master-0 kubenswrapper[31456]: I0312 21:25:08.256568 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b42p\" (UniqueName: \"kubernetes.io/projected/345e92ee-81d9-4de3-9515-f901d1a3d153-kube-api-access-9b42p\") pod \"keystone-98d2-account-create-update-9vmzj\" (UID: \"345e92ee-81d9-4de3-9515-f901d1a3d153\") " pod="openstack/keystone-98d2-account-create-update-9vmzj" Mar 12 21:25:08.256723 master-0 kubenswrapper[31456]: I0312 21:25:08.256596 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/345e92ee-81d9-4de3-9515-f901d1a3d153-operator-scripts\") pod \"keystone-98d2-account-create-update-9vmzj\" (UID: \"345e92ee-81d9-4de3-9515-f901d1a3d153\") " pod="openstack/keystone-98d2-account-create-update-9vmzj" Mar 12 21:25:08.257323 master-0 kubenswrapper[31456]: I0312 21:25:08.257298 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/622a9f92-1155-4b36-899c-965b404e7137-operator-scripts\") pod \"keystone-db-create-8xlhq\" (UID: \"622a9f92-1155-4b36-899c-965b404e7137\") " pod="openstack/keystone-db-create-8xlhq" Mar 12 21:25:08.274513 master-0 kubenswrapper[31456]: I0312 21:25:08.274459 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnk9g\" (UniqueName: \"kubernetes.io/projected/622a9f92-1155-4b36-899c-965b404e7137-kube-api-access-vnk9g\") pod \"keystone-db-create-8xlhq\" (UID: \"622a9f92-1155-4b36-899c-965b404e7137\") " pod="openstack/keystone-db-create-8xlhq" Mar 12 21:25:08.301770 master-0 kubenswrapper[31456]: I0312 21:25:08.301696 31456 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/placement-6a5a-account-create-update-4w5hn"] Mar 12 21:25:08.303419 master-0 kubenswrapper[31456]: I0312 21:25:08.303369 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6a5a-account-create-update-4w5hn" Mar 12 21:25:08.306471 master-0 kubenswrapper[31456]: I0312 21:25:08.306421 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Mar 12 21:25:08.309387 master-0 kubenswrapper[31456]: I0312 21:25:08.309327 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-74dr9"] Mar 12 21:25:08.310715 master-0 kubenswrapper[31456]: I0312 21:25:08.310684 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-74dr9" Mar 12 21:25:08.328275 master-0 kubenswrapper[31456]: I0312 21:25:08.328211 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-74dr9"] Mar 12 21:25:08.336011 master-0 kubenswrapper[31456]: I0312 21:25:08.335973 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6a5a-account-create-update-4w5hn"] Mar 12 21:25:08.366082 master-0 kubenswrapper[31456]: I0312 21:25:08.357414 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-8xlhq" Mar 12 21:25:08.366082 master-0 kubenswrapper[31456]: I0312 21:25:08.358362 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b42p\" (UniqueName: \"kubernetes.io/projected/345e92ee-81d9-4de3-9515-f901d1a3d153-kube-api-access-9b42p\") pod \"keystone-98d2-account-create-update-9vmzj\" (UID: \"345e92ee-81d9-4de3-9515-f901d1a3d153\") " pod="openstack/keystone-98d2-account-create-update-9vmzj" Mar 12 21:25:08.366082 master-0 kubenswrapper[31456]: I0312 21:25:08.358416 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/345e92ee-81d9-4de3-9515-f901d1a3d153-operator-scripts\") pod \"keystone-98d2-account-create-update-9vmzj\" (UID: \"345e92ee-81d9-4de3-9515-f901d1a3d153\") " pod="openstack/keystone-98d2-account-create-update-9vmzj" Mar 12 21:25:08.366082 master-0 kubenswrapper[31456]: I0312 21:25:08.359595 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/345e92ee-81d9-4de3-9515-f901d1a3d153-operator-scripts\") pod \"keystone-98d2-account-create-update-9vmzj\" (UID: \"345e92ee-81d9-4de3-9515-f901d1a3d153\") " pod="openstack/keystone-98d2-account-create-update-9vmzj" Mar 12 21:25:08.375230 master-0 kubenswrapper[31456]: I0312 21:25:08.375131 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b42p\" (UniqueName: \"kubernetes.io/projected/345e92ee-81d9-4de3-9515-f901d1a3d153-kube-api-access-9b42p\") pod \"keystone-98d2-account-create-update-9vmzj\" (UID: \"345e92ee-81d9-4de3-9515-f901d1a3d153\") " pod="openstack/keystone-98d2-account-create-update-9vmzj" Mar 12 21:25:08.440830 master-0 kubenswrapper[31456]: I0312 21:25:08.436803 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-98d2-account-create-update-9vmzj" Mar 12 21:25:08.460836 master-0 kubenswrapper[31456]: I0312 21:25:08.460245 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a795afb6-d746-400b-82ef-35cca567821f-combined-ca-bundle\") pod \"a795afb6-d746-400b-82ef-35cca567821f\" (UID: \"a795afb6-d746-400b-82ef-35cca567821f\") " Mar 12 21:25:08.465655 master-0 kubenswrapper[31456]: I0312 21:25:08.465606 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9nc9\" (UniqueName: \"kubernetes.io/projected/3690da76-6dfc-4f32-bb7f-8fb37175b867-kube-api-access-p9nc9\") pod \"placement-db-create-74dr9\" (UID: \"3690da76-6dfc-4f32-bb7f-8fb37175b867\") " pod="openstack/placement-db-create-74dr9" Mar 12 21:25:08.465980 master-0 kubenswrapper[31456]: I0312 21:25:08.465962 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3690da76-6dfc-4f32-bb7f-8fb37175b867-operator-scripts\") pod \"placement-db-create-74dr9\" (UID: \"3690da76-6dfc-4f32-bb7f-8fb37175b867\") " pod="openstack/placement-db-create-74dr9" Mar 12 21:25:08.466217 master-0 kubenswrapper[31456]: I0312 21:25:08.466202 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbkzc\" (UniqueName: \"kubernetes.io/projected/e5327b01-7167-4072-967c-ea43996b1126-kube-api-access-dbkzc\") pod \"placement-6a5a-account-create-update-4w5hn\" (UID: \"e5327b01-7167-4072-967c-ea43996b1126\") " pod="openstack/placement-6a5a-account-create-update-4w5hn" Mar 12 21:25:08.466372 master-0 kubenswrapper[31456]: I0312 21:25:08.466359 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/e5327b01-7167-4072-967c-ea43996b1126-operator-scripts\") pod \"placement-6a5a-account-create-update-4w5hn\" (UID: \"e5327b01-7167-4072-967c-ea43996b1126\") " pod="openstack/placement-6a5a-account-create-update-4w5hn" Mar 12 21:25:08.475738 master-0 kubenswrapper[31456]: I0312 21:25:08.475666 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-lp9x4"] Mar 12 21:25:08.476162 master-0 kubenswrapper[31456]: I0312 21:25:08.476118 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a795afb6-d746-400b-82ef-35cca567821f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a795afb6-d746-400b-82ef-35cca567821f" (UID: "a795afb6-d746-400b-82ef-35cca567821f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:25:08.483572 master-0 kubenswrapper[31456]: I0312 21:25:08.483524 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-lp9x4" Mar 12 21:25:08.494744 master-0 kubenswrapper[31456]: I0312 21:25:08.494707 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-lp9x4"] Mar 12 21:25:08.568738 master-0 kubenswrapper[31456]: I0312 21:25:08.568691 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9nc9\" (UniqueName: \"kubernetes.io/projected/3690da76-6dfc-4f32-bb7f-8fb37175b867-kube-api-access-p9nc9\") pod \"placement-db-create-74dr9\" (UID: \"3690da76-6dfc-4f32-bb7f-8fb37175b867\") " pod="openstack/placement-db-create-74dr9" Mar 12 21:25:08.568888 master-0 kubenswrapper[31456]: I0312 21:25:08.568756 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3690da76-6dfc-4f32-bb7f-8fb37175b867-operator-scripts\") pod \"placement-db-create-74dr9\" (UID: \"3690da76-6dfc-4f32-bb7f-8fb37175b867\") " pod="openstack/placement-db-create-74dr9" Mar 12 21:25:08.568888 master-0 kubenswrapper[31456]: I0312 21:25:08.568874 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbkzc\" (UniqueName: \"kubernetes.io/projected/e5327b01-7167-4072-967c-ea43996b1126-kube-api-access-dbkzc\") pod \"placement-6a5a-account-create-update-4w5hn\" (UID: \"e5327b01-7167-4072-967c-ea43996b1126\") " pod="openstack/placement-6a5a-account-create-update-4w5hn" Mar 12 21:25:08.568959 master-0 kubenswrapper[31456]: I0312 21:25:08.568922 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5327b01-7167-4072-967c-ea43996b1126-operator-scripts\") pod \"placement-6a5a-account-create-update-4w5hn\" (UID: \"e5327b01-7167-4072-967c-ea43996b1126\") " pod="openstack/placement-6a5a-account-create-update-4w5hn" Mar 12 21:25:08.568959 master-0 kubenswrapper[31456]: I0312 
21:25:08.568949 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqg84\" (UniqueName: \"kubernetes.io/projected/d573798d-d096-47f4-96c7-8b7583a447d9-kube-api-access-gqg84\") pod \"glance-db-create-lp9x4\" (UID: \"d573798d-d096-47f4-96c7-8b7583a447d9\") " pod="openstack/glance-db-create-lp9x4" Mar 12 21:25:08.569025 master-0 kubenswrapper[31456]: I0312 21:25:08.568981 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d573798d-d096-47f4-96c7-8b7583a447d9-operator-scripts\") pod \"glance-db-create-lp9x4\" (UID: \"d573798d-d096-47f4-96c7-8b7583a447d9\") " pod="openstack/glance-db-create-lp9x4" Mar 12 21:25:08.569076 master-0 kubenswrapper[31456]: I0312 21:25:08.569053 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a795afb6-d746-400b-82ef-35cca567821f-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:08.569857 master-0 kubenswrapper[31456]: I0312 21:25:08.569829 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3690da76-6dfc-4f32-bb7f-8fb37175b867-operator-scripts\") pod \"placement-db-create-74dr9\" (UID: \"3690da76-6dfc-4f32-bb7f-8fb37175b867\") " pod="openstack/placement-db-create-74dr9" Mar 12 21:25:08.570390 master-0 kubenswrapper[31456]: I0312 21:25:08.570338 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5327b01-7167-4072-967c-ea43996b1126-operator-scripts\") pod \"placement-6a5a-account-create-update-4w5hn\" (UID: \"e5327b01-7167-4072-967c-ea43996b1126\") " pod="openstack/placement-6a5a-account-create-update-4w5hn" Mar 12 21:25:08.586049 master-0 kubenswrapper[31456]: I0312 21:25:08.586002 31456 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dbkzc\" (UniqueName: \"kubernetes.io/projected/e5327b01-7167-4072-967c-ea43996b1126-kube-api-access-dbkzc\") pod \"placement-6a5a-account-create-update-4w5hn\" (UID: \"e5327b01-7167-4072-967c-ea43996b1126\") " pod="openstack/placement-6a5a-account-create-update-4w5hn" Mar 12 21:25:08.609954 master-0 kubenswrapper[31456]: I0312 21:25:08.609905 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9nc9\" (UniqueName: \"kubernetes.io/projected/3690da76-6dfc-4f32-bb7f-8fb37175b867-kube-api-access-p9nc9\") pod \"placement-db-create-74dr9\" (UID: \"3690da76-6dfc-4f32-bb7f-8fb37175b867\") " pod="openstack/placement-db-create-74dr9" Mar 12 21:25:08.632413 master-0 kubenswrapper[31456]: I0312 21:25:08.632280 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6a5a-account-create-update-4w5hn" Mar 12 21:25:08.640897 master-0 kubenswrapper[31456]: I0312 21:25:08.640837 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-74dr9" Mar 12 21:25:08.694073 master-0 kubenswrapper[31456]: I0312 21:25:08.693991 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqg84\" (UniqueName: \"kubernetes.io/projected/d573798d-d096-47f4-96c7-8b7583a447d9-kube-api-access-gqg84\") pod \"glance-db-create-lp9x4\" (UID: \"d573798d-d096-47f4-96c7-8b7583a447d9\") " pod="openstack/glance-db-create-lp9x4" Mar 12 21:25:08.694278 master-0 kubenswrapper[31456]: I0312 21:25:08.694100 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d573798d-d096-47f4-96c7-8b7583a447d9-operator-scripts\") pod \"glance-db-create-lp9x4\" (UID: \"d573798d-d096-47f4-96c7-8b7583a447d9\") " pod="openstack/glance-db-create-lp9x4" Mar 12 21:25:08.695093 master-0 kubenswrapper[31456]: I0312 21:25:08.695050 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d573798d-d096-47f4-96c7-8b7583a447d9-operator-scripts\") pod \"glance-db-create-lp9x4\" (UID: \"d573798d-d096-47f4-96c7-8b7583a447d9\") " pod="openstack/glance-db-create-lp9x4" Mar 12 21:25:08.715169 master-0 kubenswrapper[31456]: I0312 21:25:08.714192 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqg84\" (UniqueName: \"kubernetes.io/projected/d573798d-d096-47f4-96c7-8b7583a447d9-kube-api-access-gqg84\") pod \"glance-db-create-lp9x4\" (UID: \"d573798d-d096-47f4-96c7-8b7583a447d9\") " pod="openstack/glance-db-create-lp9x4" Mar 12 21:25:08.754226 master-0 kubenswrapper[31456]: I0312 21:25:08.754162 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-2da3-account-create-update-kpcrn"] Mar 12 21:25:08.761592 master-0 kubenswrapper[31456]: I0312 21:25:08.761540 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-2da3-account-create-update-kpcrn" Mar 12 21:25:08.764986 master-0 kubenswrapper[31456]: I0312 21:25:08.764927 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Mar 12 21:25:08.770482 master-0 kubenswrapper[31456]: I0312 21:25:08.770421 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2da3-account-create-update-kpcrn"] Mar 12 21:25:08.798105 master-0 kubenswrapper[31456]: I0312 21:25:08.797080 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7478f62f-dba4-43cb-9a5b-556b235bb13f-etc-swift\") pod \"swift-storage-0\" (UID: \"7478f62f-dba4-43cb-9a5b-556b235bb13f\") " pod="openstack/swift-storage-0" Mar 12 21:25:08.802376 master-0 kubenswrapper[31456]: I0312 21:25:08.802218 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7478f62f-dba4-43cb-9a5b-556b235bb13f-etc-swift\") pod \"swift-storage-0\" (UID: \"7478f62f-dba4-43cb-9a5b-556b235bb13f\") " pod="openstack/swift-storage-0" Mar 12 21:25:08.875306 master-0 kubenswrapper[31456]: I0312 21:25:08.870315 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-lp9x4" Mar 12 21:25:08.889788 master-0 kubenswrapper[31456]: I0312 21:25:08.889738 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 12 21:25:08.899303 master-0 kubenswrapper[31456]: I0312 21:25:08.899280 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gnr7\" (UniqueName: \"kubernetes.io/projected/5f1d0bf8-4671-47dd-8f37-0c8b9136fdac-kube-api-access-5gnr7\") pod \"glance-2da3-account-create-update-kpcrn\" (UID: \"5f1d0bf8-4671-47dd-8f37-0c8b9136fdac\") " pod="openstack/glance-2da3-account-create-update-kpcrn" Mar 12 21:25:08.899475 master-0 kubenswrapper[31456]: I0312 21:25:08.899456 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f1d0bf8-4671-47dd-8f37-0c8b9136fdac-operator-scripts\") pod \"glance-2da3-account-create-update-kpcrn\" (UID: \"5f1d0bf8-4671-47dd-8f37-0c8b9136fdac\") " pod="openstack/glance-2da3-account-create-update-kpcrn" Mar 12 21:25:09.003683 master-0 kubenswrapper[31456]: I0312 21:25:09.003436 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gnr7\" (UniqueName: \"kubernetes.io/projected/5f1d0bf8-4671-47dd-8f37-0c8b9136fdac-kube-api-access-5gnr7\") pod \"glance-2da3-account-create-update-kpcrn\" (UID: \"5f1d0bf8-4671-47dd-8f37-0c8b9136fdac\") " pod="openstack/glance-2da3-account-create-update-kpcrn" Mar 12 21:25:09.003683 master-0 kubenswrapper[31456]: I0312 21:25:09.003480 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f1d0bf8-4671-47dd-8f37-0c8b9136fdac-operator-scripts\") pod \"glance-2da3-account-create-update-kpcrn\" (UID: \"5f1d0bf8-4671-47dd-8f37-0c8b9136fdac\") " pod="openstack/glance-2da3-account-create-update-kpcrn" Mar 12 21:25:09.004349 master-0 kubenswrapper[31456]: I0312 21:25:09.004305 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f1d0bf8-4671-47dd-8f37-0c8b9136fdac-operator-scripts\") pod \"glance-2da3-account-create-update-kpcrn\" (UID: \"5f1d0bf8-4671-47dd-8f37-0c8b9136fdac\") " pod="openstack/glance-2da3-account-create-update-kpcrn" Mar 12 21:25:09.037852 master-0 kubenswrapper[31456]: I0312 21:25:09.037578 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-8xlhq"] Mar 12 21:25:09.061538 master-0 kubenswrapper[31456]: W0312 21:25:09.061079 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod622a9f92_1155_4b36_899c_965b404e7137.slice/crio-f9e6298a64ac835333bc9175e5c57dc1d9189c347b72bf6009cd1810964b1f40 WatchSource:0}: Error finding container f9e6298a64ac835333bc9175e5c57dc1d9189c347b72bf6009cd1810964b1f40: Status 404 returned error can't find the container with id f9e6298a64ac835333bc9175e5c57dc1d9189c347b72bf6009cd1810964b1f40 Mar 12 21:25:09.061538 master-0 kubenswrapper[31456]: I0312 21:25:09.061481 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gnr7\" (UniqueName: \"kubernetes.io/projected/5f1d0bf8-4671-47dd-8f37-0c8b9136fdac-kube-api-access-5gnr7\") pod \"glance-2da3-account-create-update-kpcrn\" (UID: \"5f1d0bf8-4671-47dd-8f37-0c8b9136fdac\") " pod="openstack/glance-2da3-account-create-update-kpcrn" Mar 12 21:25:09.102418 master-0 kubenswrapper[31456]: I0312 21:25:09.102358 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-2da3-account-create-update-kpcrn" Mar 12 21:25:09.129915 master-0 kubenswrapper[31456]: I0312 21:25:09.122152 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-98d2-account-create-update-9vmzj"] Mar 12 21:25:09.478993 master-0 kubenswrapper[31456]: I0312 21:25:09.478914 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-74dr9"] Mar 12 21:25:09.490934 master-0 kubenswrapper[31456]: W0312 21:25:09.486284 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd573798d_d096_47f4_96c7_8b7583a447d9.slice/crio-e5510a83a726e52bd8623645172adff36d5929a367ab7b97be49ec434f2ce885 WatchSource:0}: Error finding container e5510a83a726e52bd8623645172adff36d5929a367ab7b97be49ec434f2ce885: Status 404 returned error can't find the container with id e5510a83a726e52bd8623645172adff36d5929a367ab7b97be49ec434f2ce885 Mar 12 21:25:09.624632 master-0 kubenswrapper[31456]: I0312 21:25:09.624579 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6a5a-account-create-update-4w5hn"] Mar 12 21:25:09.654719 master-0 kubenswrapper[31456]: I0312 21:25:09.654618 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-lp9x4"] Mar 12 21:25:09.664507 master-0 kubenswrapper[31456]: I0312 21:25:09.664465 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 12 21:25:09.762201 master-0 kubenswrapper[31456]: W0312 21:25:09.761363 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f1d0bf8_4671_47dd_8f37_0c8b9136fdac.slice/crio-2fa218054e8097d7d054dad705927fcac535aca34576110eff199226a408715b WatchSource:0}: Error finding container 2fa218054e8097d7d054dad705927fcac535aca34576110eff199226a408715b: Status 404 returned error can't find 
the container with id 2fa218054e8097d7d054dad705927fcac535aca34576110eff199226a408715b Mar 12 21:25:09.766575 master-0 kubenswrapper[31456]: I0312 21:25:09.766540 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2da3-account-create-update-kpcrn"] Mar 12 21:25:09.934439 master-0 kubenswrapper[31456]: I0312 21:25:09.934371 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-74dr9" event={"ID":"3690da76-6dfc-4f32-bb7f-8fb37175b867","Type":"ContainerStarted","Data":"44919fa0849f562bbc65d8f2165b6b39f171185ef2afcafab3eecc68dd8c7946"} Mar 12 21:25:09.936944 master-0 kubenswrapper[31456]: I0312 21:25:09.936897 31456 generic.go:334] "Generic (PLEG): container finished" podID="622a9f92-1155-4b36-899c-965b404e7137" containerID="0fd2349cbdd4661e3a761e69ecf1f97bc6949b388c5278129803d980b30d0aaf" exitCode=0 Mar 12 21:25:09.937080 master-0 kubenswrapper[31456]: I0312 21:25:09.936997 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8xlhq" event={"ID":"622a9f92-1155-4b36-899c-965b404e7137","Type":"ContainerDied","Data":"0fd2349cbdd4661e3a761e69ecf1f97bc6949b388c5278129803d980b30d0aaf"} Mar 12 21:25:09.937080 master-0 kubenswrapper[31456]: I0312 21:25:09.937031 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8xlhq" event={"ID":"622a9f92-1155-4b36-899c-965b404e7137","Type":"ContainerStarted","Data":"f9e6298a64ac835333bc9175e5c57dc1d9189c347b72bf6009cd1810964b1f40"} Mar 12 21:25:09.939886 master-0 kubenswrapper[31456]: I0312 21:25:09.939780 31456 generic.go:334] "Generic (PLEG): container finished" podID="345e92ee-81d9-4de3-9515-f901d1a3d153" containerID="436771826eb4c47061b96fe6ffe53f5f6aff148cb6dd111eeac742d88f7330d0" exitCode=0 Mar 12 21:25:09.940229 master-0 kubenswrapper[31456]: I0312 21:25:09.940004 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-98d2-account-create-update-9vmzj" 
event={"ID":"345e92ee-81d9-4de3-9515-f901d1a3d153","Type":"ContainerDied","Data":"436771826eb4c47061b96fe6ffe53f5f6aff148cb6dd111eeac742d88f7330d0"} Mar 12 21:25:09.940470 master-0 kubenswrapper[31456]: I0312 21:25:09.940407 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-98d2-account-create-update-9vmzj" event={"ID":"345e92ee-81d9-4de3-9515-f901d1a3d153","Type":"ContainerStarted","Data":"093940ca77adf711f8b6ecd5316ff29f907b79cfbae60f1c1d8962d41ff1e047"} Mar 12 21:25:09.941619 master-0 kubenswrapper[31456]: I0312 21:25:09.941576 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7478f62f-dba4-43cb-9a5b-556b235bb13f","Type":"ContainerStarted","Data":"66a172a4e0ba78e3b6b2895d9d0f5cc1d41ee67685b524a481104673352bf670"} Mar 12 21:25:09.945042 master-0 kubenswrapper[31456]: I0312 21:25:09.944910 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2da3-account-create-update-kpcrn" event={"ID":"5f1d0bf8-4671-47dd-8f37-0c8b9136fdac","Type":"ContainerStarted","Data":"2a1d29e625a455a849f5f44af2128ef48040409183e58affc5f561b04d932fbe"} Mar 12 21:25:09.945042 master-0 kubenswrapper[31456]: I0312 21:25:09.944958 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2da3-account-create-update-kpcrn" event={"ID":"5f1d0bf8-4671-47dd-8f37-0c8b9136fdac","Type":"ContainerStarted","Data":"2fa218054e8097d7d054dad705927fcac535aca34576110eff199226a408715b"} Mar 12 21:25:09.970838 master-0 kubenswrapper[31456]: I0312 21:25:09.948014 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6a5a-account-create-update-4w5hn" event={"ID":"e5327b01-7167-4072-967c-ea43996b1126","Type":"ContainerStarted","Data":"289f6c22c44f3bfb607d4ae42003a648f00496fd7d0c57f88273cd998b7aabbf"} Mar 12 21:25:09.970838 master-0 kubenswrapper[31456]: I0312 21:25:09.949471 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-db-create-lp9x4" event={"ID":"d573798d-d096-47f4-96c7-8b7583a447d9","Type":"ContainerStarted","Data":"e5510a83a726e52bd8623645172adff36d5929a367ab7b97be49ec434f2ce885"} Mar 12 21:25:09.993803 master-0 kubenswrapper[31456]: I0312 21:25:09.993614 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-2da3-account-create-update-kpcrn" podStartSLOduration=1.9935832740000001 podStartE2EDuration="1.993583274s" podCreationTimestamp="2026-03-12 21:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:25:09.973543199 +0000 UTC m=+971.048148527" watchObservedRunningTime="2026-03-12 21:25:09.993583274 +0000 UTC m=+971.068188602" Mar 12 21:25:10.489079 master-0 kubenswrapper[31456]: I0312 21:25:10.486864 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-gwk6j"] Mar 12 21:25:10.500252 master-0 kubenswrapper[31456]: I0312 21:25:10.498413 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-gwk6j"] Mar 12 21:25:10.968239 master-0 kubenswrapper[31456]: I0312 21:25:10.968201 31456 generic.go:334] "Generic (PLEG): container finished" podID="5f1d0bf8-4671-47dd-8f37-0c8b9136fdac" containerID="2a1d29e625a455a849f5f44af2128ef48040409183e58affc5f561b04d932fbe" exitCode=0 Mar 12 21:25:10.968690 master-0 kubenswrapper[31456]: I0312 21:25:10.968286 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2da3-account-create-update-kpcrn" event={"ID":"5f1d0bf8-4671-47dd-8f37-0c8b9136fdac","Type":"ContainerDied","Data":"2a1d29e625a455a849f5f44af2128ef48040409183e58affc5f561b04d932fbe"} Mar 12 21:25:10.972253 master-0 kubenswrapper[31456]: I0312 21:25:10.972178 31456 generic.go:334] "Generic (PLEG): container finished" podID="e5327b01-7167-4072-967c-ea43996b1126" 
containerID="3cb8519dfb833b88250e694e34022a9d89b58497447e2f2b8b5af44503d2211d" exitCode=0 Mar 12 21:25:10.972312 master-0 kubenswrapper[31456]: I0312 21:25:10.972267 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6a5a-account-create-update-4w5hn" event={"ID":"e5327b01-7167-4072-967c-ea43996b1126","Type":"ContainerDied","Data":"3cb8519dfb833b88250e694e34022a9d89b58497447e2f2b8b5af44503d2211d"} Mar 12 21:25:10.974391 master-0 kubenswrapper[31456]: I0312 21:25:10.974372 31456 generic.go:334] "Generic (PLEG): container finished" podID="d573798d-d096-47f4-96c7-8b7583a447d9" containerID="fa0f1c5c5a003d8e76d2299441db75e0fb7c3826893c3b310ec3fd7a7d0b6c58" exitCode=0 Mar 12 21:25:10.974500 master-0 kubenswrapper[31456]: I0312 21:25:10.974482 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-lp9x4" event={"ID":"d573798d-d096-47f4-96c7-8b7583a447d9","Type":"ContainerDied","Data":"fa0f1c5c5a003d8e76d2299441db75e0fb7c3826893c3b310ec3fd7a7d0b6c58"} Mar 12 21:25:10.976035 master-0 kubenswrapper[31456]: I0312 21:25:10.976019 31456 generic.go:334] "Generic (PLEG): container finished" podID="3690da76-6dfc-4f32-bb7f-8fb37175b867" containerID="4bcbd62e729b9826a2f3cab447b9ce5bd8f4cd03d061634e742175bfe5cd8361" exitCode=0 Mar 12 21:25:10.976913 master-0 kubenswrapper[31456]: I0312 21:25:10.976257 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-74dr9" event={"ID":"3690da76-6dfc-4f32-bb7f-8fb37175b867","Type":"ContainerDied","Data":"4bcbd62e729b9826a2f3cab447b9ce5bd8f4cd03d061634e742175bfe5cd8361"} Mar 12 21:25:11.195054 master-0 kubenswrapper[31456]: I0312 21:25:11.194736 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="007bf3d3-2855-42e4-b137-0eaef917bf0b" path="/var/lib/kubelet/pods/007bf3d3-2855-42e4-b137-0eaef917bf0b/volumes" Mar 12 21:25:11.459841 master-0 kubenswrapper[31456]: I0312 21:25:11.459790 31456 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/keystone-db-create-8xlhq" Mar 12 21:25:11.557850 master-0 kubenswrapper[31456]: I0312 21:25:11.557758 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnk9g\" (UniqueName: \"kubernetes.io/projected/622a9f92-1155-4b36-899c-965b404e7137-kube-api-access-vnk9g\") pod \"622a9f92-1155-4b36-899c-965b404e7137\" (UID: \"622a9f92-1155-4b36-899c-965b404e7137\") " Mar 12 21:25:11.558043 master-0 kubenswrapper[31456]: I0312 21:25:11.558013 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/622a9f92-1155-4b36-899c-965b404e7137-operator-scripts\") pod \"622a9f92-1155-4b36-899c-965b404e7137\" (UID: \"622a9f92-1155-4b36-899c-965b404e7137\") " Mar 12 21:25:11.559002 master-0 kubenswrapper[31456]: I0312 21:25:11.558964 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/622a9f92-1155-4b36-899c-965b404e7137-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "622a9f92-1155-4b36-899c-965b404e7137" (UID: "622a9f92-1155-4b36-899c-965b404e7137"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:25:11.561146 master-0 kubenswrapper[31456]: I0312 21:25:11.561109 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/622a9f92-1155-4b36-899c-965b404e7137-kube-api-access-vnk9g" (OuterVolumeSpecName: "kube-api-access-vnk9g") pod "622a9f92-1155-4b36-899c-965b404e7137" (UID: "622a9f92-1155-4b36-899c-965b404e7137"). InnerVolumeSpecName "kube-api-access-vnk9g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:25:11.660791 master-0 kubenswrapper[31456]: I0312 21:25:11.660630 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnk9g\" (UniqueName: \"kubernetes.io/projected/622a9f92-1155-4b36-899c-965b404e7137-kube-api-access-vnk9g\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:11.660791 master-0 kubenswrapper[31456]: I0312 21:25:11.660673 31456 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/622a9f92-1155-4b36-899c-965b404e7137-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:11.733211 master-0 kubenswrapper[31456]: I0312 21:25:11.731487 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-98d2-account-create-update-9vmzj" Mar 12 21:25:11.862721 master-0 kubenswrapper[31456]: I0312 21:25:11.862505 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/345e92ee-81d9-4de3-9515-f901d1a3d153-operator-scripts\") pod \"345e92ee-81d9-4de3-9515-f901d1a3d153\" (UID: \"345e92ee-81d9-4de3-9515-f901d1a3d153\") " Mar 12 21:25:11.863066 master-0 kubenswrapper[31456]: I0312 21:25:11.862794 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b42p\" (UniqueName: \"kubernetes.io/projected/345e92ee-81d9-4de3-9515-f901d1a3d153-kube-api-access-9b42p\") pod \"345e92ee-81d9-4de3-9515-f901d1a3d153\" (UID: \"345e92ee-81d9-4de3-9515-f901d1a3d153\") " Mar 12 21:25:11.863066 master-0 kubenswrapper[31456]: I0312 21:25:11.863014 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/345e92ee-81d9-4de3-9515-f901d1a3d153-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "345e92ee-81d9-4de3-9515-f901d1a3d153" (UID: "345e92ee-81d9-4de3-9515-f901d1a3d153"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:25:11.863717 master-0 kubenswrapper[31456]: I0312 21:25:11.863656 31456 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/345e92ee-81d9-4de3-9515-f901d1a3d153-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:11.865577 master-0 kubenswrapper[31456]: I0312 21:25:11.865502 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/345e92ee-81d9-4de3-9515-f901d1a3d153-kube-api-access-9b42p" (OuterVolumeSpecName: "kube-api-access-9b42p") pod "345e92ee-81d9-4de3-9515-f901d1a3d153" (UID: "345e92ee-81d9-4de3-9515-f901d1a3d153"). InnerVolumeSpecName "kube-api-access-9b42p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:25:11.966212 master-0 kubenswrapper[31456]: I0312 21:25:11.966133 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b42p\" (UniqueName: \"kubernetes.io/projected/345e92ee-81d9-4de3-9515-f901d1a3d153-kube-api-access-9b42p\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:11.993896 master-0 kubenswrapper[31456]: I0312 21:25:11.993783 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-8xlhq" Mar 12 21:25:11.994692 master-0 kubenswrapper[31456]: I0312 21:25:11.993889 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8xlhq" event={"ID":"622a9f92-1155-4b36-899c-965b404e7137","Type":"ContainerDied","Data":"f9e6298a64ac835333bc9175e5c57dc1d9189c347b72bf6009cd1810964b1f40"} Mar 12 21:25:11.994692 master-0 kubenswrapper[31456]: I0312 21:25:11.993963 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9e6298a64ac835333bc9175e5c57dc1d9189c347b72bf6009cd1810964b1f40" Mar 12 21:25:11.999881 master-0 kubenswrapper[31456]: I0312 21:25:11.999817 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-98d2-account-create-update-9vmzj" event={"ID":"345e92ee-81d9-4de3-9515-f901d1a3d153","Type":"ContainerDied","Data":"093940ca77adf711f8b6ecd5316ff29f907b79cfbae60f1c1d8962d41ff1e047"} Mar 12 21:25:11.999881 master-0 kubenswrapper[31456]: I0312 21:25:11.999867 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="093940ca77adf711f8b6ecd5316ff29f907b79cfbae60f1c1d8962d41ff1e047" Mar 12 21:25:12.000103 master-0 kubenswrapper[31456]: I0312 21:25:11.999923 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-98d2-account-create-update-9vmzj" Mar 12 21:25:12.007227 master-0 kubenswrapper[31456]: I0312 21:25:12.007137 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7478f62f-dba4-43cb-9a5b-556b235bb13f","Type":"ContainerStarted","Data":"15e7d9b059f3d1dce4daa3aeac63c1f70b6d4d54a49a34a4ca431c4046f147eb"} Mar 12 21:25:12.007227 master-0 kubenswrapper[31456]: I0312 21:25:12.007210 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7478f62f-dba4-43cb-9a5b-556b235bb13f","Type":"ContainerStarted","Data":"7982d0fc10f8e8c54e8b7a5bc05c66f7284f668d36867ea0bd51026bc8770637"} Mar 12 21:25:12.007227 master-0 kubenswrapper[31456]: I0312 21:25:12.007221 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7478f62f-dba4-43cb-9a5b-556b235bb13f","Type":"ContainerStarted","Data":"c32d52728ea434db57698f74b3b476c86c498d1ec79f608b6845fa5c8eacaa7c"} Mar 12 21:25:12.007227 master-0 kubenswrapper[31456]: I0312 21:25:12.007230 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7478f62f-dba4-43cb-9a5b-556b235bb13f","Type":"ContainerStarted","Data":"3b5eed9f4c21b48535e487dbb4d937fd374fe40e23c3ccda8855f61841bf808e"} Mar 12 21:25:12.641854 master-0 kubenswrapper[31456]: I0312 21:25:12.641792 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-2da3-account-create-update-kpcrn" Mar 12 21:25:12.783220 master-0 kubenswrapper[31456]: I0312 21:25:12.782706 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f1d0bf8-4671-47dd-8f37-0c8b9136fdac-operator-scripts\") pod \"5f1d0bf8-4671-47dd-8f37-0c8b9136fdac\" (UID: \"5f1d0bf8-4671-47dd-8f37-0c8b9136fdac\") " Mar 12 21:25:12.783220 master-0 kubenswrapper[31456]: I0312 21:25:12.782774 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gnr7\" (UniqueName: \"kubernetes.io/projected/5f1d0bf8-4671-47dd-8f37-0c8b9136fdac-kube-api-access-5gnr7\") pod \"5f1d0bf8-4671-47dd-8f37-0c8b9136fdac\" (UID: \"5f1d0bf8-4671-47dd-8f37-0c8b9136fdac\") " Mar 12 21:25:12.814904 master-0 kubenswrapper[31456]: I0312 21:25:12.808348 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f1d0bf8-4671-47dd-8f37-0c8b9136fdac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5f1d0bf8-4671-47dd-8f37-0c8b9136fdac" (UID: "5f1d0bf8-4671-47dd-8f37-0c8b9136fdac"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:25:12.867386 master-0 kubenswrapper[31456]: I0312 21:25:12.860848 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f1d0bf8-4671-47dd-8f37-0c8b9136fdac-kube-api-access-5gnr7" (OuterVolumeSpecName: "kube-api-access-5gnr7") pod "5f1d0bf8-4671-47dd-8f37-0c8b9136fdac" (UID: "5f1d0bf8-4671-47dd-8f37-0c8b9136fdac"). InnerVolumeSpecName "kube-api-access-5gnr7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:25:12.895839 master-0 kubenswrapper[31456]: I0312 21:25:12.892156 31456 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f1d0bf8-4671-47dd-8f37-0c8b9136fdac-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:12.895839 master-0 kubenswrapper[31456]: I0312 21:25:12.892203 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gnr7\" (UniqueName: \"kubernetes.io/projected/5f1d0bf8-4671-47dd-8f37-0c8b9136fdac-kube-api-access-5gnr7\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:13.020948 master-0 kubenswrapper[31456]: I0312 21:25:13.020765 31456 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-b7rpf" podUID="2fb848ef-b2bf-429a-a01f-53240dc3bd0a" containerName="ovn-controller" probeResult="failure" output=< Mar 12 21:25:13.020948 master-0 kubenswrapper[31456]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 12 21:25:13.020948 master-0 kubenswrapper[31456]: > Mar 12 21:25:13.022212 master-0 kubenswrapper[31456]: I0312 21:25:13.022144 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2da3-account-create-update-kpcrn" event={"ID":"5f1d0bf8-4671-47dd-8f37-0c8b9136fdac","Type":"ContainerDied","Data":"2fa218054e8097d7d054dad705927fcac535aca34576110eff199226a408715b"} Mar 12 21:25:13.022212 master-0 kubenswrapper[31456]: I0312 21:25:13.022211 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fa218054e8097d7d054dad705927fcac535aca34576110eff199226a408715b" Mar 12 21:25:13.022621 master-0 kubenswrapper[31456]: I0312 21:25:13.022323 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-2da3-account-create-update-kpcrn" Mar 12 21:25:13.179617 master-0 kubenswrapper[31456]: I0312 21:25:13.179566 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-lp9x4" Mar 12 21:25:13.193095 master-0 kubenswrapper[31456]: I0312 21:25:13.189779 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-74dr9" Mar 12 21:25:13.212604 master-0 kubenswrapper[31456]: I0312 21:25:13.212503 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6a5a-account-create-update-4w5hn" Mar 12 21:25:13.311542 master-0 kubenswrapper[31456]: I0312 21:25:13.311495 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5327b01-7167-4072-967c-ea43996b1126-operator-scripts\") pod \"e5327b01-7167-4072-967c-ea43996b1126\" (UID: \"e5327b01-7167-4072-967c-ea43996b1126\") " Mar 12 21:25:13.311698 master-0 kubenswrapper[31456]: I0312 21:25:13.311672 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3690da76-6dfc-4f32-bb7f-8fb37175b867-operator-scripts\") pod \"3690da76-6dfc-4f32-bb7f-8fb37175b867\" (UID: \"3690da76-6dfc-4f32-bb7f-8fb37175b867\") " Mar 12 21:25:13.311763 master-0 kubenswrapper[31456]: I0312 21:25:13.311745 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d573798d-d096-47f4-96c7-8b7583a447d9-operator-scripts\") pod \"d573798d-d096-47f4-96c7-8b7583a447d9\" (UID: \"d573798d-d096-47f4-96c7-8b7583a447d9\") " Mar 12 21:25:13.311800 master-0 kubenswrapper[31456]: I0312 21:25:13.311780 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbkzc\" 
(UniqueName: \"kubernetes.io/projected/e5327b01-7167-4072-967c-ea43996b1126-kube-api-access-dbkzc\") pod \"e5327b01-7167-4072-967c-ea43996b1126\" (UID: \"e5327b01-7167-4072-967c-ea43996b1126\") " Mar 12 21:25:13.311884 master-0 kubenswrapper[31456]: I0312 21:25:13.311820 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqg84\" (UniqueName: \"kubernetes.io/projected/d573798d-d096-47f4-96c7-8b7583a447d9-kube-api-access-gqg84\") pod \"d573798d-d096-47f4-96c7-8b7583a447d9\" (UID: \"d573798d-d096-47f4-96c7-8b7583a447d9\") " Mar 12 21:25:13.311884 master-0 kubenswrapper[31456]: I0312 21:25:13.311846 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9nc9\" (UniqueName: \"kubernetes.io/projected/3690da76-6dfc-4f32-bb7f-8fb37175b867-kube-api-access-p9nc9\") pod \"3690da76-6dfc-4f32-bb7f-8fb37175b867\" (UID: \"3690da76-6dfc-4f32-bb7f-8fb37175b867\") " Mar 12 21:25:13.313005 master-0 kubenswrapper[31456]: I0312 21:25:13.312971 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5327b01-7167-4072-967c-ea43996b1126-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e5327b01-7167-4072-967c-ea43996b1126" (UID: "e5327b01-7167-4072-967c-ea43996b1126"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:25:13.313340 master-0 kubenswrapper[31456]: I0312 21:25:13.313310 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3690da76-6dfc-4f32-bb7f-8fb37175b867-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3690da76-6dfc-4f32-bb7f-8fb37175b867" (UID: "3690da76-6dfc-4f32-bb7f-8fb37175b867"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:25:13.314609 master-0 kubenswrapper[31456]: I0312 21:25:13.313677 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d573798d-d096-47f4-96c7-8b7583a447d9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d573798d-d096-47f4-96c7-8b7583a447d9" (UID: "d573798d-d096-47f4-96c7-8b7583a447d9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:25:13.316369 master-0 kubenswrapper[31456]: I0312 21:25:13.316326 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5327b01-7167-4072-967c-ea43996b1126-kube-api-access-dbkzc" (OuterVolumeSpecName: "kube-api-access-dbkzc") pod "e5327b01-7167-4072-967c-ea43996b1126" (UID: "e5327b01-7167-4072-967c-ea43996b1126"). InnerVolumeSpecName "kube-api-access-dbkzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:25:13.316713 master-0 kubenswrapper[31456]: I0312 21:25:13.316678 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3690da76-6dfc-4f32-bb7f-8fb37175b867-kube-api-access-p9nc9" (OuterVolumeSpecName: "kube-api-access-p9nc9") pod "3690da76-6dfc-4f32-bb7f-8fb37175b867" (UID: "3690da76-6dfc-4f32-bb7f-8fb37175b867"). InnerVolumeSpecName "kube-api-access-p9nc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:25:13.317160 master-0 kubenswrapper[31456]: I0312 21:25:13.317107 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d573798d-d096-47f4-96c7-8b7583a447d9-kube-api-access-gqg84" (OuterVolumeSpecName: "kube-api-access-gqg84") pod "d573798d-d096-47f4-96c7-8b7583a447d9" (UID: "d573798d-d096-47f4-96c7-8b7583a447d9"). InnerVolumeSpecName "kube-api-access-gqg84". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:25:13.414234 master-0 kubenswrapper[31456]: I0312 21:25:13.414178 31456 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5327b01-7167-4072-967c-ea43996b1126-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:13.414234 master-0 kubenswrapper[31456]: I0312 21:25:13.414218 31456 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3690da76-6dfc-4f32-bb7f-8fb37175b867-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:13.414234 master-0 kubenswrapper[31456]: I0312 21:25:13.414231 31456 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d573798d-d096-47f4-96c7-8b7583a447d9-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:13.414234 master-0 kubenswrapper[31456]: I0312 21:25:13.414242 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbkzc\" (UniqueName: \"kubernetes.io/projected/e5327b01-7167-4072-967c-ea43996b1126-kube-api-access-dbkzc\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:13.414484 master-0 kubenswrapper[31456]: I0312 21:25:13.414253 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqg84\" (UniqueName: \"kubernetes.io/projected/d573798d-d096-47f4-96c7-8b7583a447d9-kube-api-access-gqg84\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:13.414484 master-0 kubenswrapper[31456]: I0312 21:25:13.414261 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9nc9\" (UniqueName: \"kubernetes.io/projected/3690da76-6dfc-4f32-bb7f-8fb37175b867-kube-api-access-p9nc9\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:14.042650 master-0 kubenswrapper[31456]: I0312 21:25:14.042592 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"7478f62f-dba4-43cb-9a5b-556b235bb13f","Type":"ContainerStarted","Data":"ececfffb159734c9b279c3062510b740f9ade90b8378441262077a9b0ee993b8"} Mar 12 21:25:14.042650 master-0 kubenswrapper[31456]: I0312 21:25:14.042653 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7478f62f-dba4-43cb-9a5b-556b235bb13f","Type":"ContainerStarted","Data":"86a27147a861d7a8dda901648d0f29f90e5ec4e5d3e508c7be10cea38fc389e3"} Mar 12 21:25:14.043353 master-0 kubenswrapper[31456]: I0312 21:25:14.042669 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7478f62f-dba4-43cb-9a5b-556b235bb13f","Type":"ContainerStarted","Data":"a9efebc60e1690b119dca84426f86f1bd320fdb47080fc9ddca66fa79d4a0468"} Mar 12 21:25:14.043353 master-0 kubenswrapper[31456]: I0312 21:25:14.042681 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7478f62f-dba4-43cb-9a5b-556b235bb13f","Type":"ContainerStarted","Data":"96389f57707daf5f99623acee1517bafd416cfb6b1f55fce629bd00b7925417a"} Mar 12 21:25:14.045626 master-0 kubenswrapper[31456]: I0312 21:25:14.045193 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6a5a-account-create-update-4w5hn" event={"ID":"e5327b01-7167-4072-967c-ea43996b1126","Type":"ContainerDied","Data":"289f6c22c44f3bfb607d4ae42003a648f00496fd7d0c57f88273cd998b7aabbf"} Mar 12 21:25:14.045626 master-0 kubenswrapper[31456]: I0312 21:25:14.045224 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="289f6c22c44f3bfb607d4ae42003a648f00496fd7d0c57f88273cd998b7aabbf" Mar 12 21:25:14.045626 master-0 kubenswrapper[31456]: I0312 21:25:14.045276 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6a5a-account-create-update-4w5hn" Mar 12 21:25:14.048265 master-0 kubenswrapper[31456]: I0312 21:25:14.048245 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-lp9x4" event={"ID":"d573798d-d096-47f4-96c7-8b7583a447d9","Type":"ContainerDied","Data":"e5510a83a726e52bd8623645172adff36d5929a367ab7b97be49ec434f2ce885"} Mar 12 21:25:14.048401 master-0 kubenswrapper[31456]: I0312 21:25:14.048385 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5510a83a726e52bd8623645172adff36d5929a367ab7b97be49ec434f2ce885" Mar 12 21:25:14.048513 master-0 kubenswrapper[31456]: I0312 21:25:14.048501 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-lp9x4" Mar 12 21:25:14.055879 master-0 kubenswrapper[31456]: I0312 21:25:14.054382 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-74dr9" event={"ID":"3690da76-6dfc-4f32-bb7f-8fb37175b867","Type":"ContainerDied","Data":"44919fa0849f562bbc65d8f2165b6b39f171185ef2afcafab3eecc68dd8c7946"} Mar 12 21:25:14.055879 master-0 kubenswrapper[31456]: I0312 21:25:14.054431 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44919fa0849f562bbc65d8f2165b6b39f171185ef2afcafab3eecc68dd8c7946" Mar 12 21:25:14.055879 master-0 kubenswrapper[31456]: I0312 21:25:14.054527 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-74dr9" Mar 12 21:25:14.538338 master-0 kubenswrapper[31456]: I0312 21:25:14.538271 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Mar 12 21:25:15.494935 master-0 kubenswrapper[31456]: I0312 21:25:15.494851 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-hmlwd"] Mar 12 21:25:15.496432 master-0 kubenswrapper[31456]: E0312 21:25:15.495277 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d573798d-d096-47f4-96c7-8b7583a447d9" containerName="mariadb-database-create" Mar 12 21:25:15.496432 master-0 kubenswrapper[31456]: I0312 21:25:15.495294 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="d573798d-d096-47f4-96c7-8b7583a447d9" containerName="mariadb-database-create" Mar 12 21:25:15.496432 master-0 kubenswrapper[31456]: E0312 21:25:15.495328 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="622a9f92-1155-4b36-899c-965b404e7137" containerName="mariadb-database-create" Mar 12 21:25:15.496432 master-0 kubenswrapper[31456]: I0312 21:25:15.495334 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="622a9f92-1155-4b36-899c-965b404e7137" containerName="mariadb-database-create" Mar 12 21:25:15.496432 master-0 kubenswrapper[31456]: E0312 21:25:15.495353 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="345e92ee-81d9-4de3-9515-f901d1a3d153" containerName="mariadb-account-create-update" Mar 12 21:25:15.496432 master-0 kubenswrapper[31456]: I0312 21:25:15.495359 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="345e92ee-81d9-4de3-9515-f901d1a3d153" containerName="mariadb-account-create-update" Mar 12 21:25:15.496432 master-0 kubenswrapper[31456]: E0312 21:25:15.495369 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3690da76-6dfc-4f32-bb7f-8fb37175b867" containerName="mariadb-database-create" Mar 12 
21:25:15.496432 master-0 kubenswrapper[31456]: I0312 21:25:15.495374 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="3690da76-6dfc-4f32-bb7f-8fb37175b867" containerName="mariadb-database-create" Mar 12 21:25:15.496432 master-0 kubenswrapper[31456]: E0312 21:25:15.495387 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5327b01-7167-4072-967c-ea43996b1126" containerName="mariadb-account-create-update" Mar 12 21:25:15.496432 master-0 kubenswrapper[31456]: I0312 21:25:15.495393 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5327b01-7167-4072-967c-ea43996b1126" containerName="mariadb-account-create-update" Mar 12 21:25:15.496432 master-0 kubenswrapper[31456]: E0312 21:25:15.495417 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f1d0bf8-4671-47dd-8f37-0c8b9136fdac" containerName="mariadb-account-create-update" Mar 12 21:25:15.496432 master-0 kubenswrapper[31456]: I0312 21:25:15.495425 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f1d0bf8-4671-47dd-8f37-0c8b9136fdac" containerName="mariadb-account-create-update" Mar 12 21:25:15.496432 master-0 kubenswrapper[31456]: I0312 21:25:15.495766 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="345e92ee-81d9-4de3-9515-f901d1a3d153" containerName="mariadb-account-create-update" Mar 12 21:25:15.496432 master-0 kubenswrapper[31456]: I0312 21:25:15.495836 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="3690da76-6dfc-4f32-bb7f-8fb37175b867" containerName="mariadb-database-create" Mar 12 21:25:15.496432 master-0 kubenswrapper[31456]: I0312 21:25:15.495862 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f1d0bf8-4671-47dd-8f37-0c8b9136fdac" containerName="mariadb-account-create-update" Mar 12 21:25:15.496432 master-0 kubenswrapper[31456]: I0312 21:25:15.495881 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5327b01-7167-4072-967c-ea43996b1126" 
containerName="mariadb-account-create-update" Mar 12 21:25:15.496432 master-0 kubenswrapper[31456]: I0312 21:25:15.495903 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="622a9f92-1155-4b36-899c-965b404e7137" containerName="mariadb-database-create" Mar 12 21:25:15.496432 master-0 kubenswrapper[31456]: I0312 21:25:15.495921 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="d573798d-d096-47f4-96c7-8b7583a447d9" containerName="mariadb-database-create" Mar 12 21:25:15.497849 master-0 kubenswrapper[31456]: I0312 21:25:15.496689 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hmlwd" Mar 12 21:25:15.498852 master-0 kubenswrapper[31456]: I0312 21:25:15.498357 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 12 21:25:15.509686 master-0 kubenswrapper[31456]: I0312 21:25:15.509613 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hmlwd"] Mar 12 21:25:15.679179 master-0 kubenswrapper[31456]: I0312 21:25:15.679129 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhcc9\" (UniqueName: \"kubernetes.io/projected/90f78702-fbdb-480e-b0bc-88f60ea0e980-kube-api-access-lhcc9\") pod \"root-account-create-update-hmlwd\" (UID: \"90f78702-fbdb-480e-b0bc-88f60ea0e980\") " pod="openstack/root-account-create-update-hmlwd" Mar 12 21:25:15.679566 master-0 kubenswrapper[31456]: I0312 21:25:15.679520 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90f78702-fbdb-480e-b0bc-88f60ea0e980-operator-scripts\") pod \"root-account-create-update-hmlwd\" (UID: \"90f78702-fbdb-480e-b0bc-88f60ea0e980\") " pod="openstack/root-account-create-update-hmlwd" Mar 12 21:25:15.782317 master-0 
kubenswrapper[31456]: I0312 21:25:15.782251 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhcc9\" (UniqueName: \"kubernetes.io/projected/90f78702-fbdb-480e-b0bc-88f60ea0e980-kube-api-access-lhcc9\") pod \"root-account-create-update-hmlwd\" (UID: \"90f78702-fbdb-480e-b0bc-88f60ea0e980\") " pod="openstack/root-account-create-update-hmlwd" Mar 12 21:25:15.782531 master-0 kubenswrapper[31456]: I0312 21:25:15.782457 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90f78702-fbdb-480e-b0bc-88f60ea0e980-operator-scripts\") pod \"root-account-create-update-hmlwd\" (UID: \"90f78702-fbdb-480e-b0bc-88f60ea0e980\") " pod="openstack/root-account-create-update-hmlwd" Mar 12 21:25:15.783945 master-0 kubenswrapper[31456]: I0312 21:25:15.783907 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90f78702-fbdb-480e-b0bc-88f60ea0e980-operator-scripts\") pod \"root-account-create-update-hmlwd\" (UID: \"90f78702-fbdb-480e-b0bc-88f60ea0e980\") " pod="openstack/root-account-create-update-hmlwd" Mar 12 21:25:15.808191 master-0 kubenswrapper[31456]: I0312 21:25:15.808144 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhcc9\" (UniqueName: \"kubernetes.io/projected/90f78702-fbdb-480e-b0bc-88f60ea0e980-kube-api-access-lhcc9\") pod \"root-account-create-update-hmlwd\" (UID: \"90f78702-fbdb-480e-b0bc-88f60ea0e980\") " pod="openstack/root-account-create-update-hmlwd" Mar 12 21:25:15.809721 master-0 kubenswrapper[31456]: I0312 21:25:15.809684 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hmlwd" Mar 12 21:25:16.077349 master-0 kubenswrapper[31456]: I0312 21:25:16.077060 31456 generic.go:334] "Generic (PLEG): container finished" podID="8e067175-5771-473f-85a8-af63a27ee30a" containerID="9d805d9cfa171ac267ac91c92953f65d67a09b02c36ab5bd6e12b268be8b9570" exitCode=0 Mar 12 21:25:16.077349 master-0 kubenswrapper[31456]: I0312 21:25:16.077160 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8e067175-5771-473f-85a8-af63a27ee30a","Type":"ContainerDied","Data":"9d805d9cfa171ac267ac91c92953f65d67a09b02c36ab5bd6e12b268be8b9570"} Mar 12 21:25:16.088960 master-0 kubenswrapper[31456]: I0312 21:25:16.086562 31456 generic.go:334] "Generic (PLEG): container finished" podID="1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc" containerID="138616615e61013d25931cad9e2a90c68377bb0c69c117792e8205ee9678e246" exitCode=0 Mar 12 21:25:16.088960 master-0 kubenswrapper[31456]: I0312 21:25:16.086639 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc","Type":"ContainerDied","Data":"138616615e61013d25931cad9e2a90c68377bb0c69c117792e8205ee9678e246"} Mar 12 21:25:16.102924 master-0 kubenswrapper[31456]: I0312 21:25:16.102761 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7478f62f-dba4-43cb-9a5b-556b235bb13f","Type":"ContainerStarted","Data":"2f79431d72e4420072536645c699661f96791ac41af7cd667e0a65626622ce58"} Mar 12 21:25:16.102924 master-0 kubenswrapper[31456]: I0312 21:25:16.102860 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7478f62f-dba4-43cb-9a5b-556b235bb13f","Type":"ContainerStarted","Data":"337b92b7dd4e9ee098279e80901ac9187770f0ad94fb78dab06d36bd28cea0fb"} Mar 12 21:25:16.102924 master-0 kubenswrapper[31456]: I0312 21:25:16.102873 31456 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/swift-storage-0" event={"ID":"7478f62f-dba4-43cb-9a5b-556b235bb13f","Type":"ContainerStarted","Data":"83423660dc66d7eb990fdf2e3c46dd73c5a6660f85c9d550a519bedcd2275617"} Mar 12 21:25:16.401667 master-0 kubenswrapper[31456]: I0312 21:25:16.401615 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hmlwd"] Mar 12 21:25:17.122126 master-0 kubenswrapper[31456]: I0312 21:25:17.122048 31456 generic.go:334] "Generic (PLEG): container finished" podID="90f78702-fbdb-480e-b0bc-88f60ea0e980" containerID="d09033cace82f619daa829511df84f7c468ae7702a5f6ce5677bb8ec138049a9" exitCode=0 Mar 12 21:25:17.122883 master-0 kubenswrapper[31456]: I0312 21:25:17.122156 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hmlwd" event={"ID":"90f78702-fbdb-480e-b0bc-88f60ea0e980","Type":"ContainerDied","Data":"d09033cace82f619daa829511df84f7c468ae7702a5f6ce5677bb8ec138049a9"} Mar 12 21:25:17.122883 master-0 kubenswrapper[31456]: I0312 21:25:17.122201 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hmlwd" event={"ID":"90f78702-fbdb-480e-b0bc-88f60ea0e980","Type":"ContainerStarted","Data":"a8a5abadc13396fb6a495230b553d19688138a859bf94e548264784fd0d4f55c"} Mar 12 21:25:17.126485 master-0 kubenswrapper[31456]: I0312 21:25:17.126426 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1bd151b8-f0b5-4fbe-8ddb-7fd540c29cbc","Type":"ContainerStarted","Data":"cc8c27cc81584381044e364ea37de9e40ecbe0d5422ee051829570360318af56"} Mar 12 21:25:17.126881 master-0 kubenswrapper[31456]: I0312 21:25:17.126836 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:25:17.138517 master-0 kubenswrapper[31456]: I0312 21:25:17.138458 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"7478f62f-dba4-43cb-9a5b-556b235bb13f","Type":"ContainerStarted","Data":"ce74f7a16aece45ae129acab83d426b45a0a1744928620e758d7535037941307"} Mar 12 21:25:17.138517 master-0 kubenswrapper[31456]: I0312 21:25:17.138501 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7478f62f-dba4-43cb-9a5b-556b235bb13f","Type":"ContainerStarted","Data":"3d7f7904b722b757e7881258fdac8f4524b9e14f1638238ff01244e2c26a8d49"} Mar 12 21:25:17.138517 master-0 kubenswrapper[31456]: I0312 21:25:17.138514 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7478f62f-dba4-43cb-9a5b-556b235bb13f","Type":"ContainerStarted","Data":"48fa7f6ec6353cd31f161c0344e2726ab5c0095dc7c6b3c37def65efe6098574"} Mar 12 21:25:17.138517 master-0 kubenswrapper[31456]: I0312 21:25:17.138527 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7478f62f-dba4-43cb-9a5b-556b235bb13f","Type":"ContainerStarted","Data":"25a260e25cc8d78bd43f26110cecf66c0066544a7c68edda77c4d821cd4bd624"} Mar 12 21:25:17.150046 master-0 kubenswrapper[31456]: I0312 21:25:17.149982 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8e067175-5771-473f-85a8-af63a27ee30a","Type":"ContainerStarted","Data":"3e6481c19762899330116d0c8b32c072344200be960cb8734dd2e89fe70b21ce"} Mar 12 21:25:17.150354 master-0 kubenswrapper[31456]: I0312 21:25:17.150303 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 12 21:25:17.199218 master-0 kubenswrapper[31456]: I0312 21:25:17.199096 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=21.722401418 podStartE2EDuration="27.199067259s" podCreationTimestamp="2026-03-12 21:24:50 +0000 UTC" firstStartedPulling="2026-03-12 21:25:09.617320228 +0000 UTC m=+970.691925556" 
lastFinishedPulling="2026-03-12 21:25:15.093986069 +0000 UTC m=+976.168591397" observedRunningTime="2026-03-12 21:25:17.188827701 +0000 UTC m=+978.263433049" watchObservedRunningTime="2026-03-12 21:25:17.199067259 +0000 UTC m=+978.273672587" Mar 12 21:25:17.255344 master-0 kubenswrapper[31456]: I0312 21:25:17.255250 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=55.721046122 podStartE2EDuration="1m5.255231468s" podCreationTimestamp="2026-03-12 21:24:12 +0000 UTC" firstStartedPulling="2026-03-12 21:24:31.495892576 +0000 UTC m=+932.570497904" lastFinishedPulling="2026-03-12 21:24:41.030077902 +0000 UTC m=+942.104683250" observedRunningTime="2026-03-12 21:25:17.21977786 +0000 UTC m=+978.294383188" watchObservedRunningTime="2026-03-12 21:25:17.255231468 +0000 UTC m=+978.329836796" Mar 12 21:25:17.257543 master-0 kubenswrapper[31456]: I0312 21:25:17.257447 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=56.804355628 podStartE2EDuration="1m6.257439311s" podCreationTimestamp="2026-03-12 21:24:11 +0000 UTC" firstStartedPulling="2026-03-12 21:24:31.484741466 +0000 UTC m=+932.559346794" lastFinishedPulling="2026-03-12 21:24:40.937825149 +0000 UTC m=+942.012430477" observedRunningTime="2026-03-12 21:25:17.254247964 +0000 UTC m=+978.328853312" watchObservedRunningTime="2026-03-12 21:25:17.257439311 +0000 UTC m=+978.332044649" Mar 12 21:25:17.538940 master-0 kubenswrapper[31456]: I0312 21:25:17.538776 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d5484f4d7-grz9n"] Mar 12 21:25:17.542365 master-0 kubenswrapper[31456]: I0312 21:25:17.540920 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d5484f4d7-grz9n" Mar 12 21:25:17.559885 master-0 kubenswrapper[31456]: I0312 21:25:17.548022 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Mar 12 21:25:17.586329 master-0 kubenswrapper[31456]: I0312 21:25:17.586081 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d5484f4d7-grz9n"] Mar 12 21:25:17.637400 master-0 kubenswrapper[31456]: I0312 21:25:17.637335 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b554cc7-1556-47ef-8167-8661aa141e10-dns-svc\") pod \"dnsmasq-dns-7d5484f4d7-grz9n\" (UID: \"2b554cc7-1556-47ef-8167-8661aa141e10\") " pod="openstack/dnsmasq-dns-7d5484f4d7-grz9n" Mar 12 21:25:17.637623 master-0 kubenswrapper[31456]: I0312 21:25:17.637407 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b554cc7-1556-47ef-8167-8661aa141e10-ovsdbserver-nb\") pod \"dnsmasq-dns-7d5484f4d7-grz9n\" (UID: \"2b554cc7-1556-47ef-8167-8661aa141e10\") " pod="openstack/dnsmasq-dns-7d5484f4d7-grz9n" Mar 12 21:25:17.637623 master-0 kubenswrapper[31456]: I0312 21:25:17.637525 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b554cc7-1556-47ef-8167-8661aa141e10-dns-swift-storage-0\") pod \"dnsmasq-dns-7d5484f4d7-grz9n\" (UID: \"2b554cc7-1556-47ef-8167-8661aa141e10\") " pod="openstack/dnsmasq-dns-7d5484f4d7-grz9n" Mar 12 21:25:17.637623 master-0 kubenswrapper[31456]: I0312 21:25:17.637591 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b554cc7-1556-47ef-8167-8661aa141e10-ovsdbserver-sb\") pod 
\"dnsmasq-dns-7d5484f4d7-grz9n\" (UID: \"2b554cc7-1556-47ef-8167-8661aa141e10\") " pod="openstack/dnsmasq-dns-7d5484f4d7-grz9n" Mar 12 21:25:17.637802 master-0 kubenswrapper[31456]: I0312 21:25:17.637623 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b554cc7-1556-47ef-8167-8661aa141e10-config\") pod \"dnsmasq-dns-7d5484f4d7-grz9n\" (UID: \"2b554cc7-1556-47ef-8167-8661aa141e10\") " pod="openstack/dnsmasq-dns-7d5484f4d7-grz9n" Mar 12 21:25:17.637802 master-0 kubenswrapper[31456]: I0312 21:25:17.637672 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjvfm\" (UniqueName: \"kubernetes.io/projected/2b554cc7-1556-47ef-8167-8661aa141e10-kube-api-access-sjvfm\") pod \"dnsmasq-dns-7d5484f4d7-grz9n\" (UID: \"2b554cc7-1556-47ef-8167-8661aa141e10\") " pod="openstack/dnsmasq-dns-7d5484f4d7-grz9n" Mar 12 21:25:17.739750 master-0 kubenswrapper[31456]: I0312 21:25:17.739674 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b554cc7-1556-47ef-8167-8661aa141e10-dns-swift-storage-0\") pod \"dnsmasq-dns-7d5484f4d7-grz9n\" (UID: \"2b554cc7-1556-47ef-8167-8661aa141e10\") " pod="openstack/dnsmasq-dns-7d5484f4d7-grz9n" Mar 12 21:25:17.740000 master-0 kubenswrapper[31456]: I0312 21:25:17.739795 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b554cc7-1556-47ef-8167-8661aa141e10-ovsdbserver-sb\") pod \"dnsmasq-dns-7d5484f4d7-grz9n\" (UID: \"2b554cc7-1556-47ef-8167-8661aa141e10\") " pod="openstack/dnsmasq-dns-7d5484f4d7-grz9n" Mar 12 21:25:17.740000 master-0 kubenswrapper[31456]: I0312 21:25:17.739987 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2b554cc7-1556-47ef-8167-8661aa141e10-config\") pod \"dnsmasq-dns-7d5484f4d7-grz9n\" (UID: \"2b554cc7-1556-47ef-8167-8661aa141e10\") " pod="openstack/dnsmasq-dns-7d5484f4d7-grz9n" Mar 12 21:25:17.740073 master-0 kubenswrapper[31456]: I0312 21:25:17.740046 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjvfm\" (UniqueName: \"kubernetes.io/projected/2b554cc7-1556-47ef-8167-8661aa141e10-kube-api-access-sjvfm\") pod \"dnsmasq-dns-7d5484f4d7-grz9n\" (UID: \"2b554cc7-1556-47ef-8167-8661aa141e10\") " pod="openstack/dnsmasq-dns-7d5484f4d7-grz9n" Mar 12 21:25:17.740165 master-0 kubenswrapper[31456]: I0312 21:25:17.740134 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b554cc7-1556-47ef-8167-8661aa141e10-dns-svc\") pod \"dnsmasq-dns-7d5484f4d7-grz9n\" (UID: \"2b554cc7-1556-47ef-8167-8661aa141e10\") " pod="openstack/dnsmasq-dns-7d5484f4d7-grz9n" Mar 12 21:25:17.740607 master-0 kubenswrapper[31456]: I0312 21:25:17.740556 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b554cc7-1556-47ef-8167-8661aa141e10-ovsdbserver-nb\") pod \"dnsmasq-dns-7d5484f4d7-grz9n\" (UID: \"2b554cc7-1556-47ef-8167-8661aa141e10\") " pod="openstack/dnsmasq-dns-7d5484f4d7-grz9n" Mar 12 21:25:17.741005 master-0 kubenswrapper[31456]: I0312 21:25:17.740971 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b554cc7-1556-47ef-8167-8661aa141e10-config\") pod \"dnsmasq-dns-7d5484f4d7-grz9n\" (UID: \"2b554cc7-1556-47ef-8167-8661aa141e10\") " pod="openstack/dnsmasq-dns-7d5484f4d7-grz9n" Mar 12 21:25:17.741085 master-0 kubenswrapper[31456]: I0312 21:25:17.740998 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/2b554cc7-1556-47ef-8167-8661aa141e10-dns-swift-storage-0\") pod \"dnsmasq-dns-7d5484f4d7-grz9n\" (UID: \"2b554cc7-1556-47ef-8167-8661aa141e10\") " pod="openstack/dnsmasq-dns-7d5484f4d7-grz9n" Mar 12 21:25:17.741085 master-0 kubenswrapper[31456]: I0312 21:25:17.741018 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b554cc7-1556-47ef-8167-8661aa141e10-ovsdbserver-sb\") pod \"dnsmasq-dns-7d5484f4d7-grz9n\" (UID: \"2b554cc7-1556-47ef-8167-8661aa141e10\") " pod="openstack/dnsmasq-dns-7d5484f4d7-grz9n" Mar 12 21:25:17.741629 master-0 kubenswrapper[31456]: I0312 21:25:17.741452 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b554cc7-1556-47ef-8167-8661aa141e10-ovsdbserver-nb\") pod \"dnsmasq-dns-7d5484f4d7-grz9n\" (UID: \"2b554cc7-1556-47ef-8167-8661aa141e10\") " pod="openstack/dnsmasq-dns-7d5484f4d7-grz9n" Mar 12 21:25:17.741629 master-0 kubenswrapper[31456]: I0312 21:25:17.741548 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b554cc7-1556-47ef-8167-8661aa141e10-dns-svc\") pod \"dnsmasq-dns-7d5484f4d7-grz9n\" (UID: \"2b554cc7-1556-47ef-8167-8661aa141e10\") " pod="openstack/dnsmasq-dns-7d5484f4d7-grz9n" Mar 12 21:25:17.755345 master-0 kubenswrapper[31456]: I0312 21:25:17.755302 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjvfm\" (UniqueName: \"kubernetes.io/projected/2b554cc7-1556-47ef-8167-8661aa141e10-kube-api-access-sjvfm\") pod \"dnsmasq-dns-7d5484f4d7-grz9n\" (UID: \"2b554cc7-1556-47ef-8167-8661aa141e10\") " pod="openstack/dnsmasq-dns-7d5484f4d7-grz9n" Mar 12 21:25:17.865917 master-0 kubenswrapper[31456]: I0312 21:25:17.865837 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d5484f4d7-grz9n" Mar 12 21:25:18.082738 master-0 kubenswrapper[31456]: I0312 21:25:18.082657 31456 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-b7rpf" podUID="2fb848ef-b2bf-429a-a01f-53240dc3bd0a" containerName="ovn-controller" probeResult="failure" output=< Mar 12 21:25:18.082738 master-0 kubenswrapper[31456]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 12 21:25:18.082738 master-0 kubenswrapper[31456]: > Mar 12 21:25:18.088097 master-0 kubenswrapper[31456]: I0312 21:25:18.084223 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rdl65" Mar 12 21:25:18.088097 master-0 kubenswrapper[31456]: I0312 21:25:18.084484 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rdl65" Mar 12 21:25:18.356192 master-0 kubenswrapper[31456]: I0312 21:25:18.354442 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-b7rpf-config-lpd5f"] Mar 12 21:25:18.370860 master-0 kubenswrapper[31456]: I0312 21:25:18.357510 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-b7rpf-config-lpd5f" Mar 12 21:25:18.370860 master-0 kubenswrapper[31456]: I0312 21:25:18.362395 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 12 21:25:18.429308 master-0 kubenswrapper[31456]: I0312 21:25:18.429139 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d5484f4d7-grz9n"] Mar 12 21:25:18.455199 master-0 kubenswrapper[31456]: I0312 21:25:18.454854 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-b7rpf-config-lpd5f"] Mar 12 21:25:18.467174 master-0 kubenswrapper[31456]: I0312 21:25:18.466829 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/51bed24b-3ab1-470b-9d9d-fabfdf633f81-var-run-ovn\") pod \"ovn-controller-b7rpf-config-lpd5f\" (UID: \"51bed24b-3ab1-470b-9d9d-fabfdf633f81\") " pod="openstack/ovn-controller-b7rpf-config-lpd5f" Mar 12 21:25:18.467174 master-0 kubenswrapper[31456]: I0312 21:25:18.466964 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5k48\" (UniqueName: \"kubernetes.io/projected/51bed24b-3ab1-470b-9d9d-fabfdf633f81-kube-api-access-d5k48\") pod \"ovn-controller-b7rpf-config-lpd5f\" (UID: \"51bed24b-3ab1-470b-9d9d-fabfdf633f81\") " pod="openstack/ovn-controller-b7rpf-config-lpd5f" Mar 12 21:25:18.468164 master-0 kubenswrapper[31456]: I0312 21:25:18.467218 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/51bed24b-3ab1-470b-9d9d-fabfdf633f81-additional-scripts\") pod \"ovn-controller-b7rpf-config-lpd5f\" (UID: \"51bed24b-3ab1-470b-9d9d-fabfdf633f81\") " pod="openstack/ovn-controller-b7rpf-config-lpd5f" Mar 12 21:25:18.468325 master-0 kubenswrapper[31456]: I0312 
21:25:18.468301 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51bed24b-3ab1-470b-9d9d-fabfdf633f81-scripts\") pod \"ovn-controller-b7rpf-config-lpd5f\" (UID: \"51bed24b-3ab1-470b-9d9d-fabfdf633f81\") " pod="openstack/ovn-controller-b7rpf-config-lpd5f" Mar 12 21:25:18.468453 master-0 kubenswrapper[31456]: I0312 21:25:18.468373 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/51bed24b-3ab1-470b-9d9d-fabfdf633f81-var-log-ovn\") pod \"ovn-controller-b7rpf-config-lpd5f\" (UID: \"51bed24b-3ab1-470b-9d9d-fabfdf633f81\") " pod="openstack/ovn-controller-b7rpf-config-lpd5f" Mar 12 21:25:18.468573 master-0 kubenswrapper[31456]: I0312 21:25:18.468548 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/51bed24b-3ab1-470b-9d9d-fabfdf633f81-var-run\") pod \"ovn-controller-b7rpf-config-lpd5f\" (UID: \"51bed24b-3ab1-470b-9d9d-fabfdf633f81\") " pod="openstack/ovn-controller-b7rpf-config-lpd5f" Mar 12 21:25:18.574832 master-0 kubenswrapper[31456]: I0312 21:25:18.570427 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/51bed24b-3ab1-470b-9d9d-fabfdf633f81-additional-scripts\") pod \"ovn-controller-b7rpf-config-lpd5f\" (UID: \"51bed24b-3ab1-470b-9d9d-fabfdf633f81\") " pod="openstack/ovn-controller-b7rpf-config-lpd5f" Mar 12 21:25:18.574832 master-0 kubenswrapper[31456]: I0312 21:25:18.570532 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51bed24b-3ab1-470b-9d9d-fabfdf633f81-scripts\") pod \"ovn-controller-b7rpf-config-lpd5f\" (UID: \"51bed24b-3ab1-470b-9d9d-fabfdf633f81\") " 
pod="openstack/ovn-controller-b7rpf-config-lpd5f" Mar 12 21:25:18.574832 master-0 kubenswrapper[31456]: I0312 21:25:18.570558 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/51bed24b-3ab1-470b-9d9d-fabfdf633f81-var-log-ovn\") pod \"ovn-controller-b7rpf-config-lpd5f\" (UID: \"51bed24b-3ab1-470b-9d9d-fabfdf633f81\") " pod="openstack/ovn-controller-b7rpf-config-lpd5f" Mar 12 21:25:18.574832 master-0 kubenswrapper[31456]: I0312 21:25:18.570617 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/51bed24b-3ab1-470b-9d9d-fabfdf633f81-var-run\") pod \"ovn-controller-b7rpf-config-lpd5f\" (UID: \"51bed24b-3ab1-470b-9d9d-fabfdf633f81\") " pod="openstack/ovn-controller-b7rpf-config-lpd5f" Mar 12 21:25:18.574832 master-0 kubenswrapper[31456]: I0312 21:25:18.570699 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/51bed24b-3ab1-470b-9d9d-fabfdf633f81-var-run-ovn\") pod \"ovn-controller-b7rpf-config-lpd5f\" (UID: \"51bed24b-3ab1-470b-9d9d-fabfdf633f81\") " pod="openstack/ovn-controller-b7rpf-config-lpd5f" Mar 12 21:25:18.574832 master-0 kubenswrapper[31456]: I0312 21:25:18.570721 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5k48\" (UniqueName: \"kubernetes.io/projected/51bed24b-3ab1-470b-9d9d-fabfdf633f81-kube-api-access-d5k48\") pod \"ovn-controller-b7rpf-config-lpd5f\" (UID: \"51bed24b-3ab1-470b-9d9d-fabfdf633f81\") " pod="openstack/ovn-controller-b7rpf-config-lpd5f" Mar 12 21:25:18.574832 master-0 kubenswrapper[31456]: I0312 21:25:18.571887 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/51bed24b-3ab1-470b-9d9d-fabfdf633f81-additional-scripts\") pod 
\"ovn-controller-b7rpf-config-lpd5f\" (UID: \"51bed24b-3ab1-470b-9d9d-fabfdf633f81\") " pod="openstack/ovn-controller-b7rpf-config-lpd5f" Mar 12 21:25:18.575234 master-0 kubenswrapper[31456]: I0312 21:25:18.574999 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/51bed24b-3ab1-470b-9d9d-fabfdf633f81-var-log-ovn\") pod \"ovn-controller-b7rpf-config-lpd5f\" (UID: \"51bed24b-3ab1-470b-9d9d-fabfdf633f81\") " pod="openstack/ovn-controller-b7rpf-config-lpd5f" Mar 12 21:25:18.575234 master-0 kubenswrapper[31456]: I0312 21:25:18.575055 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/51bed24b-3ab1-470b-9d9d-fabfdf633f81-var-run\") pod \"ovn-controller-b7rpf-config-lpd5f\" (UID: \"51bed24b-3ab1-470b-9d9d-fabfdf633f81\") " pod="openstack/ovn-controller-b7rpf-config-lpd5f" Mar 12 21:25:18.575234 master-0 kubenswrapper[31456]: I0312 21:25:18.575095 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/51bed24b-3ab1-470b-9d9d-fabfdf633f81-var-run-ovn\") pod \"ovn-controller-b7rpf-config-lpd5f\" (UID: \"51bed24b-3ab1-470b-9d9d-fabfdf633f81\") " pod="openstack/ovn-controller-b7rpf-config-lpd5f" Mar 12 21:25:18.578819 master-0 kubenswrapper[31456]: I0312 21:25:18.576193 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51bed24b-3ab1-470b-9d9d-fabfdf633f81-scripts\") pod \"ovn-controller-b7rpf-config-lpd5f\" (UID: \"51bed24b-3ab1-470b-9d9d-fabfdf633f81\") " pod="openstack/ovn-controller-b7rpf-config-lpd5f" Mar 12 21:25:18.587316 master-0 kubenswrapper[31456]: I0312 21:25:18.587269 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5k48\" (UniqueName: \"kubernetes.io/projected/51bed24b-3ab1-470b-9d9d-fabfdf633f81-kube-api-access-d5k48\") pod 
\"ovn-controller-b7rpf-config-lpd5f\" (UID: \"51bed24b-3ab1-470b-9d9d-fabfdf633f81\") " pod="openstack/ovn-controller-b7rpf-config-lpd5f" Mar 12 21:25:18.630450 master-0 kubenswrapper[31456]: I0312 21:25:18.630326 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hmlwd" Mar 12 21:25:18.702830 master-0 kubenswrapper[31456]: I0312 21:25:18.699176 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-qsh5p"] Mar 12 21:25:18.702830 master-0 kubenswrapper[31456]: E0312 21:25:18.699742 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90f78702-fbdb-480e-b0bc-88f60ea0e980" containerName="mariadb-account-create-update" Mar 12 21:25:18.702830 master-0 kubenswrapper[31456]: I0312 21:25:18.699758 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="90f78702-fbdb-480e-b0bc-88f60ea0e980" containerName="mariadb-account-create-update" Mar 12 21:25:18.702830 master-0 kubenswrapper[31456]: I0312 21:25:18.700021 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="90f78702-fbdb-480e-b0bc-88f60ea0e980" containerName="mariadb-account-create-update" Mar 12 21:25:18.702830 master-0 kubenswrapper[31456]: I0312 21:25:18.700708 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-qsh5p" Mar 12 21:25:18.712051 master-0 kubenswrapper[31456]: I0312 21:25:18.711696 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-30e4b-config-data" Mar 12 21:25:18.713821 master-0 kubenswrapper[31456]: I0312 21:25:18.713758 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-qsh5p"] Mar 12 21:25:18.774182 master-0 kubenswrapper[31456]: I0312 21:25:18.773652 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-b7rpf-config-lpd5f" Mar 12 21:25:18.780410 master-0 kubenswrapper[31456]: I0312 21:25:18.780344 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhcc9\" (UniqueName: \"kubernetes.io/projected/90f78702-fbdb-480e-b0bc-88f60ea0e980-kube-api-access-lhcc9\") pod \"90f78702-fbdb-480e-b0bc-88f60ea0e980\" (UID: \"90f78702-fbdb-480e-b0bc-88f60ea0e980\") " Mar 12 21:25:18.780660 master-0 kubenswrapper[31456]: I0312 21:25:18.780567 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90f78702-fbdb-480e-b0bc-88f60ea0e980-operator-scripts\") pod \"90f78702-fbdb-480e-b0bc-88f60ea0e980\" (UID: \"90f78702-fbdb-480e-b0bc-88f60ea0e980\") " Mar 12 21:25:18.782306 master-0 kubenswrapper[31456]: I0312 21:25:18.782263 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90f78702-fbdb-480e-b0bc-88f60ea0e980-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "90f78702-fbdb-480e-b0bc-88f60ea0e980" (UID: "90f78702-fbdb-480e-b0bc-88f60ea0e980"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:25:18.784010 master-0 kubenswrapper[31456]: I0312 21:25:18.783974 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90f78702-fbdb-480e-b0bc-88f60ea0e980-kube-api-access-lhcc9" (OuterVolumeSpecName: "kube-api-access-lhcc9") pod "90f78702-fbdb-480e-b0bc-88f60ea0e980" (UID: "90f78702-fbdb-480e-b0bc-88f60ea0e980"). InnerVolumeSpecName "kube-api-access-lhcc9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:25:18.918245 master-0 kubenswrapper[31456]: I0312 21:25:18.914066 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqqm7\" (UniqueName: \"kubernetes.io/projected/6b67fa12-637c-4880-b717-d46e768d3112-kube-api-access-xqqm7\") pod \"glance-db-sync-qsh5p\" (UID: \"6b67fa12-637c-4880-b717-d46e768d3112\") " pod="openstack/glance-db-sync-qsh5p" Mar 12 21:25:18.918245 master-0 kubenswrapper[31456]: I0312 21:25:18.914123 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b67fa12-637c-4880-b717-d46e768d3112-config-data\") pod \"glance-db-sync-qsh5p\" (UID: \"6b67fa12-637c-4880-b717-d46e768d3112\") " pod="openstack/glance-db-sync-qsh5p" Mar 12 21:25:18.918245 master-0 kubenswrapper[31456]: I0312 21:25:18.914144 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b67fa12-637c-4880-b717-d46e768d3112-combined-ca-bundle\") pod \"glance-db-sync-qsh5p\" (UID: \"6b67fa12-637c-4880-b717-d46e768d3112\") " pod="openstack/glance-db-sync-qsh5p" Mar 12 21:25:18.918245 master-0 kubenswrapper[31456]: I0312 21:25:18.914160 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6b67fa12-637c-4880-b717-d46e768d3112-db-sync-config-data\") pod \"glance-db-sync-qsh5p\" (UID: \"6b67fa12-637c-4880-b717-d46e768d3112\") " pod="openstack/glance-db-sync-qsh5p" Mar 12 21:25:18.918245 master-0 kubenswrapper[31456]: I0312 21:25:18.914319 31456 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90f78702-fbdb-480e-b0bc-88f60ea0e980-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 
21:25:18.918245 master-0 kubenswrapper[31456]: I0312 21:25:18.914332 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhcc9\" (UniqueName: \"kubernetes.io/projected/90f78702-fbdb-480e-b0bc-88f60ea0e980-kube-api-access-lhcc9\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:19.019389 master-0 kubenswrapper[31456]: I0312 21:25:19.019053 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqqm7\" (UniqueName: \"kubernetes.io/projected/6b67fa12-637c-4880-b717-d46e768d3112-kube-api-access-xqqm7\") pod \"glance-db-sync-qsh5p\" (UID: \"6b67fa12-637c-4880-b717-d46e768d3112\") " pod="openstack/glance-db-sync-qsh5p" Mar 12 21:25:19.019746 master-0 kubenswrapper[31456]: I0312 21:25:19.019686 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b67fa12-637c-4880-b717-d46e768d3112-config-data\") pod \"glance-db-sync-qsh5p\" (UID: \"6b67fa12-637c-4880-b717-d46e768d3112\") " pod="openstack/glance-db-sync-qsh5p" Mar 12 21:25:19.019881 master-0 kubenswrapper[31456]: I0312 21:25:19.019794 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b67fa12-637c-4880-b717-d46e768d3112-combined-ca-bundle\") pod \"glance-db-sync-qsh5p\" (UID: \"6b67fa12-637c-4880-b717-d46e768d3112\") " pod="openstack/glance-db-sync-qsh5p" Mar 12 21:25:19.019934 master-0 kubenswrapper[31456]: I0312 21:25:19.019905 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6b67fa12-637c-4880-b717-d46e768d3112-db-sync-config-data\") pod \"glance-db-sync-qsh5p\" (UID: \"6b67fa12-637c-4880-b717-d46e768d3112\") " pod="openstack/glance-db-sync-qsh5p" Mar 12 21:25:19.025136 master-0 kubenswrapper[31456]: I0312 21:25:19.024631 31456 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b67fa12-637c-4880-b717-d46e768d3112-config-data\") pod \"glance-db-sync-qsh5p\" (UID: \"6b67fa12-637c-4880-b717-d46e768d3112\") " pod="openstack/glance-db-sync-qsh5p" Mar 12 21:25:19.035643 master-0 kubenswrapper[31456]: I0312 21:25:19.025961 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b67fa12-637c-4880-b717-d46e768d3112-combined-ca-bundle\") pod \"glance-db-sync-qsh5p\" (UID: \"6b67fa12-637c-4880-b717-d46e768d3112\") " pod="openstack/glance-db-sync-qsh5p" Mar 12 21:25:19.035643 master-0 kubenswrapper[31456]: I0312 21:25:19.026791 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6b67fa12-637c-4880-b717-d46e768d3112-db-sync-config-data\") pod \"glance-db-sync-qsh5p\" (UID: \"6b67fa12-637c-4880-b717-d46e768d3112\") " pod="openstack/glance-db-sync-qsh5p" Mar 12 21:25:19.036213 master-0 kubenswrapper[31456]: I0312 21:25:19.036181 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqqm7\" (UniqueName: \"kubernetes.io/projected/6b67fa12-637c-4880-b717-d46e768d3112-kube-api-access-xqqm7\") pod \"glance-db-sync-qsh5p\" (UID: \"6b67fa12-637c-4880-b717-d46e768d3112\") " pod="openstack/glance-db-sync-qsh5p" Mar 12 21:25:19.057564 master-0 kubenswrapper[31456]: I0312 21:25:19.057491 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-qsh5p" Mar 12 21:25:19.199687 master-0 kubenswrapper[31456]: I0312 21:25:19.199626 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hmlwd" event={"ID":"90f78702-fbdb-480e-b0bc-88f60ea0e980","Type":"ContainerDied","Data":"a8a5abadc13396fb6a495230b553d19688138a859bf94e548264784fd0d4f55c"} Mar 12 21:25:19.199687 master-0 kubenswrapper[31456]: I0312 21:25:19.199679 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8a5abadc13396fb6a495230b553d19688138a859bf94e548264784fd0d4f55c" Mar 12 21:25:19.199888 master-0 kubenswrapper[31456]: I0312 21:25:19.199734 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hmlwd" Mar 12 21:25:19.206299 master-0 kubenswrapper[31456]: I0312 21:25:19.204009 31456 generic.go:334] "Generic (PLEG): container finished" podID="2b554cc7-1556-47ef-8167-8661aa141e10" containerID="9cf72e7f4313cd47b75fdd5b942312a9f39047f8fa7813585b8f6d6b616e2598" exitCode=0 Mar 12 21:25:19.206450 master-0 kubenswrapper[31456]: I0312 21:25:19.206309 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5484f4d7-grz9n" event={"ID":"2b554cc7-1556-47ef-8167-8661aa141e10","Type":"ContainerDied","Data":"9cf72e7f4313cd47b75fdd5b942312a9f39047f8fa7813585b8f6d6b616e2598"} Mar 12 21:25:19.206450 master-0 kubenswrapper[31456]: I0312 21:25:19.206345 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5484f4d7-grz9n" event={"ID":"2b554cc7-1556-47ef-8167-8661aa141e10","Type":"ContainerStarted","Data":"07b7342fa7c0b23946d5d10a249e15bef4729f4d26d8c4cf6aa02c99dd1515ab"} Mar 12 21:25:19.280910 master-0 kubenswrapper[31456]: I0312 21:25:19.280799 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-b7rpf-config-lpd5f"] Mar 12 21:25:19.584270 master-0 kubenswrapper[31456]: I0312 
21:25:19.583436 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-qsh5p"]
Mar 12 21:25:19.586764 master-0 kubenswrapper[31456]: W0312 21:25:19.586724 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b67fa12_637c_4880_b717_d46e768d3112.slice/crio-67a909ced2bcec97d6e28ae6fbc96e19fe0e95d1d6236b0523414116e47b75c6 WatchSource:0}: Error finding container 67a909ced2bcec97d6e28ae6fbc96e19fe0e95d1d6236b0523414116e47b75c6: Status 404 returned error can't find the container with id 67a909ced2bcec97d6e28ae6fbc96e19fe0e95d1d6236b0523414116e47b75c6
Mar 12 21:25:20.223143 master-0 kubenswrapper[31456]: I0312 21:25:20.222929 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5484f4d7-grz9n" event={"ID":"2b554cc7-1556-47ef-8167-8661aa141e10","Type":"ContainerStarted","Data":"0020efdbee720d4f1b99b182496c37daf188c0f949c7cc537f557803d3b3a7e4"}
Mar 12 21:25:20.223143 master-0 kubenswrapper[31456]: I0312 21:25:20.223097 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d5484f4d7-grz9n"
Mar 12 21:25:20.225606 master-0 kubenswrapper[31456]: I0312 21:25:20.225515 31456 generic.go:334] "Generic (PLEG): container finished" podID="51bed24b-3ab1-470b-9d9d-fabfdf633f81" containerID="88107639b34c604dbd609853ad95e79e0392a97cc72fc2d2498d7c90bc383d59" exitCode=0
Mar 12 21:25:20.225777 master-0 kubenswrapper[31456]: I0312 21:25:20.225738 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-b7rpf-config-lpd5f" event={"ID":"51bed24b-3ab1-470b-9d9d-fabfdf633f81","Type":"ContainerDied","Data":"88107639b34c604dbd609853ad95e79e0392a97cc72fc2d2498d7c90bc383d59"}
Mar 12 21:25:20.225850 master-0 kubenswrapper[31456]: I0312 21:25:20.225783 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-b7rpf-config-lpd5f" event={"ID":"51bed24b-3ab1-470b-9d9d-fabfdf633f81","Type":"ContainerStarted","Data":"614130fc32980eff65e6e6843b429fcf9e2d3baf3cb4233889ace7b54e068899"}
Mar 12 21:25:20.227204 master-0 kubenswrapper[31456]: I0312 21:25:20.227161 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qsh5p" event={"ID":"6b67fa12-637c-4880-b717-d46e768d3112","Type":"ContainerStarted","Data":"67a909ced2bcec97d6e28ae6fbc96e19fe0e95d1d6236b0523414116e47b75c6"}
Mar 12 21:25:20.256390 master-0 kubenswrapper[31456]: I0312 21:25:20.256272 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d5484f4d7-grz9n" podStartSLOduration=3.256251511 podStartE2EDuration="3.256251511s" podCreationTimestamp="2026-03-12 21:25:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:25:20.24296092 +0000 UTC m=+981.317566258" watchObservedRunningTime="2026-03-12 21:25:20.256251511 +0000 UTC m=+981.330856859"
Mar 12 21:25:21.727403 master-0 kubenswrapper[31456]: I0312 21:25:21.727336 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-b7rpf-config-lpd5f"
Mar 12 21:25:21.919250 master-0 kubenswrapper[31456]: I0312 21:25:21.919119 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/51bed24b-3ab1-470b-9d9d-fabfdf633f81-var-log-ovn\") pod \"51bed24b-3ab1-470b-9d9d-fabfdf633f81\" (UID: \"51bed24b-3ab1-470b-9d9d-fabfdf633f81\") "
Mar 12 21:25:21.919250 master-0 kubenswrapper[31456]: I0312 21:25:21.919212 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/51bed24b-3ab1-470b-9d9d-fabfdf633f81-var-run-ovn\") pod \"51bed24b-3ab1-470b-9d9d-fabfdf633f81\" (UID: \"51bed24b-3ab1-470b-9d9d-fabfdf633f81\") "
Mar 12 21:25:21.919494 master-0 kubenswrapper[31456]: I0312 21:25:21.919355 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/51bed24b-3ab1-470b-9d9d-fabfdf633f81-additional-scripts\") pod \"51bed24b-3ab1-470b-9d9d-fabfdf633f81\" (UID: \"51bed24b-3ab1-470b-9d9d-fabfdf633f81\") "
Mar 12 21:25:21.919494 master-0 kubenswrapper[31456]: I0312 21:25:21.919362 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51bed24b-3ab1-470b-9d9d-fabfdf633f81-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "51bed24b-3ab1-470b-9d9d-fabfdf633f81" (UID: "51bed24b-3ab1-470b-9d9d-fabfdf633f81"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 21:25:21.919494 master-0 kubenswrapper[31456]: I0312 21:25:21.919408 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5k48\" (UniqueName: \"kubernetes.io/projected/51bed24b-3ab1-470b-9d9d-fabfdf633f81-kube-api-access-d5k48\") pod \"51bed24b-3ab1-470b-9d9d-fabfdf633f81\" (UID: \"51bed24b-3ab1-470b-9d9d-fabfdf633f81\") "
Mar 12 21:25:21.919494 master-0 kubenswrapper[31456]: I0312 21:25:21.919430 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51bed24b-3ab1-470b-9d9d-fabfdf633f81-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "51bed24b-3ab1-470b-9d9d-fabfdf633f81" (UID: "51bed24b-3ab1-470b-9d9d-fabfdf633f81"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 21:25:21.919494 master-0 kubenswrapper[31456]: I0312 21:25:21.919451 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/51bed24b-3ab1-470b-9d9d-fabfdf633f81-var-run\") pod \"51bed24b-3ab1-470b-9d9d-fabfdf633f81\" (UID: \"51bed24b-3ab1-470b-9d9d-fabfdf633f81\") "
Mar 12 21:25:21.919668 master-0 kubenswrapper[31456]: I0312 21:25:21.919516 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51bed24b-3ab1-470b-9d9d-fabfdf633f81-var-run" (OuterVolumeSpecName: "var-run") pod "51bed24b-3ab1-470b-9d9d-fabfdf633f81" (UID: "51bed24b-3ab1-470b-9d9d-fabfdf633f81"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 21:25:21.919668 master-0 kubenswrapper[31456]: I0312 21:25:21.919620 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51bed24b-3ab1-470b-9d9d-fabfdf633f81-scripts\") pod \"51bed24b-3ab1-470b-9d9d-fabfdf633f81\" (UID: \"51bed24b-3ab1-470b-9d9d-fabfdf633f81\") "
Mar 12 21:25:21.920992 master-0 kubenswrapper[31456]: I0312 21:25:21.920961 31456 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/51bed24b-3ab1-470b-9d9d-fabfdf633f81-var-run\") on node \"master-0\" DevicePath \"\""
Mar 12 21:25:21.921078 master-0 kubenswrapper[31456]: I0312 21:25:21.920995 31456 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/51bed24b-3ab1-470b-9d9d-fabfdf633f81-var-log-ovn\") on node \"master-0\" DevicePath \"\""
Mar 12 21:25:21.921078 master-0 kubenswrapper[31456]: I0312 21:25:21.921012 31456 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/51bed24b-3ab1-470b-9d9d-fabfdf633f81-var-run-ovn\") on node \"master-0\" DevicePath \"\""
Mar 12 21:25:21.921078 master-0 kubenswrapper[31456]: I0312 21:25:21.921014 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51bed24b-3ab1-470b-9d9d-fabfdf633f81-scripts" (OuterVolumeSpecName: "scripts") pod "51bed24b-3ab1-470b-9d9d-fabfdf633f81" (UID: "51bed24b-3ab1-470b-9d9d-fabfdf633f81"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:25:21.921192 master-0 kubenswrapper[31456]: I0312 21:25:21.921147 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51bed24b-3ab1-470b-9d9d-fabfdf633f81-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "51bed24b-3ab1-470b-9d9d-fabfdf633f81" (UID: "51bed24b-3ab1-470b-9d9d-fabfdf633f81"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:25:21.928236 master-0 kubenswrapper[31456]: I0312 21:25:21.928168 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51bed24b-3ab1-470b-9d9d-fabfdf633f81-kube-api-access-d5k48" (OuterVolumeSpecName: "kube-api-access-d5k48") pod "51bed24b-3ab1-470b-9d9d-fabfdf633f81" (UID: "51bed24b-3ab1-470b-9d9d-fabfdf633f81"). InnerVolumeSpecName "kube-api-access-d5k48". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 21:25:22.022929 master-0 kubenswrapper[31456]: I0312 21:25:22.022844 31456 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/51bed24b-3ab1-470b-9d9d-fabfdf633f81-additional-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 21:25:22.022929 master-0 kubenswrapper[31456]: I0312 21:25:22.022896 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5k48\" (UniqueName: \"kubernetes.io/projected/51bed24b-3ab1-470b-9d9d-fabfdf633f81-kube-api-access-d5k48\") on node \"master-0\" DevicePath \"\""
Mar 12 21:25:22.022929 master-0 kubenswrapper[31456]: I0312 21:25:22.022912 31456 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51bed24b-3ab1-470b-9d9d-fabfdf633f81-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 21:25:22.258484 master-0 kubenswrapper[31456]: I0312 21:25:22.258344 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-b7rpf-config-lpd5f" event={"ID":"51bed24b-3ab1-470b-9d9d-fabfdf633f81","Type":"ContainerDied","Data":"614130fc32980eff65e6e6843b429fcf9e2d3baf3cb4233889ace7b54e068899"}
Mar 12 21:25:22.258484 master-0 kubenswrapper[31456]: I0312 21:25:22.258401 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="614130fc32980eff65e6e6843b429fcf9e2d3baf3cb4233889ace7b54e068899"
Mar 12 21:25:22.258484 master-0 kubenswrapper[31456]: I0312 21:25:22.258433 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-b7rpf-config-lpd5f"
Mar 12 21:25:22.916321 master-0 kubenswrapper[31456]: I0312 21:25:22.916222 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-b7rpf-config-lpd5f"]
Mar 12 21:25:22.929887 master-0 kubenswrapper[31456]: I0312 21:25:22.929800 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-b7rpf-config-lpd5f"]
Mar 12 21:25:23.036863 master-0 kubenswrapper[31456]: I0312 21:25:23.036742 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-b7rpf-config-98zc7"]
Mar 12 21:25:23.038741 master-0 kubenswrapper[31456]: E0312 21:25:23.038723 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51bed24b-3ab1-470b-9d9d-fabfdf633f81" containerName="ovn-config"
Mar 12 21:25:23.038876 master-0 kubenswrapper[31456]: I0312 21:25:23.038865 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="51bed24b-3ab1-470b-9d9d-fabfdf633f81" containerName="ovn-config"
Mar 12 21:25:23.039315 master-0 kubenswrapper[31456]: I0312 21:25:23.039300 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="51bed24b-3ab1-470b-9d9d-fabfdf633f81" containerName="ovn-config"
Mar 12 21:25:23.043461 master-0 kubenswrapper[31456]: I0312 21:25:23.043444 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-b7rpf-config-98zc7"
Mar 12 21:25:23.046604 master-0 kubenswrapper[31456]: I0312 21:25:23.046570 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Mar 12 21:25:23.048669 master-0 kubenswrapper[31456]: I0312 21:25:23.048596 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/53aabeb1-168b-479a-aff0-b006d94a0650-var-log-ovn\") pod \"ovn-controller-b7rpf-config-98zc7\" (UID: \"53aabeb1-168b-479a-aff0-b006d94a0650\") " pod="openstack/ovn-controller-b7rpf-config-98zc7"
Mar 12 21:25:23.048936 master-0 kubenswrapper[31456]: I0312 21:25:23.048676 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/53aabeb1-168b-479a-aff0-b006d94a0650-scripts\") pod \"ovn-controller-b7rpf-config-98zc7\" (UID: \"53aabeb1-168b-479a-aff0-b006d94a0650\") " pod="openstack/ovn-controller-b7rpf-config-98zc7"
Mar 12 21:25:23.048936 master-0 kubenswrapper[31456]: I0312 21:25:23.048730 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhvvx\" (UniqueName: \"kubernetes.io/projected/53aabeb1-168b-479a-aff0-b006d94a0650-kube-api-access-fhvvx\") pod \"ovn-controller-b7rpf-config-98zc7\" (UID: \"53aabeb1-168b-479a-aff0-b006d94a0650\") " pod="openstack/ovn-controller-b7rpf-config-98zc7"
Mar 12 21:25:23.050447 master-0 kubenswrapper[31456]: I0312 21:25:23.048962 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/53aabeb1-168b-479a-aff0-b006d94a0650-var-run-ovn\") pod \"ovn-controller-b7rpf-config-98zc7\" (UID: \"53aabeb1-168b-479a-aff0-b006d94a0650\") " pod="openstack/ovn-controller-b7rpf-config-98zc7"
Mar 12 21:25:23.050447 master-0 kubenswrapper[31456]: I0312 21:25:23.049020 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/53aabeb1-168b-479a-aff0-b006d94a0650-var-run\") pod \"ovn-controller-b7rpf-config-98zc7\" (UID: \"53aabeb1-168b-479a-aff0-b006d94a0650\") " pod="openstack/ovn-controller-b7rpf-config-98zc7"
Mar 12 21:25:23.050447 master-0 kubenswrapper[31456]: I0312 21:25:23.049049 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/53aabeb1-168b-479a-aff0-b006d94a0650-additional-scripts\") pod \"ovn-controller-b7rpf-config-98zc7\" (UID: \"53aabeb1-168b-479a-aff0-b006d94a0650\") " pod="openstack/ovn-controller-b7rpf-config-98zc7"
Mar 12 21:25:23.058796 master-0 kubenswrapper[31456]: I0312 21:25:23.058689 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-b7rpf"
Mar 12 21:25:23.086020 master-0 kubenswrapper[31456]: I0312 21:25:23.084944 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-b7rpf-config-98zc7"]
Mar 12 21:25:23.154998 master-0 kubenswrapper[31456]: I0312 21:25:23.154891 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/53aabeb1-168b-479a-aff0-b006d94a0650-var-log-ovn\") pod \"ovn-controller-b7rpf-config-98zc7\" (UID: \"53aabeb1-168b-479a-aff0-b006d94a0650\") " pod="openstack/ovn-controller-b7rpf-config-98zc7"
Mar 12 21:25:23.154998 master-0 kubenswrapper[31456]: I0312 21:25:23.154998 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/53aabeb1-168b-479a-aff0-b006d94a0650-scripts\") pod \"ovn-controller-b7rpf-config-98zc7\" (UID: \"53aabeb1-168b-479a-aff0-b006d94a0650\") " pod="openstack/ovn-controller-b7rpf-config-98zc7"
Mar 12 21:25:23.155256 master-0 kubenswrapper[31456]: I0312 21:25:23.155036 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhvvx\" (UniqueName: \"kubernetes.io/projected/53aabeb1-168b-479a-aff0-b006d94a0650-kube-api-access-fhvvx\") pod \"ovn-controller-b7rpf-config-98zc7\" (UID: \"53aabeb1-168b-479a-aff0-b006d94a0650\") " pod="openstack/ovn-controller-b7rpf-config-98zc7"
Mar 12 21:25:23.155256 master-0 kubenswrapper[31456]: I0312 21:25:23.155154 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/53aabeb1-168b-479a-aff0-b006d94a0650-var-run-ovn\") pod \"ovn-controller-b7rpf-config-98zc7\" (UID: \"53aabeb1-168b-479a-aff0-b006d94a0650\") " pod="openstack/ovn-controller-b7rpf-config-98zc7"
Mar 12 21:25:23.155256 master-0 kubenswrapper[31456]: I0312 21:25:23.155206 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/53aabeb1-168b-479a-aff0-b006d94a0650-var-run\") pod \"ovn-controller-b7rpf-config-98zc7\" (UID: \"53aabeb1-168b-479a-aff0-b006d94a0650\") " pod="openstack/ovn-controller-b7rpf-config-98zc7"
Mar 12 21:25:23.155256 master-0 kubenswrapper[31456]: I0312 21:25:23.155227 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/53aabeb1-168b-479a-aff0-b006d94a0650-additional-scripts\") pod \"ovn-controller-b7rpf-config-98zc7\" (UID: \"53aabeb1-168b-479a-aff0-b006d94a0650\") " pod="openstack/ovn-controller-b7rpf-config-98zc7"
Mar 12 21:25:23.156222 master-0 kubenswrapper[31456]: I0312 21:25:23.156182 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/53aabeb1-168b-479a-aff0-b006d94a0650-additional-scripts\") pod \"ovn-controller-b7rpf-config-98zc7\" (UID: \"53aabeb1-168b-479a-aff0-b006d94a0650\") " pod="openstack/ovn-controller-b7rpf-config-98zc7"
Mar 12 21:25:23.156312 master-0 kubenswrapper[31456]: I0312 21:25:23.156281 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/53aabeb1-168b-479a-aff0-b006d94a0650-var-log-ovn\") pod \"ovn-controller-b7rpf-config-98zc7\" (UID: \"53aabeb1-168b-479a-aff0-b006d94a0650\") " pod="openstack/ovn-controller-b7rpf-config-98zc7"
Mar 12 21:25:23.157037 master-0 kubenswrapper[31456]: I0312 21:25:23.156974 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/53aabeb1-168b-479a-aff0-b006d94a0650-var-run-ovn\") pod \"ovn-controller-b7rpf-config-98zc7\" (UID: \"53aabeb1-168b-479a-aff0-b006d94a0650\") " pod="openstack/ovn-controller-b7rpf-config-98zc7"
Mar 12 21:25:23.157389 master-0 kubenswrapper[31456]: I0312 21:25:23.157365 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/53aabeb1-168b-479a-aff0-b006d94a0650-var-run\") pod \"ovn-controller-b7rpf-config-98zc7\" (UID: \"53aabeb1-168b-479a-aff0-b006d94a0650\") " pod="openstack/ovn-controller-b7rpf-config-98zc7"
Mar 12 21:25:23.158886 master-0 kubenswrapper[31456]: I0312 21:25:23.158083 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/53aabeb1-168b-479a-aff0-b006d94a0650-scripts\") pod \"ovn-controller-b7rpf-config-98zc7\" (UID: \"53aabeb1-168b-479a-aff0-b006d94a0650\") " pod="openstack/ovn-controller-b7rpf-config-98zc7"
Mar 12 21:25:23.177509 master-0 kubenswrapper[31456]: I0312 21:25:23.177395 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhvvx\" (UniqueName: \"kubernetes.io/projected/53aabeb1-168b-479a-aff0-b006d94a0650-kube-api-access-fhvvx\") pod \"ovn-controller-b7rpf-config-98zc7\" (UID: \"53aabeb1-168b-479a-aff0-b006d94a0650\") " pod="openstack/ovn-controller-b7rpf-config-98zc7"
Mar 12 21:25:23.184361 master-0 kubenswrapper[31456]: I0312 21:25:23.183765 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51bed24b-3ab1-470b-9d9d-fabfdf633f81" path="/var/lib/kubelet/pods/51bed24b-3ab1-470b-9d9d-fabfdf633f81/volumes"
Mar 12 21:25:23.376862 master-0 kubenswrapper[31456]: I0312 21:25:23.376778 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-b7rpf-config-98zc7"
Mar 12 21:25:23.910734 master-0 kubenswrapper[31456]: I0312 21:25:23.910669 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-b7rpf-config-98zc7"]
Mar 12 21:25:24.288780 master-0 kubenswrapper[31456]: I0312 21:25:24.288717 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-b7rpf-config-98zc7" event={"ID":"53aabeb1-168b-479a-aff0-b006d94a0650","Type":"ContainerStarted","Data":"ab9ab84e7a4d103c0a683da471112cc713dcba501122eb13e2ab4f9d139682af"}
Mar 12 21:25:24.288780 master-0 kubenswrapper[31456]: I0312 21:25:24.288780 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-b7rpf-config-98zc7" event={"ID":"53aabeb1-168b-479a-aff0-b006d94a0650","Type":"ContainerStarted","Data":"ad325d67d565e2144efa2a11922dab2617e0d0684891149e1ee6bf54102d3f09"}
Mar 12 21:25:25.306769 master-0 kubenswrapper[31456]: I0312 21:25:25.306687 31456 generic.go:334] "Generic (PLEG): container finished" podID="53aabeb1-168b-479a-aff0-b006d94a0650" containerID="ab9ab84e7a4d103c0a683da471112cc713dcba501122eb13e2ab4f9d139682af" exitCode=0
Mar 12 21:25:25.307439 master-0 kubenswrapper[31456]: I0312 21:25:25.306768 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-b7rpf-config-98zc7" event={"ID":"53aabeb1-168b-479a-aff0-b006d94a0650","Type":"ContainerDied","Data":"ab9ab84e7a4d103c0a683da471112cc713dcba501122eb13e2ab4f9d139682af"}
Mar 12 21:25:27.761308 master-0 kubenswrapper[31456]: I0312 21:25:27.761136 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Mar 12 21:25:27.870985 master-0 kubenswrapper[31456]: I0312 21:25:27.868174 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d5484f4d7-grz9n"
Mar 12 21:25:27.996270 master-0 kubenswrapper[31456]: I0312 21:25:27.995947 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf8b865dc-2xtgl"]
Mar 12 21:25:27.996575 master-0 kubenswrapper[31456]: I0312 21:25:27.996517 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf8b865dc-2xtgl" podUID="9353def4-ea82-4589-9503-c32939b3ff21" containerName="dnsmasq-dns" containerID="cri-o://e3c02c7f977ef12e294b9f8c95375dfc6794cf6a587ae7eabb3b43cd7a4bb755" gracePeriod=10
Mar 12 21:25:28.499866 master-0 kubenswrapper[31456]: I0312 21:25:28.494921 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-pjn56"]
Mar 12 21:25:28.507325 master-0 kubenswrapper[31456]: I0312 21:25:28.505147 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-pjn56"
Mar 12 21:25:28.556679 master-0 kubenswrapper[31456]: I0312 21:25:28.556189 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-pjn56"]
Mar 12 21:25:28.619925 master-0 kubenswrapper[31456]: I0312 21:25:28.619868 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-23e5-account-create-update-qlhcj"]
Mar 12 21:25:28.621225 master-0 kubenswrapper[31456]: I0312 21:25:28.621132 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-23e5-account-create-update-qlhcj"
Mar 12 21:25:28.632888 master-0 kubenswrapper[31456]: I0312 21:25:28.625956 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Mar 12 21:25:28.643470 master-0 kubenswrapper[31456]: I0312 21:25:28.643385 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-23e5-account-create-update-qlhcj"]
Mar 12 21:25:28.666299 master-0 kubenswrapper[31456]: I0312 21:25:28.660302 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c05e4bb-1dfc-47d7-b9f0-0c2fc22c8b30-operator-scripts\") pod \"cinder-db-create-pjn56\" (UID: \"0c05e4bb-1dfc-47d7-b9f0-0c2fc22c8b30\") " pod="openstack/cinder-db-create-pjn56"
Mar 12 21:25:28.666299 master-0 kubenswrapper[31456]: I0312 21:25:28.660388 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c4hq\" (UniqueName: \"kubernetes.io/projected/0c05e4bb-1dfc-47d7-b9f0-0c2fc22c8b30-kube-api-access-2c4hq\") pod \"cinder-db-create-pjn56\" (UID: \"0c05e4bb-1dfc-47d7-b9f0-0c2fc22c8b30\") " pod="openstack/cinder-db-create-pjn56"
Mar 12 21:25:28.740154 master-0 kubenswrapper[31456]: I0312 21:25:28.730502 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-fthjz"]
Mar 12 21:25:28.740154 master-0 kubenswrapper[31456]: I0312 21:25:28.731837 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-fthjz"
Mar 12 21:25:28.760479 master-0 kubenswrapper[31456]: I0312 21:25:28.747885 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Mar 12 21:25:28.767917 master-0 kubenswrapper[31456]: I0312 21:25:28.761398 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Mar 12 21:25:28.768685 master-0 kubenswrapper[31456]: I0312 21:25:28.763719 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb0472a9-9d25-4efe-9032-c8afdc106678-combined-ca-bundle\") pod \"keystone-db-sync-fthjz\" (UID: \"eb0472a9-9d25-4efe-9032-c8afdc106678\") " pod="openstack/keystone-db-sync-fthjz"
Mar 12 21:25:28.780460 master-0 kubenswrapper[31456]: I0312 21:25:28.764144 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Mar 12 21:25:28.780460 master-0 kubenswrapper[31456]: I0312 21:25:28.775151 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-fthjz"]
Mar 12 21:25:28.780832 master-0 kubenswrapper[31456]: I0312 21:25:28.780773 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjm4h\" (UniqueName: \"kubernetes.io/projected/a01f2e87-21e3-433f-a65d-d6f66e6dd1f9-kube-api-access-wjm4h\") pod \"cinder-23e5-account-create-update-qlhcj\" (UID: \"a01f2e87-21e3-433f-a65d-d6f66e6dd1f9\") " pod="openstack/cinder-23e5-account-create-update-qlhcj"
Mar 12 21:25:28.780979 master-0 kubenswrapper[31456]: I0312 21:25:28.780962 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a01f2e87-21e3-433f-a65d-d6f66e6dd1f9-operator-scripts\") pod \"cinder-23e5-account-create-update-qlhcj\" (UID: \"a01f2e87-21e3-433f-a65d-d6f66e6dd1f9\") " pod="openstack/cinder-23e5-account-create-update-qlhcj"
Mar 12 21:25:28.781217 master-0 kubenswrapper[31456]: I0312 21:25:28.781202 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qlhc\" (UniqueName: \"kubernetes.io/projected/eb0472a9-9d25-4efe-9032-c8afdc106678-kube-api-access-6qlhc\") pod \"keystone-db-sync-fthjz\" (UID: \"eb0472a9-9d25-4efe-9032-c8afdc106678\") " pod="openstack/keystone-db-sync-fthjz"
Mar 12 21:25:28.781328 master-0 kubenswrapper[31456]: I0312 21:25:28.781315 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb0472a9-9d25-4efe-9032-c8afdc106678-config-data\") pod \"keystone-db-sync-fthjz\" (UID: \"eb0472a9-9d25-4efe-9032-c8afdc106678\") " pod="openstack/keystone-db-sync-fthjz"
Mar 12 21:25:28.781649 master-0 kubenswrapper[31456]: I0312 21:25:28.781635 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c05e4bb-1dfc-47d7-b9f0-0c2fc22c8b30-operator-scripts\") pod \"cinder-db-create-pjn56\" (UID: \"0c05e4bb-1dfc-47d7-b9f0-0c2fc22c8b30\") " pod="openstack/cinder-db-create-pjn56"
Mar 12 21:25:28.781737 master-0 kubenswrapper[31456]: I0312 21:25:28.781725 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c4hq\" (UniqueName: \"kubernetes.io/projected/0c05e4bb-1dfc-47d7-b9f0-0c2fc22c8b30-kube-api-access-2c4hq\") pod \"cinder-db-create-pjn56\" (UID: \"0c05e4bb-1dfc-47d7-b9f0-0c2fc22c8b30\") " pod="openstack/cinder-db-create-pjn56"
Mar 12 21:25:28.790019 master-0 kubenswrapper[31456]: I0312 21:25:28.782968 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c05e4bb-1dfc-47d7-b9f0-0c2fc22c8b30-operator-scripts\") pod \"cinder-db-create-pjn56\" (UID: \"0c05e4bb-1dfc-47d7-b9f0-0c2fc22c8b30\") " pod="openstack/cinder-db-create-pjn56"
Mar 12 21:25:28.812039 master-0 kubenswrapper[31456]: I0312 21:25:28.811968 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c4hq\" (UniqueName: \"kubernetes.io/projected/0c05e4bb-1dfc-47d7-b9f0-0c2fc22c8b30-kube-api-access-2c4hq\") pod \"cinder-db-create-pjn56\" (UID: \"0c05e4bb-1dfc-47d7-b9f0-0c2fc22c8b30\") " pod="openstack/cinder-db-create-pjn56"
Mar 12 21:25:28.824947 master-0 kubenswrapper[31456]: I0312 21:25:28.824852 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-ssg44"]
Mar 12 21:25:28.835826 master-0 kubenswrapper[31456]: I0312 21:25:28.829513 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ssg44"
Mar 12 21:25:28.859837 master-0 kubenswrapper[31456]: I0312 21:25:28.859259 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-ssg44"]
Mar 12 21:25:28.880255 master-0 kubenswrapper[31456]: I0312 21:25:28.879243 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-pjn56"
Mar 12 21:25:28.884841 master-0 kubenswrapper[31456]: I0312 21:25:28.884760 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a01f2e87-21e3-433f-a65d-d6f66e6dd1f9-operator-scripts\") pod \"cinder-23e5-account-create-update-qlhcj\" (UID: \"a01f2e87-21e3-433f-a65d-d6f66e6dd1f9\") " pod="openstack/cinder-23e5-account-create-update-qlhcj"
Mar 12 21:25:28.888108 master-0 kubenswrapper[31456]: I0312 21:25:28.884880 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c813ae4-0bfc-4a61-b602-9ce03baad036-operator-scripts\") pod \"neutron-db-create-ssg44\" (UID: \"6c813ae4-0bfc-4a61-b602-9ce03baad036\") " pod="openstack/neutron-db-create-ssg44"
Mar 12 21:25:28.888108 master-0 kubenswrapper[31456]: I0312 21:25:28.885771 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qlhc\" (UniqueName: \"kubernetes.io/projected/eb0472a9-9d25-4efe-9032-c8afdc106678-kube-api-access-6qlhc\") pod \"keystone-db-sync-fthjz\" (UID: \"eb0472a9-9d25-4efe-9032-c8afdc106678\") " pod="openstack/keystone-db-sync-fthjz"
Mar 12 21:25:28.888108 master-0 kubenswrapper[31456]: I0312 21:25:28.885779 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a01f2e87-21e3-433f-a65d-d6f66e6dd1f9-operator-scripts\") pod \"cinder-23e5-account-create-update-qlhcj\" (UID: \"a01f2e87-21e3-433f-a65d-d6f66e6dd1f9\") " pod="openstack/cinder-23e5-account-create-update-qlhcj"
Mar 12 21:25:28.888108 master-0 kubenswrapper[31456]: I0312 21:25:28.885833 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb0472a9-9d25-4efe-9032-c8afdc106678-config-data\") pod \"keystone-db-sync-fthjz\" (UID: \"eb0472a9-9d25-4efe-9032-c8afdc106678\") " pod="openstack/keystone-db-sync-fthjz"
Mar 12 21:25:28.888108 master-0 kubenswrapper[31456]: I0312 21:25:28.885944 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt8hc\" (UniqueName: \"kubernetes.io/projected/6c813ae4-0bfc-4a61-b602-9ce03baad036-kube-api-access-zt8hc\") pod \"neutron-db-create-ssg44\" (UID: \"6c813ae4-0bfc-4a61-b602-9ce03baad036\") " pod="openstack/neutron-db-create-ssg44"
Mar 12 21:25:28.888108 master-0 kubenswrapper[31456]: I0312 21:25:28.886027 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb0472a9-9d25-4efe-9032-c8afdc106678-combined-ca-bundle\") pod \"keystone-db-sync-fthjz\" (UID: \"eb0472a9-9d25-4efe-9032-c8afdc106678\") " pod="openstack/keystone-db-sync-fthjz"
Mar 12 21:25:28.888108 master-0 kubenswrapper[31456]: I0312 21:25:28.886075 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjm4h\" (UniqueName: \"kubernetes.io/projected/a01f2e87-21e3-433f-a65d-d6f66e6dd1f9-kube-api-access-wjm4h\") pod \"cinder-23e5-account-create-update-qlhcj\" (UID: \"a01f2e87-21e3-433f-a65d-d6f66e6dd1f9\") " pod="openstack/cinder-23e5-account-create-update-qlhcj"
Mar 12 21:25:28.903974 master-0 kubenswrapper[31456]: I0312 21:25:28.890652 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb0472a9-9d25-4efe-9032-c8afdc106678-config-data\") pod \"keystone-db-sync-fthjz\" (UID: \"eb0472a9-9d25-4efe-9032-c8afdc106678\") " pod="openstack/keystone-db-sync-fthjz"
Mar 12 21:25:28.907905 master-0 kubenswrapper[31456]: I0312 21:25:28.905726 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb0472a9-9d25-4efe-9032-c8afdc106678-combined-ca-bundle\") pod \"keystone-db-sync-fthjz\" (UID: \"eb0472a9-9d25-4efe-9032-c8afdc106678\") " pod="openstack/keystone-db-sync-fthjz"
Mar 12 21:25:28.928849 master-0 kubenswrapper[31456]: I0312 21:25:28.912008 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjm4h\" (UniqueName: \"kubernetes.io/projected/a01f2e87-21e3-433f-a65d-d6f66e6dd1f9-kube-api-access-wjm4h\") pod \"cinder-23e5-account-create-update-qlhcj\" (UID: \"a01f2e87-21e3-433f-a65d-d6f66e6dd1f9\") " pod="openstack/cinder-23e5-account-create-update-qlhcj"
Mar 12 21:25:28.928849 master-0 kubenswrapper[31456]: I0312 21:25:28.915510 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qlhc\" (UniqueName: \"kubernetes.io/projected/eb0472a9-9d25-4efe-9032-c8afdc106678-kube-api-access-6qlhc\") pod \"keystone-db-sync-fthjz\" (UID: \"eb0472a9-9d25-4efe-9032-c8afdc106678\") " pod="openstack/keystone-db-sync-fthjz"
Mar 12 21:25:28.977636 master-0 kubenswrapper[31456]: I0312 21:25:28.977540 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-23e5-account-create-update-qlhcj"
Mar 12 21:25:28.989998 master-0 kubenswrapper[31456]: I0312 21:25:28.989918 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt8hc\" (UniqueName: \"kubernetes.io/projected/6c813ae4-0bfc-4a61-b602-9ce03baad036-kube-api-access-zt8hc\") pod \"neutron-db-create-ssg44\" (UID: \"6c813ae4-0bfc-4a61-b602-9ce03baad036\") " pod="openstack/neutron-db-create-ssg44"
Mar 12 21:25:28.990145 master-0 kubenswrapper[31456]: I0312 21:25:28.990123 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c813ae4-0bfc-4a61-b602-9ce03baad036-operator-scripts\") pod \"neutron-db-create-ssg44\" (UID: \"6c813ae4-0bfc-4a61-b602-9ce03baad036\") " pod="openstack/neutron-db-create-ssg44"
Mar 12 21:25:28.991011 master-0 kubenswrapper[31456]: I0312 21:25:28.990975 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c813ae4-0bfc-4a61-b602-9ce03baad036-operator-scripts\") pod \"neutron-db-create-ssg44\" (UID: \"6c813ae4-0bfc-4a61-b602-9ce03baad036\") " pod="openstack/neutron-db-create-ssg44"
Mar 12 21:25:29.024442 master-0 kubenswrapper[31456]: I0312 21:25:29.024040 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8df6-account-create-update-cmvwn"]
Mar 12 21:25:29.029999 master-0 kubenswrapper[31456]: I0312 21:25:29.026657 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8df6-account-create-update-cmvwn"
Mar 12 21:25:29.029999 master-0 kubenswrapper[31456]: I0312 21:25:29.029702 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt8hc\" (UniqueName: \"kubernetes.io/projected/6c813ae4-0bfc-4a61-b602-9ce03baad036-kube-api-access-zt8hc\") pod \"neutron-db-create-ssg44\" (UID: \"6c813ae4-0bfc-4a61-b602-9ce03baad036\") " pod="openstack/neutron-db-create-ssg44"
Mar 12 21:25:29.036884 master-0 kubenswrapper[31456]: I0312 21:25:29.034836 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Mar 12 21:25:29.066763 master-0 kubenswrapper[31456]: I0312 21:25:29.066565 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8df6-account-create-update-cmvwn"]
Mar 12 21:25:29.092463 master-0 kubenswrapper[31456]: I0312 21:25:29.091732 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f5b7eb2-f871-440e-889f-dd23a4a1e8ed-operator-scripts\") pod \"neutron-8df6-account-create-update-cmvwn\" (UID: \"2f5b7eb2-f871-440e-889f-dd23a4a1e8ed\") " pod="openstack/neutron-8df6-account-create-update-cmvwn"
Mar 12 21:25:29.092463 master-0 kubenswrapper[31456]: I0312 21:25:29.091824 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkv7n\" (UniqueName: \"kubernetes.io/projected/2f5b7eb2-f871-440e-889f-dd23a4a1e8ed-kube-api-access-rkv7n\") pod \"neutron-8df6-account-create-update-cmvwn\" (UID: \"2f5b7eb2-f871-440e-889f-dd23a4a1e8ed\") " pod="openstack/neutron-8df6-account-create-update-cmvwn"
Mar 12 21:25:29.106923 master-0 kubenswrapper[31456]: I0312 21:25:29.106795 31456 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/keystone-db-sync-fthjz" Mar 12 21:25:29.145698 master-0 kubenswrapper[31456]: I0312 21:25:29.145631 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 12 21:25:29.174711 master-0 kubenswrapper[31456]: I0312 21:25:29.174655 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ssg44" Mar 12 21:25:29.198875 master-0 kubenswrapper[31456]: I0312 21:25:29.197352 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f5b7eb2-f871-440e-889f-dd23a4a1e8ed-operator-scripts\") pod \"neutron-8df6-account-create-update-cmvwn\" (UID: \"2f5b7eb2-f871-440e-889f-dd23a4a1e8ed\") " pod="openstack/neutron-8df6-account-create-update-cmvwn" Mar 12 21:25:29.198875 master-0 kubenswrapper[31456]: I0312 21:25:29.197490 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkv7n\" (UniqueName: \"kubernetes.io/projected/2f5b7eb2-f871-440e-889f-dd23a4a1e8ed-kube-api-access-rkv7n\") pod \"neutron-8df6-account-create-update-cmvwn\" (UID: \"2f5b7eb2-f871-440e-889f-dd23a4a1e8ed\") " pod="openstack/neutron-8df6-account-create-update-cmvwn" Mar 12 21:25:29.198875 master-0 kubenswrapper[31456]: I0312 21:25:29.198218 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f5b7eb2-f871-440e-889f-dd23a4a1e8ed-operator-scripts\") pod \"neutron-8df6-account-create-update-cmvwn\" (UID: \"2f5b7eb2-f871-440e-889f-dd23a4a1e8ed\") " pod="openstack/neutron-8df6-account-create-update-cmvwn" Mar 12 21:25:29.256270 master-0 kubenswrapper[31456]: I0312 21:25:29.256182 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkv7n\" (UniqueName: 
\"kubernetes.io/projected/2f5b7eb2-f871-440e-889f-dd23a4a1e8ed-kube-api-access-rkv7n\") pod \"neutron-8df6-account-create-update-cmvwn\" (UID: \"2f5b7eb2-f871-440e-889f-dd23a4a1e8ed\") " pod="openstack/neutron-8df6-account-create-update-cmvwn" Mar 12 21:25:29.391544 master-0 kubenswrapper[31456]: I0312 21:25:29.391449 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8df6-account-create-update-cmvwn" Mar 12 21:25:31.428635 master-0 kubenswrapper[31456]: I0312 21:25:31.428538 31456 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5bf8b865dc-2xtgl" podUID="9353def4-ea82-4589-9503-c32939b3ff21" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.177:5353: connect: connection refused" Mar 12 21:25:35.687919 master-0 kubenswrapper[31456]: I0312 21:25:35.687648 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-b7rpf-config-98zc7" Mar 12 21:25:35.759975 master-0 kubenswrapper[31456]: I0312 21:25:35.759923 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/53aabeb1-168b-479a-aff0-b006d94a0650-additional-scripts\") pod \"53aabeb1-168b-479a-aff0-b006d94a0650\" (UID: \"53aabeb1-168b-479a-aff0-b006d94a0650\") " Mar 12 21:25:35.760189 master-0 kubenswrapper[31456]: I0312 21:25:35.760068 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/53aabeb1-168b-479a-aff0-b006d94a0650-var-run\") pod \"53aabeb1-168b-479a-aff0-b006d94a0650\" (UID: \"53aabeb1-168b-479a-aff0-b006d94a0650\") " Mar 12 21:25:35.760189 master-0 kubenswrapper[31456]: I0312 21:25:35.760102 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/53aabeb1-168b-479a-aff0-b006d94a0650-scripts\") pod 
\"53aabeb1-168b-479a-aff0-b006d94a0650\" (UID: \"53aabeb1-168b-479a-aff0-b006d94a0650\") " Mar 12 21:25:35.760189 master-0 kubenswrapper[31456]: I0312 21:25:35.760170 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhvvx\" (UniqueName: \"kubernetes.io/projected/53aabeb1-168b-479a-aff0-b006d94a0650-kube-api-access-fhvvx\") pod \"53aabeb1-168b-479a-aff0-b006d94a0650\" (UID: \"53aabeb1-168b-479a-aff0-b006d94a0650\") " Mar 12 21:25:35.760189 master-0 kubenswrapper[31456]: I0312 21:25:35.760186 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/53aabeb1-168b-479a-aff0-b006d94a0650-var-run-ovn\") pod \"53aabeb1-168b-479a-aff0-b006d94a0650\" (UID: \"53aabeb1-168b-479a-aff0-b006d94a0650\") " Mar 12 21:25:35.760320 master-0 kubenswrapper[31456]: I0312 21:25:35.760220 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/53aabeb1-168b-479a-aff0-b006d94a0650-var-log-ovn\") pod \"53aabeb1-168b-479a-aff0-b006d94a0650\" (UID: \"53aabeb1-168b-479a-aff0-b006d94a0650\") " Mar 12 21:25:35.761160 master-0 kubenswrapper[31456]: I0312 21:25:35.760417 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53aabeb1-168b-479a-aff0-b006d94a0650-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "53aabeb1-168b-479a-aff0-b006d94a0650" (UID: "53aabeb1-168b-479a-aff0-b006d94a0650"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:25:35.761160 master-0 kubenswrapper[31456]: I0312 21:25:35.760507 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53aabeb1-168b-479a-aff0-b006d94a0650-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "53aabeb1-168b-479a-aff0-b006d94a0650" (UID: "53aabeb1-168b-479a-aff0-b006d94a0650"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:25:35.761160 master-0 kubenswrapper[31456]: I0312 21:25:35.760794 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53aabeb1-168b-479a-aff0-b006d94a0650-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "53aabeb1-168b-479a-aff0-b006d94a0650" (UID: "53aabeb1-168b-479a-aff0-b006d94a0650"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:25:35.761160 master-0 kubenswrapper[31456]: I0312 21:25:35.760879 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53aabeb1-168b-479a-aff0-b006d94a0650-var-run" (OuterVolumeSpecName: "var-run") pod "53aabeb1-168b-479a-aff0-b006d94a0650" (UID: "53aabeb1-168b-479a-aff0-b006d94a0650"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:25:35.761160 master-0 kubenswrapper[31456]: I0312 21:25:35.761133 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53aabeb1-168b-479a-aff0-b006d94a0650-scripts" (OuterVolumeSpecName: "scripts") pod "53aabeb1-168b-479a-aff0-b006d94a0650" (UID: "53aabeb1-168b-479a-aff0-b006d94a0650"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:25:35.761544 master-0 kubenswrapper[31456]: I0312 21:25:35.761198 31456 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/53aabeb1-168b-479a-aff0-b006d94a0650-var-run\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:35.761544 master-0 kubenswrapper[31456]: I0312 21:25:35.761214 31456 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/53aabeb1-168b-479a-aff0-b006d94a0650-var-run-ovn\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:35.761544 master-0 kubenswrapper[31456]: I0312 21:25:35.761225 31456 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/53aabeb1-168b-479a-aff0-b006d94a0650-var-log-ovn\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:35.761544 master-0 kubenswrapper[31456]: I0312 21:25:35.761236 31456 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/53aabeb1-168b-479a-aff0-b006d94a0650-additional-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:35.772459 master-0 kubenswrapper[31456]: I0312 21:25:35.768347 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53aabeb1-168b-479a-aff0-b006d94a0650-kube-api-access-fhvvx" (OuterVolumeSpecName: "kube-api-access-fhvvx") pod "53aabeb1-168b-479a-aff0-b006d94a0650" (UID: "53aabeb1-168b-479a-aff0-b006d94a0650"). InnerVolumeSpecName "kube-api-access-fhvvx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:25:35.863982 master-0 kubenswrapper[31456]: I0312 21:25:35.863498 31456 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/53aabeb1-168b-479a-aff0-b006d94a0650-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:35.863982 master-0 kubenswrapper[31456]: I0312 21:25:35.863559 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhvvx\" (UniqueName: \"kubernetes.io/projected/53aabeb1-168b-479a-aff0-b006d94a0650-kube-api-access-fhvvx\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:35.993653 master-0 kubenswrapper[31456]: I0312 21:25:35.993613 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf8b865dc-2xtgl" Mar 12 21:25:36.067566 master-0 kubenswrapper[31456]: I0312 21:25:36.067505 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fntvg\" (UniqueName: \"kubernetes.io/projected/9353def4-ea82-4589-9503-c32939b3ff21-kube-api-access-fntvg\") pod \"9353def4-ea82-4589-9503-c32939b3ff21\" (UID: \"9353def4-ea82-4589-9503-c32939b3ff21\") " Mar 12 21:25:36.067742 master-0 kubenswrapper[31456]: I0312 21:25:36.067629 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9353def4-ea82-4589-9503-c32939b3ff21-ovsdbserver-sb\") pod \"9353def4-ea82-4589-9503-c32939b3ff21\" (UID: \"9353def4-ea82-4589-9503-c32939b3ff21\") " Mar 12 21:25:36.067742 master-0 kubenswrapper[31456]: I0312 21:25:36.067692 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9353def4-ea82-4589-9503-c32939b3ff21-ovsdbserver-nb\") pod \"9353def4-ea82-4589-9503-c32939b3ff21\" (UID: \"9353def4-ea82-4589-9503-c32939b3ff21\") " Mar 12 21:25:36.067856 master-0 
kubenswrapper[31456]: I0312 21:25:36.067754 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9353def4-ea82-4589-9503-c32939b3ff21-config\") pod \"9353def4-ea82-4589-9503-c32939b3ff21\" (UID: \"9353def4-ea82-4589-9503-c32939b3ff21\") " Mar 12 21:25:36.067856 master-0 kubenswrapper[31456]: I0312 21:25:36.067783 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9353def4-ea82-4589-9503-c32939b3ff21-dns-svc\") pod \"9353def4-ea82-4589-9503-c32939b3ff21\" (UID: \"9353def4-ea82-4589-9503-c32939b3ff21\") " Mar 12 21:25:36.072689 master-0 kubenswrapper[31456]: I0312 21:25:36.072612 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9353def4-ea82-4589-9503-c32939b3ff21-kube-api-access-fntvg" (OuterVolumeSpecName: "kube-api-access-fntvg") pod "9353def4-ea82-4589-9503-c32939b3ff21" (UID: "9353def4-ea82-4589-9503-c32939b3ff21"). InnerVolumeSpecName "kube-api-access-fntvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:25:36.110585 master-0 kubenswrapper[31456]: I0312 21:25:36.110506 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9353def4-ea82-4589-9503-c32939b3ff21-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9353def4-ea82-4589-9503-c32939b3ff21" (UID: "9353def4-ea82-4589-9503-c32939b3ff21"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:25:36.112317 master-0 kubenswrapper[31456]: I0312 21:25:36.112269 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9353def4-ea82-4589-9503-c32939b3ff21-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9353def4-ea82-4589-9503-c32939b3ff21" (UID: "9353def4-ea82-4589-9503-c32939b3ff21"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:25:36.118513 master-0 kubenswrapper[31456]: I0312 21:25:36.118470 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9353def4-ea82-4589-9503-c32939b3ff21-config" (OuterVolumeSpecName: "config") pod "9353def4-ea82-4589-9503-c32939b3ff21" (UID: "9353def4-ea82-4589-9503-c32939b3ff21"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:25:36.120190 master-0 kubenswrapper[31456]: I0312 21:25:36.120133 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9353def4-ea82-4589-9503-c32939b3ff21-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9353def4-ea82-4589-9503-c32939b3ff21" (UID: "9353def4-ea82-4589-9503-c32939b3ff21"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:25:36.192666 master-0 kubenswrapper[31456]: I0312 21:25:36.183712 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fntvg\" (UniqueName: \"kubernetes.io/projected/9353def4-ea82-4589-9503-c32939b3ff21-kube-api-access-fntvg\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:36.192666 master-0 kubenswrapper[31456]: I0312 21:25:36.183772 31456 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9353def4-ea82-4589-9503-c32939b3ff21-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:36.192666 master-0 kubenswrapper[31456]: I0312 21:25:36.183785 31456 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9353def4-ea82-4589-9503-c32939b3ff21-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:36.192666 master-0 kubenswrapper[31456]: I0312 21:25:36.183921 31456 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9353def4-ea82-4589-9503-c32939b3ff21-config\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:36.192666 master-0 kubenswrapper[31456]: I0312 21:25:36.183930 31456 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9353def4-ea82-4589-9503-c32939b3ff21-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:36.409409 master-0 kubenswrapper[31456]: I0312 21:25:36.408952 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-ssg44"] Mar 12 21:25:36.423595 master-0 kubenswrapper[31456]: W0312 21:25:36.421897 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c813ae4_0bfc_4a61_b602_9ce03baad036.slice/crio-ea2a0ecad19d8fa9fc97c59772e5fea7555767dba5394dd26fe6a70f5c8853d5 WatchSource:0}: Error finding container ea2a0ecad19d8fa9fc97c59772e5fea7555767dba5394dd26fe6a70f5c8853d5: Status 404 returned error can't find the container with id ea2a0ecad19d8fa9fc97c59772e5fea7555767dba5394dd26fe6a70f5c8853d5 Mar 12 21:25:36.424794 master-0 kubenswrapper[31456]: W0312 21:25:36.424722 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c05e4bb_1dfc_47d7_b9f0_0c2fc22c8b30.slice/crio-9623fbc4de7d5a0f7adbaf88cdb534a6af177a8020b867e2a20c6201bbfc2b9d WatchSource:0}: Error finding container 9623fbc4de7d5a0f7adbaf88cdb534a6af177a8020b867e2a20c6201bbfc2b9d: Status 404 returned error can't find the container with id 9623fbc4de7d5a0f7adbaf88cdb534a6af177a8020b867e2a20c6201bbfc2b9d Mar 12 21:25:36.431074 master-0 kubenswrapper[31456]: W0312 21:25:36.430556 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f5b7eb2_f871_440e_889f_dd23a4a1e8ed.slice/crio-66191d8ff21f459370c53659efca5b85337d1b27cc95bd9a04a34c04f32121ae 
WatchSource:0}: Error finding container 66191d8ff21f459370c53659efca5b85337d1b27cc95bd9a04a34c04f32121ae: Status 404 returned error can't find the container with id 66191d8ff21f459370c53659efca5b85337d1b27cc95bd9a04a34c04f32121ae Mar 12 21:25:36.435879 master-0 kubenswrapper[31456]: I0312 21:25:36.433445 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-fthjz"] Mar 12 21:25:36.455116 master-0 kubenswrapper[31456]: I0312 21:25:36.455043 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-pjn56"] Mar 12 21:25:36.478624 master-0 kubenswrapper[31456]: I0312 21:25:36.478494 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8df6-account-create-update-cmvwn"] Mar 12 21:25:36.485040 master-0 kubenswrapper[31456]: I0312 21:25:36.484983 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ssg44" event={"ID":"6c813ae4-0bfc-4a61-b602-9ce03baad036","Type":"ContainerStarted","Data":"ea2a0ecad19d8fa9fc97c59772e5fea7555767dba5394dd26fe6a70f5c8853d5"} Mar 12 21:25:36.486038 master-0 kubenswrapper[31456]: I0312 21:25:36.486013 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-fthjz" event={"ID":"eb0472a9-9d25-4efe-9032-c8afdc106678","Type":"ContainerStarted","Data":"c168d90b147d9e2fad82eefa2f01c41ac6f717acdb1ddded3ef64ebffc5e4bb3"} Mar 12 21:25:36.488388 master-0 kubenswrapper[31456]: I0312 21:25:36.488212 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-b7rpf-config-98zc7" event={"ID":"53aabeb1-168b-479a-aff0-b006d94a0650","Type":"ContainerDied","Data":"ad325d67d565e2144efa2a11922dab2617e0d0684891149e1ee6bf54102d3f09"} Mar 12 21:25:36.488388 master-0 kubenswrapper[31456]: I0312 21:25:36.488238 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad325d67d565e2144efa2a11922dab2617e0d0684891149e1ee6bf54102d3f09" Mar 12 21:25:36.488388 
master-0 kubenswrapper[31456]: I0312 21:25:36.488266 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-b7rpf-config-98zc7" Mar 12 21:25:36.492184 master-0 kubenswrapper[31456]: I0312 21:25:36.492097 31456 generic.go:334] "Generic (PLEG): container finished" podID="9353def4-ea82-4589-9503-c32939b3ff21" containerID="e3c02c7f977ef12e294b9f8c95375dfc6794cf6a587ae7eabb3b43cd7a4bb755" exitCode=0 Mar 12 21:25:36.492184 master-0 kubenswrapper[31456]: I0312 21:25:36.492162 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf8b865dc-2xtgl" event={"ID":"9353def4-ea82-4589-9503-c32939b3ff21","Type":"ContainerDied","Data":"e3c02c7f977ef12e294b9f8c95375dfc6794cf6a587ae7eabb3b43cd7a4bb755"} Mar 12 21:25:36.492323 master-0 kubenswrapper[31456]: I0312 21:25:36.492192 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf8b865dc-2xtgl" event={"ID":"9353def4-ea82-4589-9503-c32939b3ff21","Type":"ContainerDied","Data":"bca8f9297364b9de1b80a0f9240b80111913068278344eb8c8396cde388386a0"} Mar 12 21:25:36.492323 master-0 kubenswrapper[31456]: I0312 21:25:36.492210 31456 scope.go:117] "RemoveContainer" containerID="e3c02c7f977ef12e294b9f8c95375dfc6794cf6a587ae7eabb3b43cd7a4bb755" Mar 12 21:25:36.492323 master-0 kubenswrapper[31456]: I0312 21:25:36.492304 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf8b865dc-2xtgl" Mar 12 21:25:36.501593 master-0 kubenswrapper[31456]: I0312 21:25:36.501548 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pjn56" event={"ID":"0c05e4bb-1dfc-47d7-b9f0-0c2fc22c8b30","Type":"ContainerStarted","Data":"9623fbc4de7d5a0f7adbaf88cdb534a6af177a8020b867e2a20c6201bbfc2b9d"} Mar 12 21:25:36.503445 master-0 kubenswrapper[31456]: I0312 21:25:36.503422 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8df6-account-create-update-cmvwn" event={"ID":"2f5b7eb2-f871-440e-889f-dd23a4a1e8ed","Type":"ContainerStarted","Data":"66191d8ff21f459370c53659efca5b85337d1b27cc95bd9a04a34c04f32121ae"} Mar 12 21:25:36.506240 master-0 kubenswrapper[31456]: I0312 21:25:36.506190 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qsh5p" event={"ID":"6b67fa12-637c-4880-b717-d46e768d3112","Type":"ContainerStarted","Data":"368468d679847a729afcf36bc52d6c60a0d0d285bc39d3167abddab4b80592d6"} Mar 12 21:25:36.529855 master-0 kubenswrapper[31456]: I0312 21:25:36.529421 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-qsh5p" podStartSLOduration=2.5289708920000002 podStartE2EDuration="18.52940346s" podCreationTimestamp="2026-03-12 21:25:18 +0000 UTC" firstStartedPulling="2026-03-12 21:25:19.589343251 +0000 UTC m=+980.663948579" lastFinishedPulling="2026-03-12 21:25:35.589775819 +0000 UTC m=+996.664381147" observedRunningTime="2026-03-12 21:25:36.523109117 +0000 UTC m=+997.597714445" watchObservedRunningTime="2026-03-12 21:25:36.52940346 +0000 UTC m=+997.604008788" Mar 12 21:25:36.583012 master-0 kubenswrapper[31456]: I0312 21:25:36.575765 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-23e5-account-create-update-qlhcj"] Mar 12 21:25:36.583012 master-0 kubenswrapper[31456]: I0312 21:25:36.581635 31456 scope.go:117] "RemoveContainer" 
containerID="9786d542ec77ccbe0ae779a57e603460b05761696ca09aa45bb28b4573fa50ea" Mar 12 21:25:36.597164 master-0 kubenswrapper[31456]: W0312 21:25:36.597107 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda01f2e87_21e3_433f_a65d_d6f66e6dd1f9.slice/crio-f694d8ae694555d7429b07b890dffd7adb7ecb6d8e1e919baee61485e8e6236e WatchSource:0}: Error finding container f694d8ae694555d7429b07b890dffd7adb7ecb6d8e1e919baee61485e8e6236e: Status 404 returned error can't find the container with id f694d8ae694555d7429b07b890dffd7adb7ecb6d8e1e919baee61485e8e6236e Mar 12 21:25:36.630477 master-0 kubenswrapper[31456]: I0312 21:25:36.630432 31456 scope.go:117] "RemoveContainer" containerID="e3c02c7f977ef12e294b9f8c95375dfc6794cf6a587ae7eabb3b43cd7a4bb755" Mar 12 21:25:36.631684 master-0 kubenswrapper[31456]: E0312 21:25:36.631505 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3c02c7f977ef12e294b9f8c95375dfc6794cf6a587ae7eabb3b43cd7a4bb755\": container with ID starting with e3c02c7f977ef12e294b9f8c95375dfc6794cf6a587ae7eabb3b43cd7a4bb755 not found: ID does not exist" containerID="e3c02c7f977ef12e294b9f8c95375dfc6794cf6a587ae7eabb3b43cd7a4bb755" Mar 12 21:25:36.631684 master-0 kubenswrapper[31456]: I0312 21:25:36.631559 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3c02c7f977ef12e294b9f8c95375dfc6794cf6a587ae7eabb3b43cd7a4bb755"} err="failed to get container status \"e3c02c7f977ef12e294b9f8c95375dfc6794cf6a587ae7eabb3b43cd7a4bb755\": rpc error: code = NotFound desc = could not find container \"e3c02c7f977ef12e294b9f8c95375dfc6794cf6a587ae7eabb3b43cd7a4bb755\": container with ID starting with e3c02c7f977ef12e294b9f8c95375dfc6794cf6a587ae7eabb3b43cd7a4bb755 not found: ID does not exist" Mar 12 21:25:36.631684 master-0 kubenswrapper[31456]: I0312 21:25:36.631584 31456 
scope.go:117] "RemoveContainer" containerID="9786d542ec77ccbe0ae779a57e603460b05761696ca09aa45bb28b4573fa50ea" Mar 12 21:25:36.632585 master-0 kubenswrapper[31456]: E0312 21:25:36.632549 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9786d542ec77ccbe0ae779a57e603460b05761696ca09aa45bb28b4573fa50ea\": container with ID starting with 9786d542ec77ccbe0ae779a57e603460b05761696ca09aa45bb28b4573fa50ea not found: ID does not exist" containerID="9786d542ec77ccbe0ae779a57e603460b05761696ca09aa45bb28b4573fa50ea" Mar 12 21:25:36.632644 master-0 kubenswrapper[31456]: I0312 21:25:36.632594 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9786d542ec77ccbe0ae779a57e603460b05761696ca09aa45bb28b4573fa50ea"} err="failed to get container status \"9786d542ec77ccbe0ae779a57e603460b05761696ca09aa45bb28b4573fa50ea\": rpc error: code = NotFound desc = could not find container \"9786d542ec77ccbe0ae779a57e603460b05761696ca09aa45bb28b4573fa50ea\": container with ID starting with 9786d542ec77ccbe0ae779a57e603460b05761696ca09aa45bb28b4573fa50ea not found: ID does not exist" Mar 12 21:25:36.687789 master-0 kubenswrapper[31456]: I0312 21:25:36.687731 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf8b865dc-2xtgl"] Mar 12 21:25:36.699867 master-0 kubenswrapper[31456]: I0312 21:25:36.699604 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf8b865dc-2xtgl"] Mar 12 21:25:36.849923 master-0 kubenswrapper[31456]: I0312 21:25:36.848180 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-b7rpf-config-98zc7"] Mar 12 21:25:36.863268 master-0 kubenswrapper[31456]: I0312 21:25:36.863116 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-b7rpf-config-98zc7"] Mar 12 21:25:37.202677 master-0 kubenswrapper[31456]: I0312 21:25:37.202625 
31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53aabeb1-168b-479a-aff0-b006d94a0650" path="/var/lib/kubelet/pods/53aabeb1-168b-479a-aff0-b006d94a0650/volumes" Mar 12 21:25:37.203392 master-0 kubenswrapper[31456]: I0312 21:25:37.203202 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9353def4-ea82-4589-9503-c32939b3ff21" path="/var/lib/kubelet/pods/9353def4-ea82-4589-9503-c32939b3ff21/volumes" Mar 12 21:25:37.524178 master-0 kubenswrapper[31456]: I0312 21:25:37.524104 31456 generic.go:334] "Generic (PLEG): container finished" podID="2f5b7eb2-f871-440e-889f-dd23a4a1e8ed" containerID="62f6b60066e2a983f4f53dd62f58c0e0b3609fcdcd8b19fd681c89d45293f605" exitCode=0 Mar 12 21:25:37.524178 master-0 kubenswrapper[31456]: I0312 21:25:37.524166 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8df6-account-create-update-cmvwn" event={"ID":"2f5b7eb2-f871-440e-889f-dd23a4a1e8ed","Type":"ContainerDied","Data":"62f6b60066e2a983f4f53dd62f58c0e0b3609fcdcd8b19fd681c89d45293f605"} Mar 12 21:25:37.527908 master-0 kubenswrapper[31456]: I0312 21:25:37.527206 31456 generic.go:334] "Generic (PLEG): container finished" podID="a01f2e87-21e3-433f-a65d-d6f66e6dd1f9" containerID="0a4625afa4a66eefb02168cff5c642c57587b055adc60d29d0140dde0ef67a31" exitCode=0 Mar 12 21:25:37.527908 master-0 kubenswrapper[31456]: I0312 21:25:37.527256 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-23e5-account-create-update-qlhcj" event={"ID":"a01f2e87-21e3-433f-a65d-d6f66e6dd1f9","Type":"ContainerDied","Data":"0a4625afa4a66eefb02168cff5c642c57587b055adc60d29d0140dde0ef67a31"} Mar 12 21:25:37.527908 master-0 kubenswrapper[31456]: I0312 21:25:37.527303 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-23e5-account-create-update-qlhcj" 
event={"ID":"a01f2e87-21e3-433f-a65d-d6f66e6dd1f9","Type":"ContainerStarted","Data":"f694d8ae694555d7429b07b890dffd7adb7ecb6d8e1e919baee61485e8e6236e"} Mar 12 21:25:37.530512 master-0 kubenswrapper[31456]: I0312 21:25:37.530472 31456 generic.go:334] "Generic (PLEG): container finished" podID="6c813ae4-0bfc-4a61-b602-9ce03baad036" containerID="636fa020d32ac292f4db5f9c08359c4143f5d1347d4c61e2b448491ab3aabc57" exitCode=0 Mar 12 21:25:37.530575 master-0 kubenswrapper[31456]: I0312 21:25:37.530559 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ssg44" event={"ID":"6c813ae4-0bfc-4a61-b602-9ce03baad036","Type":"ContainerDied","Data":"636fa020d32ac292f4db5f9c08359c4143f5d1347d4c61e2b448491ab3aabc57"} Mar 12 21:25:37.536783 master-0 kubenswrapper[31456]: I0312 21:25:37.536705 31456 generic.go:334] "Generic (PLEG): container finished" podID="0c05e4bb-1dfc-47d7-b9f0-0c2fc22c8b30" containerID="eba87c32798ea27e11c0f3cf772e678c9622bf0d7873bd044359cc9c807ec6d8" exitCode=0 Mar 12 21:25:37.536783 master-0 kubenswrapper[31456]: I0312 21:25:37.536758 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pjn56" event={"ID":"0c05e4bb-1dfc-47d7-b9f0-0c2fc22c8b30","Type":"ContainerDied","Data":"eba87c32798ea27e11c0f3cf772e678c9622bf0d7873bd044359cc9c807ec6d8"} Mar 12 21:25:41.497675 master-0 kubenswrapper[31456]: I0312 21:25:41.496543 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-pjn56" Mar 12 21:25:41.528458 master-0 kubenswrapper[31456]: I0312 21:25:41.528372 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c05e4bb-1dfc-47d7-b9f0-0c2fc22c8b30-operator-scripts\") pod \"0c05e4bb-1dfc-47d7-b9f0-0c2fc22c8b30\" (UID: \"0c05e4bb-1dfc-47d7-b9f0-0c2fc22c8b30\") " Mar 12 21:25:41.529041 master-0 kubenswrapper[31456]: I0312 21:25:41.528900 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c05e4bb-1dfc-47d7-b9f0-0c2fc22c8b30-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0c05e4bb-1dfc-47d7-b9f0-0c2fc22c8b30" (UID: "0c05e4bb-1dfc-47d7-b9f0-0c2fc22c8b30"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:25:41.529360 master-0 kubenswrapper[31456]: I0312 21:25:41.529279 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2c4hq\" (UniqueName: \"kubernetes.io/projected/0c05e4bb-1dfc-47d7-b9f0-0c2fc22c8b30-kube-api-access-2c4hq\") pod \"0c05e4bb-1dfc-47d7-b9f0-0c2fc22c8b30\" (UID: \"0c05e4bb-1dfc-47d7-b9f0-0c2fc22c8b30\") " Mar 12 21:25:41.530565 master-0 kubenswrapper[31456]: I0312 21:25:41.530510 31456 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c05e4bb-1dfc-47d7-b9f0-0c2fc22c8b30-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:41.558232 master-0 kubenswrapper[31456]: I0312 21:25:41.558129 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c05e4bb-1dfc-47d7-b9f0-0c2fc22c8b30-kube-api-access-2c4hq" (OuterVolumeSpecName: "kube-api-access-2c4hq") pod "0c05e4bb-1dfc-47d7-b9f0-0c2fc22c8b30" (UID: "0c05e4bb-1dfc-47d7-b9f0-0c2fc22c8b30"). InnerVolumeSpecName "kube-api-access-2c4hq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:25:41.591902 master-0 kubenswrapper[31456]: I0312 21:25:41.591835 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-23e5-account-create-update-qlhcj" event={"ID":"a01f2e87-21e3-433f-a65d-d6f66e6dd1f9","Type":"ContainerDied","Data":"f694d8ae694555d7429b07b890dffd7adb7ecb6d8e1e919baee61485e8e6236e"} Mar 12 21:25:41.591902 master-0 kubenswrapper[31456]: I0312 21:25:41.591902 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f694d8ae694555d7429b07b890dffd7adb7ecb6d8e1e919baee61485e8e6236e" Mar 12 21:25:41.593520 master-0 kubenswrapper[31456]: I0312 21:25:41.593457 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ssg44" event={"ID":"6c813ae4-0bfc-4a61-b602-9ce03baad036","Type":"ContainerDied","Data":"ea2a0ecad19d8fa9fc97c59772e5fea7555767dba5394dd26fe6a70f5c8853d5"} Mar 12 21:25:41.593520 master-0 kubenswrapper[31456]: I0312 21:25:41.593514 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea2a0ecad19d8fa9fc97c59772e5fea7555767dba5394dd26fe6a70f5c8853d5" Mar 12 21:25:41.595495 master-0 kubenswrapper[31456]: I0312 21:25:41.595446 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pjn56" event={"ID":"0c05e4bb-1dfc-47d7-b9f0-0c2fc22c8b30","Type":"ContainerDied","Data":"9623fbc4de7d5a0f7adbaf88cdb534a6af177a8020b867e2a20c6201bbfc2b9d"} Mar 12 21:25:41.595495 master-0 kubenswrapper[31456]: I0312 21:25:41.595486 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9623fbc4de7d5a0f7adbaf88cdb534a6af177a8020b867e2a20c6201bbfc2b9d" Mar 12 21:25:41.595636 master-0 kubenswrapper[31456]: I0312 21:25:41.595585 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-pjn56" Mar 12 21:25:41.609116 master-0 kubenswrapper[31456]: I0312 21:25:41.609055 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8df6-account-create-update-cmvwn" event={"ID":"2f5b7eb2-f871-440e-889f-dd23a4a1e8ed","Type":"ContainerDied","Data":"66191d8ff21f459370c53659efca5b85337d1b27cc95bd9a04a34c04f32121ae"} Mar 12 21:25:41.609116 master-0 kubenswrapper[31456]: I0312 21:25:41.609110 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66191d8ff21f459370c53659efca5b85337d1b27cc95bd9a04a34c04f32121ae" Mar 12 21:25:41.631863 master-0 kubenswrapper[31456]: I0312 21:25:41.631790 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2c4hq\" (UniqueName: \"kubernetes.io/projected/0c05e4bb-1dfc-47d7-b9f0-0c2fc22c8b30-kube-api-access-2c4hq\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:41.652780 master-0 kubenswrapper[31456]: I0312 21:25:41.645079 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-23e5-account-create-update-qlhcj" Mar 12 21:25:41.667466 master-0 kubenswrapper[31456]: I0312 21:25:41.667417 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ssg44" Mar 12 21:25:41.680616 master-0 kubenswrapper[31456]: I0312 21:25:41.680134 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8df6-account-create-update-cmvwn" Mar 12 21:25:41.741430 master-0 kubenswrapper[31456]: I0312 21:25:41.737905 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkv7n\" (UniqueName: \"kubernetes.io/projected/2f5b7eb2-f871-440e-889f-dd23a4a1e8ed-kube-api-access-rkv7n\") pod \"2f5b7eb2-f871-440e-889f-dd23a4a1e8ed\" (UID: \"2f5b7eb2-f871-440e-889f-dd23a4a1e8ed\") " Mar 12 21:25:41.741430 master-0 kubenswrapper[31456]: I0312 21:25:41.738094 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjm4h\" (UniqueName: \"kubernetes.io/projected/a01f2e87-21e3-433f-a65d-d6f66e6dd1f9-kube-api-access-wjm4h\") pod \"a01f2e87-21e3-433f-a65d-d6f66e6dd1f9\" (UID: \"a01f2e87-21e3-433f-a65d-d6f66e6dd1f9\") " Mar 12 21:25:41.741430 master-0 kubenswrapper[31456]: I0312 21:25:41.738200 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a01f2e87-21e3-433f-a65d-d6f66e6dd1f9-operator-scripts\") pod \"a01f2e87-21e3-433f-a65d-d6f66e6dd1f9\" (UID: \"a01f2e87-21e3-433f-a65d-d6f66e6dd1f9\") " Mar 12 21:25:41.741430 master-0 kubenswrapper[31456]: I0312 21:25:41.738273 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt8hc\" (UniqueName: \"kubernetes.io/projected/6c813ae4-0bfc-4a61-b602-9ce03baad036-kube-api-access-zt8hc\") pod \"6c813ae4-0bfc-4a61-b602-9ce03baad036\" (UID: \"6c813ae4-0bfc-4a61-b602-9ce03baad036\") " Mar 12 21:25:41.741430 master-0 kubenswrapper[31456]: I0312 21:25:41.738293 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c813ae4-0bfc-4a61-b602-9ce03baad036-operator-scripts\") pod \"6c813ae4-0bfc-4a61-b602-9ce03baad036\" (UID: \"6c813ae4-0bfc-4a61-b602-9ce03baad036\") " Mar 12 21:25:41.741430 
master-0 kubenswrapper[31456]: I0312 21:25:41.738324 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f5b7eb2-f871-440e-889f-dd23a4a1e8ed-operator-scripts\") pod \"2f5b7eb2-f871-440e-889f-dd23a4a1e8ed\" (UID: \"2f5b7eb2-f871-440e-889f-dd23a4a1e8ed\") " Mar 12 21:25:41.741430 master-0 kubenswrapper[31456]: I0312 21:25:41.739230 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f5b7eb2-f871-440e-889f-dd23a4a1e8ed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2f5b7eb2-f871-440e-889f-dd23a4a1e8ed" (UID: "2f5b7eb2-f871-440e-889f-dd23a4a1e8ed"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:25:41.741430 master-0 kubenswrapper[31456]: I0312 21:25:41.739526 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a01f2e87-21e3-433f-a65d-d6f66e6dd1f9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a01f2e87-21e3-433f-a65d-d6f66e6dd1f9" (UID: "a01f2e87-21e3-433f-a65d-d6f66e6dd1f9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:25:41.752543 master-0 kubenswrapper[31456]: I0312 21:25:41.752479 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a01f2e87-21e3-433f-a65d-d6f66e6dd1f9-kube-api-access-wjm4h" (OuterVolumeSpecName: "kube-api-access-wjm4h") pod "a01f2e87-21e3-433f-a65d-d6f66e6dd1f9" (UID: "a01f2e87-21e3-433f-a65d-d6f66e6dd1f9"). InnerVolumeSpecName "kube-api-access-wjm4h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:25:41.753086 master-0 kubenswrapper[31456]: I0312 21:25:41.753061 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c813ae4-0bfc-4a61-b602-9ce03baad036-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6c813ae4-0bfc-4a61-b602-9ce03baad036" (UID: "6c813ae4-0bfc-4a61-b602-9ce03baad036"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:25:41.759223 master-0 kubenswrapper[31456]: I0312 21:25:41.759143 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c813ae4-0bfc-4a61-b602-9ce03baad036-kube-api-access-zt8hc" (OuterVolumeSpecName: "kube-api-access-zt8hc") pod "6c813ae4-0bfc-4a61-b602-9ce03baad036" (UID: "6c813ae4-0bfc-4a61-b602-9ce03baad036"). InnerVolumeSpecName "kube-api-access-zt8hc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:25:41.767142 master-0 kubenswrapper[31456]: I0312 21:25:41.767085 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f5b7eb2-f871-440e-889f-dd23a4a1e8ed-kube-api-access-rkv7n" (OuterVolumeSpecName: "kube-api-access-rkv7n") pod "2f5b7eb2-f871-440e-889f-dd23a4a1e8ed" (UID: "2f5b7eb2-f871-440e-889f-dd23a4a1e8ed"). InnerVolumeSpecName "kube-api-access-rkv7n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:25:41.841444 master-0 kubenswrapper[31456]: I0312 21:25:41.841298 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjm4h\" (UniqueName: \"kubernetes.io/projected/a01f2e87-21e3-433f-a65d-d6f66e6dd1f9-kube-api-access-wjm4h\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:41.841444 master-0 kubenswrapper[31456]: I0312 21:25:41.841362 31456 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a01f2e87-21e3-433f-a65d-d6f66e6dd1f9-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:41.841444 master-0 kubenswrapper[31456]: I0312 21:25:41.841376 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt8hc\" (UniqueName: \"kubernetes.io/projected/6c813ae4-0bfc-4a61-b602-9ce03baad036-kube-api-access-zt8hc\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:41.841444 master-0 kubenswrapper[31456]: I0312 21:25:41.841389 31456 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c813ae4-0bfc-4a61-b602-9ce03baad036-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:41.841444 master-0 kubenswrapper[31456]: I0312 21:25:41.841404 31456 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f5b7eb2-f871-440e-889f-dd23a4a1e8ed-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:41.841444 master-0 kubenswrapper[31456]: I0312 21:25:41.841416 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkv7n\" (UniqueName: \"kubernetes.io/projected/2f5b7eb2-f871-440e-889f-dd23a4a1e8ed-kube-api-access-rkv7n\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:42.620831 master-0 kubenswrapper[31456]: I0312 21:25:42.620705 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-ssg44" Mar 12 21:25:42.622019 master-0 kubenswrapper[31456]: I0312 21:25:42.621976 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-fthjz" event={"ID":"eb0472a9-9d25-4efe-9032-c8afdc106678","Type":"ContainerStarted","Data":"f0f75010363ea1d1b63b0c48cf5b36b2d580f290ca8eb13143657336358bc9b9"} Mar 12 21:25:42.622107 master-0 kubenswrapper[31456]: I0312 21:25:42.622059 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-23e5-account-create-update-qlhcj" Mar 12 21:25:42.622263 master-0 kubenswrapper[31456]: I0312 21:25:42.622197 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8df6-account-create-update-cmvwn" Mar 12 21:25:42.654823 master-0 kubenswrapper[31456]: I0312 21:25:42.653415 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-fthjz" podStartSLOduration=9.57723804 podStartE2EDuration="14.653398669s" podCreationTimestamp="2026-03-12 21:25:28 +0000 UTC" firstStartedPulling="2026-03-12 21:25:36.418146717 +0000 UTC m=+997.492752055" lastFinishedPulling="2026-03-12 21:25:41.494307356 +0000 UTC m=+1002.568912684" observedRunningTime="2026-03-12 21:25:42.647217379 +0000 UTC m=+1003.721822717" watchObservedRunningTime="2026-03-12 21:25:42.653398669 +0000 UTC m=+1003.728003987" Mar 12 21:25:46.676984 master-0 kubenswrapper[31456]: I0312 21:25:46.676915 31456 generic.go:334] "Generic (PLEG): container finished" podID="eb0472a9-9d25-4efe-9032-c8afdc106678" containerID="f0f75010363ea1d1b63b0c48cf5b36b2d580f290ca8eb13143657336358bc9b9" exitCode=0 Mar 12 21:25:46.676984 master-0 kubenswrapper[31456]: I0312 21:25:46.676959 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-fthjz" 
event={"ID":"eb0472a9-9d25-4efe-9032-c8afdc106678","Type":"ContainerDied","Data":"f0f75010363ea1d1b63b0c48cf5b36b2d580f290ca8eb13143657336358bc9b9"} Mar 12 21:25:47.717126 master-0 kubenswrapper[31456]: I0312 21:25:47.713894 31456 generic.go:334] "Generic (PLEG): container finished" podID="6b67fa12-637c-4880-b717-d46e768d3112" containerID="368468d679847a729afcf36bc52d6c60a0d0d285bc39d3167abddab4b80592d6" exitCode=0 Mar 12 21:25:47.717126 master-0 kubenswrapper[31456]: I0312 21:25:47.714029 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qsh5p" event={"ID":"6b67fa12-637c-4880-b717-d46e768d3112","Type":"ContainerDied","Data":"368468d679847a729afcf36bc52d6c60a0d0d285bc39d3167abddab4b80592d6"} Mar 12 21:25:48.159580 master-0 kubenswrapper[31456]: I0312 21:25:48.159528 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-fthjz" Mar 12 21:25:48.296970 master-0 kubenswrapper[31456]: I0312 21:25:48.296817 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb0472a9-9d25-4efe-9032-c8afdc106678-combined-ca-bundle\") pod \"eb0472a9-9d25-4efe-9032-c8afdc106678\" (UID: \"eb0472a9-9d25-4efe-9032-c8afdc106678\") " Mar 12 21:25:48.297186 master-0 kubenswrapper[31456]: I0312 21:25:48.296999 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qlhc\" (UniqueName: \"kubernetes.io/projected/eb0472a9-9d25-4efe-9032-c8afdc106678-kube-api-access-6qlhc\") pod \"eb0472a9-9d25-4efe-9032-c8afdc106678\" (UID: \"eb0472a9-9d25-4efe-9032-c8afdc106678\") " Mar 12 21:25:48.297186 master-0 kubenswrapper[31456]: I0312 21:25:48.297126 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb0472a9-9d25-4efe-9032-c8afdc106678-config-data\") pod \"eb0472a9-9d25-4efe-9032-c8afdc106678\" 
(UID: \"eb0472a9-9d25-4efe-9032-c8afdc106678\") " Mar 12 21:25:48.304371 master-0 kubenswrapper[31456]: I0312 21:25:48.304303 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb0472a9-9d25-4efe-9032-c8afdc106678-kube-api-access-6qlhc" (OuterVolumeSpecName: "kube-api-access-6qlhc") pod "eb0472a9-9d25-4efe-9032-c8afdc106678" (UID: "eb0472a9-9d25-4efe-9032-c8afdc106678"). InnerVolumeSpecName "kube-api-access-6qlhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:25:48.327342 master-0 kubenswrapper[31456]: I0312 21:25:48.327270 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb0472a9-9d25-4efe-9032-c8afdc106678-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb0472a9-9d25-4efe-9032-c8afdc106678" (UID: "eb0472a9-9d25-4efe-9032-c8afdc106678"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:25:48.362926 master-0 kubenswrapper[31456]: I0312 21:25:48.362336 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb0472a9-9d25-4efe-9032-c8afdc106678-config-data" (OuterVolumeSpecName: "config-data") pod "eb0472a9-9d25-4efe-9032-c8afdc106678" (UID: "eb0472a9-9d25-4efe-9032-c8afdc106678"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:25:48.400779 master-0 kubenswrapper[31456]: I0312 21:25:48.400708 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qlhc\" (UniqueName: \"kubernetes.io/projected/eb0472a9-9d25-4efe-9032-c8afdc106678-kube-api-access-6qlhc\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:48.400779 master-0 kubenswrapper[31456]: I0312 21:25:48.400778 31456 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb0472a9-9d25-4efe-9032-c8afdc106678-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:48.401018 master-0 kubenswrapper[31456]: I0312 21:25:48.400800 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb0472a9-9d25-4efe-9032-c8afdc106678-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:48.736178 master-0 kubenswrapper[31456]: I0312 21:25:48.736105 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-fthjz" Mar 12 21:25:48.737076 master-0 kubenswrapper[31456]: I0312 21:25:48.736611 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-fthjz" event={"ID":"eb0472a9-9d25-4efe-9032-c8afdc106678","Type":"ContainerDied","Data":"c168d90b147d9e2fad82eefa2f01c41ac6f717acdb1ddded3ef64ebffc5e4bb3"} Mar 12 21:25:48.737076 master-0 kubenswrapper[31456]: I0312 21:25:48.736706 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c168d90b147d9e2fad82eefa2f01c41ac6f717acdb1ddded3ef64ebffc5e4bb3" Mar 12 21:25:49.504122 master-0 kubenswrapper[31456]: I0312 21:25:49.504069 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-qsh5p" Mar 12 21:25:49.641592 master-0 kubenswrapper[31456]: I0312 21:25:49.641532 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b67fa12-637c-4880-b717-d46e768d3112-combined-ca-bundle\") pod \"6b67fa12-637c-4880-b717-d46e768d3112\" (UID: \"6b67fa12-637c-4880-b717-d46e768d3112\") " Mar 12 21:25:49.641913 master-0 kubenswrapper[31456]: I0312 21:25:49.641891 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqqm7\" (UniqueName: \"kubernetes.io/projected/6b67fa12-637c-4880-b717-d46e768d3112-kube-api-access-xqqm7\") pod \"6b67fa12-637c-4880-b717-d46e768d3112\" (UID: \"6b67fa12-637c-4880-b717-d46e768d3112\") " Mar 12 21:25:49.642045 master-0 kubenswrapper[31456]: I0312 21:25:49.642028 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b67fa12-637c-4880-b717-d46e768d3112-config-data\") pod \"6b67fa12-637c-4880-b717-d46e768d3112\" (UID: \"6b67fa12-637c-4880-b717-d46e768d3112\") " Mar 12 21:25:49.642386 master-0 kubenswrapper[31456]: I0312 21:25:49.642372 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6b67fa12-637c-4880-b717-d46e768d3112-db-sync-config-data\") pod \"6b67fa12-637c-4880-b717-d46e768d3112\" (UID: \"6b67fa12-637c-4880-b717-d46e768d3112\") " Mar 12 21:25:49.645025 master-0 kubenswrapper[31456]: I0312 21:25:49.644889 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b67fa12-637c-4880-b717-d46e768d3112-kube-api-access-xqqm7" (OuterVolumeSpecName: "kube-api-access-xqqm7") pod "6b67fa12-637c-4880-b717-d46e768d3112" (UID: "6b67fa12-637c-4880-b717-d46e768d3112"). InnerVolumeSpecName "kube-api-access-xqqm7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:25:49.645589 master-0 kubenswrapper[31456]: I0312 21:25:49.645534 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b67fa12-637c-4880-b717-d46e768d3112-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "6b67fa12-637c-4880-b717-d46e768d3112" (UID: "6b67fa12-637c-4880-b717-d46e768d3112"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:25:49.675683 master-0 kubenswrapper[31456]: I0312 21:25:49.675622 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b67fa12-637c-4880-b717-d46e768d3112-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b67fa12-637c-4880-b717-d46e768d3112" (UID: "6b67fa12-637c-4880-b717-d46e768d3112"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:25:49.719856 master-0 kubenswrapper[31456]: I0312 21:25:49.719786 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b67fa12-637c-4880-b717-d46e768d3112-config-data" (OuterVolumeSpecName: "config-data") pod "6b67fa12-637c-4880-b717-d46e768d3112" (UID: "6b67fa12-637c-4880-b717-d46e768d3112"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:25:49.747531 master-0 kubenswrapper[31456]: I0312 21:25:49.746005 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqqm7\" (UniqueName: \"kubernetes.io/projected/6b67fa12-637c-4880-b717-d46e768d3112-kube-api-access-xqqm7\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:49.747531 master-0 kubenswrapper[31456]: I0312 21:25:49.746055 31456 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b67fa12-637c-4880-b717-d46e768d3112-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:49.747531 master-0 kubenswrapper[31456]: I0312 21:25:49.746066 31456 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6b67fa12-637c-4880-b717-d46e768d3112-db-sync-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:49.747531 master-0 kubenswrapper[31456]: I0312 21:25:49.746074 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b67fa12-637c-4880-b717-d46e768d3112-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:49.779194 master-0 kubenswrapper[31456]: I0312 21:25:49.778982 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qsh5p" event={"ID":"6b67fa12-637c-4880-b717-d46e768d3112","Type":"ContainerDied","Data":"67a909ced2bcec97d6e28ae6fbc96e19fe0e95d1d6236b0523414116e47b75c6"} Mar 12 21:25:49.779194 master-0 kubenswrapper[31456]: I0312 21:25:49.779063 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67a909ced2bcec97d6e28ae6fbc96e19fe0e95d1d6236b0523414116e47b75c6" Mar 12 21:25:49.779194 master-0 kubenswrapper[31456]: I0312 21:25:49.779135 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-qsh5p" Mar 12 21:25:50.185492 master-0 kubenswrapper[31456]: I0312 21:25:50.185416 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-6p46b"] Mar 12 21:25:50.186611 master-0 kubenswrapper[31456]: E0312 21:25:50.186575 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c813ae4-0bfc-4a61-b602-9ce03baad036" containerName="mariadb-database-create" Mar 12 21:25:50.186915 master-0 kubenswrapper[31456]: I0312 21:25:50.186782 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c813ae4-0bfc-4a61-b602-9ce03baad036" containerName="mariadb-database-create" Mar 12 21:25:50.187106 master-0 kubenswrapper[31456]: E0312 21:25:50.187079 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f5b7eb2-f871-440e-889f-dd23a4a1e8ed" containerName="mariadb-account-create-update" Mar 12 21:25:50.187251 master-0 kubenswrapper[31456]: I0312 21:25:50.187228 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f5b7eb2-f871-440e-889f-dd23a4a1e8ed" containerName="mariadb-account-create-update" Mar 12 21:25:50.187421 master-0 kubenswrapper[31456]: E0312 21:25:50.187397 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c05e4bb-1dfc-47d7-b9f0-0c2fc22c8b30" containerName="mariadb-database-create" Mar 12 21:25:50.187559 master-0 kubenswrapper[31456]: I0312 21:25:50.187536 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c05e4bb-1dfc-47d7-b9f0-0c2fc22c8b30" containerName="mariadb-database-create" Mar 12 21:25:50.187715 master-0 kubenswrapper[31456]: E0312 21:25:50.187693 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9353def4-ea82-4589-9503-c32939b3ff21" containerName="init" Mar 12 21:25:50.187898 master-0 kubenswrapper[31456]: I0312 21:25:50.187873 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="9353def4-ea82-4589-9503-c32939b3ff21" containerName="init" Mar 12 21:25:50.188061 
master-0 kubenswrapper[31456]: E0312 21:25:50.188035 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b67fa12-637c-4880-b717-d46e768d3112" containerName="glance-db-sync" Mar 12 21:25:50.188199 master-0 kubenswrapper[31456]: I0312 21:25:50.188177 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b67fa12-637c-4880-b717-d46e768d3112" containerName="glance-db-sync" Mar 12 21:25:50.188377 master-0 kubenswrapper[31456]: E0312 21:25:50.188354 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb0472a9-9d25-4efe-9032-c8afdc106678" containerName="keystone-db-sync" Mar 12 21:25:50.188515 master-0 kubenswrapper[31456]: I0312 21:25:50.188492 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb0472a9-9d25-4efe-9032-c8afdc106678" containerName="keystone-db-sync" Mar 12 21:25:50.188674 master-0 kubenswrapper[31456]: E0312 21:25:50.188650 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53aabeb1-168b-479a-aff0-b006d94a0650" containerName="ovn-config" Mar 12 21:25:50.188855 master-0 kubenswrapper[31456]: I0312 21:25:50.188830 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="53aabeb1-168b-479a-aff0-b006d94a0650" containerName="ovn-config" Mar 12 21:25:50.189035 master-0 kubenswrapper[31456]: E0312 21:25:50.189012 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a01f2e87-21e3-433f-a65d-d6f66e6dd1f9" containerName="mariadb-account-create-update" Mar 12 21:25:50.189163 master-0 kubenswrapper[31456]: I0312 21:25:50.189141 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="a01f2e87-21e3-433f-a65d-d6f66e6dd1f9" containerName="mariadb-account-create-update" Mar 12 21:25:50.189309 master-0 kubenswrapper[31456]: E0312 21:25:50.189286 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9353def4-ea82-4589-9503-c32939b3ff21" containerName="dnsmasq-dns" Mar 12 21:25:50.189449 master-0 kubenswrapper[31456]: I0312 21:25:50.189427 31456 
state_mem.go:107] "Deleted CPUSet assignment" podUID="9353def4-ea82-4589-9503-c32939b3ff21" containerName="dnsmasq-dns" Mar 12 21:25:50.190096 master-0 kubenswrapper[31456]: I0312 21:25:50.190061 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="a01f2e87-21e3-433f-a65d-d6f66e6dd1f9" containerName="mariadb-account-create-update" Mar 12 21:25:50.190279 master-0 kubenswrapper[31456]: I0312 21:25:50.190255 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb0472a9-9d25-4efe-9032-c8afdc106678" containerName="keystone-db-sync" Mar 12 21:25:50.190501 master-0 kubenswrapper[31456]: I0312 21:25:50.190474 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f5b7eb2-f871-440e-889f-dd23a4a1e8ed" containerName="mariadb-account-create-update" Mar 12 21:25:50.190692 master-0 kubenswrapper[31456]: I0312 21:25:50.190666 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="53aabeb1-168b-479a-aff0-b006d94a0650" containerName="ovn-config" Mar 12 21:25:50.190946 master-0 kubenswrapper[31456]: I0312 21:25:50.190910 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="9353def4-ea82-4589-9503-c32939b3ff21" containerName="dnsmasq-dns" Mar 12 21:25:50.191122 master-0 kubenswrapper[31456]: I0312 21:25:50.191098 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c05e4bb-1dfc-47d7-b9f0-0c2fc22c8b30" containerName="mariadb-database-create" Mar 12 21:25:50.191292 master-0 kubenswrapper[31456]: I0312 21:25:50.191269 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c813ae4-0bfc-4a61-b602-9ce03baad036" containerName="mariadb-database-create" Mar 12 21:25:50.191478 master-0 kubenswrapper[31456]: I0312 21:25:50.191454 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b67fa12-637c-4880-b717-d46e768d3112" containerName="glance-db-sync" Mar 12 21:25:50.192801 master-0 kubenswrapper[31456]: I0312 21:25:50.192765 31456 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6p46b" Mar 12 21:25:50.202624 master-0 kubenswrapper[31456]: I0312 21:25:50.202571 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 12 21:25:50.202986 master-0 kubenswrapper[31456]: I0312 21:25:50.202938 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 12 21:25:50.203111 master-0 kubenswrapper[31456]: I0312 21:25:50.202586 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 12 21:25:50.203214 master-0 kubenswrapper[31456]: I0312 21:25:50.202698 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 12 21:25:50.356283 master-0 kubenswrapper[31456]: I0312 21:25:50.356203 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5-scripts\") pod \"keystone-bootstrap-6p46b\" (UID: \"7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5\") " pod="openstack/keystone-bootstrap-6p46b" Mar 12 21:25:50.356283 master-0 kubenswrapper[31456]: I0312 21:25:50.356290 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5-config-data\") pod \"keystone-bootstrap-6p46b\" (UID: \"7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5\") " pod="openstack/keystone-bootstrap-6p46b" Mar 12 21:25:50.356590 master-0 kubenswrapper[31456]: I0312 21:25:50.356367 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5-fernet-keys\") pod \"keystone-bootstrap-6p46b\" (UID: \"7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5\") " pod="openstack/keystone-bootstrap-6p46b" Mar 12 
21:25:50.356590 master-0 kubenswrapper[31456]: I0312 21:25:50.356406 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5-credential-keys\") pod \"keystone-bootstrap-6p46b\" (UID: \"7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5\") " pod="openstack/keystone-bootstrap-6p46b" Mar 12 21:25:50.356590 master-0 kubenswrapper[31456]: I0312 21:25:50.356477 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxk2x\" (UniqueName: \"kubernetes.io/projected/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5-kube-api-access-gxk2x\") pod \"keystone-bootstrap-6p46b\" (UID: \"7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5\") " pod="openstack/keystone-bootstrap-6p46b" Mar 12 21:25:50.356718 master-0 kubenswrapper[31456]: I0312 21:25:50.356641 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5-combined-ca-bundle\") pod \"keystone-bootstrap-6p46b\" (UID: \"7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5\") " pod="openstack/keystone-bootstrap-6p46b" Mar 12 21:25:50.460742 master-0 kubenswrapper[31456]: I0312 21:25:50.458738 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5-combined-ca-bundle\") pod \"keystone-bootstrap-6p46b\" (UID: \"7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5\") " pod="openstack/keystone-bootstrap-6p46b" Mar 12 21:25:50.460742 master-0 kubenswrapper[31456]: I0312 21:25:50.458866 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5-scripts\") pod \"keystone-bootstrap-6p46b\" (UID: \"7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5\") " 
pod="openstack/keystone-bootstrap-6p46b" Mar 12 21:25:50.460742 master-0 kubenswrapper[31456]: I0312 21:25:50.458905 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5-config-data\") pod \"keystone-bootstrap-6p46b\" (UID: \"7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5\") " pod="openstack/keystone-bootstrap-6p46b" Mar 12 21:25:50.460742 master-0 kubenswrapper[31456]: I0312 21:25:50.458974 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5-fernet-keys\") pod \"keystone-bootstrap-6p46b\" (UID: \"7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5\") " pod="openstack/keystone-bootstrap-6p46b" Mar 12 21:25:50.460742 master-0 kubenswrapper[31456]: I0312 21:25:50.459006 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5-credential-keys\") pod \"keystone-bootstrap-6p46b\" (UID: \"7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5\") " pod="openstack/keystone-bootstrap-6p46b" Mar 12 21:25:50.460742 master-0 kubenswrapper[31456]: I0312 21:25:50.459055 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxk2x\" (UniqueName: \"kubernetes.io/projected/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5-kube-api-access-gxk2x\") pod \"keystone-bootstrap-6p46b\" (UID: \"7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5\") " pod="openstack/keystone-bootstrap-6p46b" Mar 12 21:25:50.467957 master-0 kubenswrapper[31456]: I0312 21:25:50.463968 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5-combined-ca-bundle\") pod \"keystone-bootstrap-6p46b\" (UID: \"7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5\") " 
pod="openstack/keystone-bootstrap-6p46b" Mar 12 21:25:50.467957 master-0 kubenswrapper[31456]: I0312 21:25:50.464647 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-868b5796f7-9rqnq"] Mar 12 21:25:50.467957 master-0 kubenswrapper[31456]: I0312 21:25:50.466370 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5-config-data\") pod \"keystone-bootstrap-6p46b\" (UID: \"7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5\") " pod="openstack/keystone-bootstrap-6p46b" Mar 12 21:25:50.467957 master-0 kubenswrapper[31456]: I0312 21:25:50.467130 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5-fernet-keys\") pod \"keystone-bootstrap-6p46b\" (UID: \"7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5\") " pod="openstack/keystone-bootstrap-6p46b" Mar 12 21:25:50.482558 master-0 kubenswrapper[31456]: I0312 21:25:50.470265 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5-credential-keys\") pod \"keystone-bootstrap-6p46b\" (UID: \"7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5\") " pod="openstack/keystone-bootstrap-6p46b" Mar 12 21:25:50.482558 master-0 kubenswrapper[31456]: I0312 21:25:50.476664 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-868b5796f7-9rqnq" Mar 12 21:25:50.489663 master-0 kubenswrapper[31456]: I0312 21:25:50.487735 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5-scripts\") pod \"keystone-bootstrap-6p46b\" (UID: \"7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5\") " pod="openstack/keystone-bootstrap-6p46b" Mar 12 21:25:50.494134 master-0 kubenswrapper[31456]: I0312 21:25:50.493937 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6p46b"] Mar 12 21:25:50.561428 master-0 kubenswrapper[31456]: I0312 21:25:50.560675 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgn6q\" (UniqueName: \"kubernetes.io/projected/4392b001-e025-49ef-8123-160f9e536da3-kube-api-access-kgn6q\") pod \"dnsmasq-dns-868b5796f7-9rqnq\" (UID: \"4392b001-e025-49ef-8123-160f9e536da3\") " pod="openstack/dnsmasq-dns-868b5796f7-9rqnq" Mar 12 21:25:50.561428 master-0 kubenswrapper[31456]: I0312 21:25:50.560841 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4392b001-e025-49ef-8123-160f9e536da3-config\") pod \"dnsmasq-dns-868b5796f7-9rqnq\" (UID: \"4392b001-e025-49ef-8123-160f9e536da3\") " pod="openstack/dnsmasq-dns-868b5796f7-9rqnq" Mar 12 21:25:50.561428 master-0 kubenswrapper[31456]: I0312 21:25:50.560904 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4392b001-e025-49ef-8123-160f9e536da3-dns-svc\") pod \"dnsmasq-dns-868b5796f7-9rqnq\" (UID: \"4392b001-e025-49ef-8123-160f9e536da3\") " pod="openstack/dnsmasq-dns-868b5796f7-9rqnq" Mar 12 21:25:50.561428 master-0 kubenswrapper[31456]: I0312 21:25:50.560965 31456 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4392b001-e025-49ef-8123-160f9e536da3-ovsdbserver-sb\") pod \"dnsmasq-dns-868b5796f7-9rqnq\" (UID: \"4392b001-e025-49ef-8123-160f9e536da3\") " pod="openstack/dnsmasq-dns-868b5796f7-9rqnq" Mar 12 21:25:50.561428 master-0 kubenswrapper[31456]: I0312 21:25:50.560993 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4392b001-e025-49ef-8123-160f9e536da3-dns-swift-storage-0\") pod \"dnsmasq-dns-868b5796f7-9rqnq\" (UID: \"4392b001-e025-49ef-8123-160f9e536da3\") " pod="openstack/dnsmasq-dns-868b5796f7-9rqnq" Mar 12 21:25:50.561428 master-0 kubenswrapper[31456]: I0312 21:25:50.561144 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4392b001-e025-49ef-8123-160f9e536da3-ovsdbserver-nb\") pod \"dnsmasq-dns-868b5796f7-9rqnq\" (UID: \"4392b001-e025-49ef-8123-160f9e536da3\") " pod="openstack/dnsmasq-dns-868b5796f7-9rqnq" Mar 12 21:25:50.632839 master-0 kubenswrapper[31456]: I0312 21:25:50.624315 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxk2x\" (UniqueName: \"kubernetes.io/projected/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5-kube-api-access-gxk2x\") pod \"keystone-bootstrap-6p46b\" (UID: \"7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5\") " pod="openstack/keystone-bootstrap-6p46b" Mar 12 21:25:50.681601 master-0 kubenswrapper[31456]: I0312 21:25:50.665233 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-868b5796f7-9rqnq"] Mar 12 21:25:50.691676 master-0 kubenswrapper[31456]: I0312 21:25:50.687317 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/4392b001-e025-49ef-8123-160f9e536da3-ovsdbserver-nb\") pod \"dnsmasq-dns-868b5796f7-9rqnq\" (UID: \"4392b001-e025-49ef-8123-160f9e536da3\") " pod="openstack/dnsmasq-dns-868b5796f7-9rqnq" Mar 12 21:25:50.692538 master-0 kubenswrapper[31456]: I0312 21:25:50.692510 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgn6q\" (UniqueName: \"kubernetes.io/projected/4392b001-e025-49ef-8123-160f9e536da3-kube-api-access-kgn6q\") pod \"dnsmasq-dns-868b5796f7-9rqnq\" (UID: \"4392b001-e025-49ef-8123-160f9e536da3\") " pod="openstack/dnsmasq-dns-868b5796f7-9rqnq" Mar 12 21:25:50.692693 master-0 kubenswrapper[31456]: I0312 21:25:50.692676 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4392b001-e025-49ef-8123-160f9e536da3-config\") pod \"dnsmasq-dns-868b5796f7-9rqnq\" (UID: \"4392b001-e025-49ef-8123-160f9e536da3\") " pod="openstack/dnsmasq-dns-868b5796f7-9rqnq" Mar 12 21:25:50.692826 master-0 kubenswrapper[31456]: I0312 21:25:50.692796 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4392b001-e025-49ef-8123-160f9e536da3-dns-svc\") pod \"dnsmasq-dns-868b5796f7-9rqnq\" (UID: \"4392b001-e025-49ef-8123-160f9e536da3\") " pod="openstack/dnsmasq-dns-868b5796f7-9rqnq" Mar 12 21:25:50.692945 master-0 kubenswrapper[31456]: I0312 21:25:50.692932 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4392b001-e025-49ef-8123-160f9e536da3-ovsdbserver-sb\") pod \"dnsmasq-dns-868b5796f7-9rqnq\" (UID: \"4392b001-e025-49ef-8123-160f9e536da3\") " pod="openstack/dnsmasq-dns-868b5796f7-9rqnq" Mar 12 21:25:50.693048 master-0 kubenswrapper[31456]: I0312 21:25:50.693036 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/4392b001-e025-49ef-8123-160f9e536da3-dns-swift-storage-0\") pod \"dnsmasq-dns-868b5796f7-9rqnq\" (UID: \"4392b001-e025-49ef-8123-160f9e536da3\") " pod="openstack/dnsmasq-dns-868b5796f7-9rqnq" Mar 12 21:25:50.693421 master-0 kubenswrapper[31456]: I0312 21:25:50.693347 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4392b001-e025-49ef-8123-160f9e536da3-ovsdbserver-nb\") pod \"dnsmasq-dns-868b5796f7-9rqnq\" (UID: \"4392b001-e025-49ef-8123-160f9e536da3\") " pod="openstack/dnsmasq-dns-868b5796f7-9rqnq" Mar 12 21:25:50.694168 master-0 kubenswrapper[31456]: I0312 21:25:50.694124 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4392b001-e025-49ef-8123-160f9e536da3-config\") pod \"dnsmasq-dns-868b5796f7-9rqnq\" (UID: \"4392b001-e025-49ef-8123-160f9e536da3\") " pod="openstack/dnsmasq-dns-868b5796f7-9rqnq" Mar 12 21:25:50.694406 master-0 kubenswrapper[31456]: I0312 21:25:50.694374 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4392b001-e025-49ef-8123-160f9e536da3-dns-svc\") pod \"dnsmasq-dns-868b5796f7-9rqnq\" (UID: \"4392b001-e025-49ef-8123-160f9e536da3\") " pod="openstack/dnsmasq-dns-868b5796f7-9rqnq" Mar 12 21:25:50.694581 master-0 kubenswrapper[31456]: I0312 21:25:50.694563 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4392b001-e025-49ef-8123-160f9e536da3-dns-swift-storage-0\") pod \"dnsmasq-dns-868b5796f7-9rqnq\" (UID: \"4392b001-e025-49ef-8123-160f9e536da3\") " pod="openstack/dnsmasq-dns-868b5796f7-9rqnq" Mar 12 21:25:50.694774 master-0 kubenswrapper[31456]: I0312 21:25:50.694745 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/4392b001-e025-49ef-8123-160f9e536da3-ovsdbserver-sb\") pod \"dnsmasq-dns-868b5796f7-9rqnq\" (UID: \"4392b001-e025-49ef-8123-160f9e536da3\") " pod="openstack/dnsmasq-dns-868b5796f7-9rqnq" Mar 12 21:25:50.822443 master-0 kubenswrapper[31456]: I0312 21:25:50.822380 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6p46b" Mar 12 21:25:50.869396 master-0 kubenswrapper[31456]: I0312 21:25:50.869318 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgn6q\" (UniqueName: \"kubernetes.io/projected/4392b001-e025-49ef-8123-160f9e536da3-kube-api-access-kgn6q\") pod \"dnsmasq-dns-868b5796f7-9rqnq\" (UID: \"4392b001-e025-49ef-8123-160f9e536da3\") " pod="openstack/dnsmasq-dns-868b5796f7-9rqnq" Mar 12 21:25:50.871449 master-0 kubenswrapper[31456]: I0312 21:25:50.871394 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-868b5796f7-9rqnq" Mar 12 21:25:50.970306 master-0 kubenswrapper[31456]: I0312 21:25:50.967330 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-create-tbph7"] Mar 12 21:25:50.970306 master-0 kubenswrapper[31456]: I0312 21:25:50.969027 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-create-tbph7" Mar 12 21:25:51.104208 master-0 kubenswrapper[31456]: I0312 21:25:51.104062 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c569c591-2b26-40b5-b7d0-139ad6d98ea3-operator-scripts\") pod \"ironic-db-create-tbph7\" (UID: \"c569c591-2b26-40b5-b7d0-139ad6d98ea3\") " pod="openstack/ironic-db-create-tbph7" Mar 12 21:25:51.104208 master-0 kubenswrapper[31456]: I0312 21:25:51.104181 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9jlr\" (UniqueName: \"kubernetes.io/projected/c569c591-2b26-40b5-b7d0-139ad6d98ea3-kube-api-access-q9jlr\") pod \"ironic-db-create-tbph7\" (UID: \"c569c591-2b26-40b5-b7d0-139ad6d98ea3\") " pod="openstack/ironic-db-create-tbph7" Mar 12 21:25:51.126535 master-0 kubenswrapper[31456]: I0312 21:25:51.126453 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-tbph7"] Mar 12 21:25:51.207450 master-0 kubenswrapper[31456]: I0312 21:25:51.206987 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9jlr\" (UniqueName: \"kubernetes.io/projected/c569c591-2b26-40b5-b7d0-139ad6d98ea3-kube-api-access-q9jlr\") pod \"ironic-db-create-tbph7\" (UID: \"c569c591-2b26-40b5-b7d0-139ad6d98ea3\") " pod="openstack/ironic-db-create-tbph7" Mar 12 21:25:51.207450 master-0 kubenswrapper[31456]: I0312 21:25:51.207150 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c569c591-2b26-40b5-b7d0-139ad6d98ea3-operator-scripts\") pod \"ironic-db-create-tbph7\" (UID: \"c569c591-2b26-40b5-b7d0-139ad6d98ea3\") " pod="openstack/ironic-db-create-tbph7" Mar 12 21:25:51.207918 master-0 kubenswrapper[31456]: I0312 21:25:51.207885 31456 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c569c591-2b26-40b5-b7d0-139ad6d98ea3-operator-scripts\") pod \"ironic-db-create-tbph7\" (UID: \"c569c591-2b26-40b5-b7d0-139ad6d98ea3\") " pod="openstack/ironic-db-create-tbph7" Mar 12 21:25:51.381947 master-0 kubenswrapper[31456]: I0312 21:25:51.381459 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-7fa7f-db-sync-v8z2w"] Mar 12 21:25:51.382834 master-0 kubenswrapper[31456]: I0312 21:25:51.382814 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7fa7f-db-sync-v8z2w" Mar 12 21:25:51.387148 master-0 kubenswrapper[31456]: I0312 21:25:51.384886 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-7fa7f-config-data" Mar 12 21:25:51.387148 master-0 kubenswrapper[31456]: I0312 21:25:51.385242 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-7fa7f-scripts" Mar 12 21:25:51.399121 master-0 kubenswrapper[31456]: I0312 21:25:51.398755 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-qs8v4"] Mar 12 21:25:51.401066 master-0 kubenswrapper[31456]: I0312 21:25:51.401035 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-qs8v4" Mar 12 21:25:51.402934 master-0 kubenswrapper[31456]: I0312 21:25:51.402905 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 12 21:25:51.403312 master-0 kubenswrapper[31456]: I0312 21:25:51.403289 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 12 21:25:51.521832 master-0 kubenswrapper[31456]: I0312 21:25:51.520010 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdf62a30-2c59-4043-99d7-b51fe604f823-combined-ca-bundle\") pod \"neutron-db-sync-qs8v4\" (UID: \"fdf62a30-2c59-4043-99d7-b51fe604f823\") " pod="openstack/neutron-db-sync-qs8v4" Mar 12 21:25:51.521832 master-0 kubenswrapper[31456]: I0312 21:25:51.520084 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvcdn\" (UniqueName: \"kubernetes.io/projected/fdf62a30-2c59-4043-99d7-b51fe604f823-kube-api-access-kvcdn\") pod \"neutron-db-sync-qs8v4\" (UID: \"fdf62a30-2c59-4043-99d7-b51fe604f823\") " pod="openstack/neutron-db-sync-qs8v4" Mar 12 21:25:51.521832 master-0 kubenswrapper[31456]: I0312 21:25:51.520133 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b9d5522-06bf-4e0e-a3fc-ab594cb040a1-config-data\") pod \"cinder-7fa7f-db-sync-v8z2w\" (UID: \"9b9d5522-06bf-4e0e-a3fc-ab594cb040a1\") " pod="openstack/cinder-7fa7f-db-sync-v8z2w" Mar 12 21:25:51.521832 master-0 kubenswrapper[31456]: I0312 21:25:51.520193 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b9d5522-06bf-4e0e-a3fc-ab594cb040a1-combined-ca-bundle\") pod \"cinder-7fa7f-db-sync-v8z2w\" (UID: 
\"9b9d5522-06bf-4e0e-a3fc-ab594cb040a1\") " pod="openstack/cinder-7fa7f-db-sync-v8z2w" Mar 12 21:25:51.521832 master-0 kubenswrapper[31456]: I0312 21:25:51.520258 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b9d5522-06bf-4e0e-a3fc-ab594cb040a1-db-sync-config-data\") pod \"cinder-7fa7f-db-sync-v8z2w\" (UID: \"9b9d5522-06bf-4e0e-a3fc-ab594cb040a1\") " pod="openstack/cinder-7fa7f-db-sync-v8z2w" Mar 12 21:25:51.521832 master-0 kubenswrapper[31456]: I0312 21:25:51.520303 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fdf62a30-2c59-4043-99d7-b51fe604f823-config\") pod \"neutron-db-sync-qs8v4\" (UID: \"fdf62a30-2c59-4043-99d7-b51fe604f823\") " pod="openstack/neutron-db-sync-qs8v4" Mar 12 21:25:51.521832 master-0 kubenswrapper[31456]: I0312 21:25:51.520325 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8psh\" (UniqueName: \"kubernetes.io/projected/9b9d5522-06bf-4e0e-a3fc-ab594cb040a1-kube-api-access-l8psh\") pod \"cinder-7fa7f-db-sync-v8z2w\" (UID: \"9b9d5522-06bf-4e0e-a3fc-ab594cb040a1\") " pod="openstack/cinder-7fa7f-db-sync-v8z2w" Mar 12 21:25:51.521832 master-0 kubenswrapper[31456]: I0312 21:25:51.520386 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b9d5522-06bf-4e0e-a3fc-ab594cb040a1-etc-machine-id\") pod \"cinder-7fa7f-db-sync-v8z2w\" (UID: \"9b9d5522-06bf-4e0e-a3fc-ab594cb040a1\") " pod="openstack/cinder-7fa7f-db-sync-v8z2w" Mar 12 21:25:51.521832 master-0 kubenswrapper[31456]: I0312 21:25:51.520416 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/9b9d5522-06bf-4e0e-a3fc-ab594cb040a1-scripts\") pod \"cinder-7fa7f-db-sync-v8z2w\" (UID: \"9b9d5522-06bf-4e0e-a3fc-ab594cb040a1\") " pod="openstack/cinder-7fa7f-db-sync-v8z2w" Mar 12 21:25:51.533826 master-0 kubenswrapper[31456]: I0312 21:25:51.532317 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7fa7f-db-sync-v8z2w"] Mar 12 21:25:51.536887 master-0 kubenswrapper[31456]: I0312 21:25:51.536510 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9jlr\" (UniqueName: \"kubernetes.io/projected/c569c591-2b26-40b5-b7d0-139ad6d98ea3-kube-api-access-q9jlr\") pod \"ironic-db-create-tbph7\" (UID: \"c569c591-2b26-40b5-b7d0-139ad6d98ea3\") " pod="openstack/ironic-db-create-tbph7" Mar 12 21:25:51.598888 master-0 kubenswrapper[31456]: I0312 21:25:51.597465 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-tbph7" Mar 12 21:25:51.624684 master-0 kubenswrapper[31456]: I0312 21:25:51.622732 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b9d5522-06bf-4e0e-a3fc-ab594cb040a1-db-sync-config-data\") pod \"cinder-7fa7f-db-sync-v8z2w\" (UID: \"9b9d5522-06bf-4e0e-a3fc-ab594cb040a1\") " pod="openstack/cinder-7fa7f-db-sync-v8z2w" Mar 12 21:25:51.624684 master-0 kubenswrapper[31456]: I0312 21:25:51.622789 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fdf62a30-2c59-4043-99d7-b51fe604f823-config\") pod \"neutron-db-sync-qs8v4\" (UID: \"fdf62a30-2c59-4043-99d7-b51fe604f823\") " pod="openstack/neutron-db-sync-qs8v4" Mar 12 21:25:51.624684 master-0 kubenswrapper[31456]: I0312 21:25:51.622828 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8psh\" (UniqueName: 
\"kubernetes.io/projected/9b9d5522-06bf-4e0e-a3fc-ab594cb040a1-kube-api-access-l8psh\") pod \"cinder-7fa7f-db-sync-v8z2w\" (UID: \"9b9d5522-06bf-4e0e-a3fc-ab594cb040a1\") " pod="openstack/cinder-7fa7f-db-sync-v8z2w" Mar 12 21:25:51.624684 master-0 kubenswrapper[31456]: I0312 21:25:51.622884 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b9d5522-06bf-4e0e-a3fc-ab594cb040a1-etc-machine-id\") pod \"cinder-7fa7f-db-sync-v8z2w\" (UID: \"9b9d5522-06bf-4e0e-a3fc-ab594cb040a1\") " pod="openstack/cinder-7fa7f-db-sync-v8z2w" Mar 12 21:25:51.624684 master-0 kubenswrapper[31456]: I0312 21:25:51.622941 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b9d5522-06bf-4e0e-a3fc-ab594cb040a1-scripts\") pod \"cinder-7fa7f-db-sync-v8z2w\" (UID: \"9b9d5522-06bf-4e0e-a3fc-ab594cb040a1\") " pod="openstack/cinder-7fa7f-db-sync-v8z2w" Mar 12 21:25:51.624684 master-0 kubenswrapper[31456]: I0312 21:25:51.622983 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdf62a30-2c59-4043-99d7-b51fe604f823-combined-ca-bundle\") pod \"neutron-db-sync-qs8v4\" (UID: \"fdf62a30-2c59-4043-99d7-b51fe604f823\") " pod="openstack/neutron-db-sync-qs8v4" Mar 12 21:25:51.624684 master-0 kubenswrapper[31456]: I0312 21:25:51.623010 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvcdn\" (UniqueName: \"kubernetes.io/projected/fdf62a30-2c59-4043-99d7-b51fe604f823-kube-api-access-kvcdn\") pod \"neutron-db-sync-qs8v4\" (UID: \"fdf62a30-2c59-4043-99d7-b51fe604f823\") " pod="openstack/neutron-db-sync-qs8v4" Mar 12 21:25:51.624684 master-0 kubenswrapper[31456]: I0312 21:25:51.623038 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9b9d5522-06bf-4e0e-a3fc-ab594cb040a1-config-data\") pod \"cinder-7fa7f-db-sync-v8z2w\" (UID: \"9b9d5522-06bf-4e0e-a3fc-ab594cb040a1\") " pod="openstack/cinder-7fa7f-db-sync-v8z2w" Mar 12 21:25:51.624684 master-0 kubenswrapper[31456]: I0312 21:25:51.623090 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b9d5522-06bf-4e0e-a3fc-ab594cb040a1-combined-ca-bundle\") pod \"cinder-7fa7f-db-sync-v8z2w\" (UID: \"9b9d5522-06bf-4e0e-a3fc-ab594cb040a1\") " pod="openstack/cinder-7fa7f-db-sync-v8z2w" Mar 12 21:25:51.626842 master-0 kubenswrapper[31456]: I0312 21:25:51.626797 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b9d5522-06bf-4e0e-a3fc-ab594cb040a1-combined-ca-bundle\") pod \"cinder-7fa7f-db-sync-v8z2w\" (UID: \"9b9d5522-06bf-4e0e-a3fc-ab594cb040a1\") " pod="openstack/cinder-7fa7f-db-sync-v8z2w" Mar 12 21:25:51.627077 master-0 kubenswrapper[31456]: I0312 21:25:51.627024 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qs8v4"] Mar 12 21:25:51.633717 master-0 kubenswrapper[31456]: I0312 21:25:51.633606 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6p46b"] Mar 12 21:25:51.635587 master-0 kubenswrapper[31456]: I0312 21:25:51.635089 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdf62a30-2c59-4043-99d7-b51fe604f823-combined-ca-bundle\") pod \"neutron-db-sync-qs8v4\" (UID: \"fdf62a30-2c59-4043-99d7-b51fe604f823\") " pod="openstack/neutron-db-sync-qs8v4" Mar 12 21:25:51.637452 master-0 kubenswrapper[31456]: I0312 21:25:51.637366 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b9d5522-06bf-4e0e-a3fc-ab594cb040a1-scripts\") pod 
\"cinder-7fa7f-db-sync-v8z2w\" (UID: \"9b9d5522-06bf-4e0e-a3fc-ab594cb040a1\") " pod="openstack/cinder-7fa7f-db-sync-v8z2w" Mar 12 21:25:51.638059 master-0 kubenswrapper[31456]: I0312 21:25:51.637913 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b9d5522-06bf-4e0e-a3fc-ab594cb040a1-etc-machine-id\") pod \"cinder-7fa7f-db-sync-v8z2w\" (UID: \"9b9d5522-06bf-4e0e-a3fc-ab594cb040a1\") " pod="openstack/cinder-7fa7f-db-sync-v8z2w" Mar 12 21:25:51.638059 master-0 kubenswrapper[31456]: I0312 21:25:51.637966 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b9d5522-06bf-4e0e-a3fc-ab594cb040a1-db-sync-config-data\") pod \"cinder-7fa7f-db-sync-v8z2w\" (UID: \"9b9d5522-06bf-4e0e-a3fc-ab594cb040a1\") " pod="openstack/cinder-7fa7f-db-sync-v8z2w" Mar 12 21:25:51.645691 master-0 kubenswrapper[31456]: I0312 21:25:51.645631 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fdf62a30-2c59-4043-99d7-b51fe604f823-config\") pod \"neutron-db-sync-qs8v4\" (UID: \"fdf62a30-2c59-4043-99d7-b51fe604f823\") " pod="openstack/neutron-db-sync-qs8v4" Mar 12 21:25:51.647552 master-0 kubenswrapper[31456]: I0312 21:25:51.647514 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b9d5522-06bf-4e0e-a3fc-ab594cb040a1-config-data\") pod \"cinder-7fa7f-db-sync-v8z2w\" (UID: \"9b9d5522-06bf-4e0e-a3fc-ab594cb040a1\") " pod="openstack/cinder-7fa7f-db-sync-v8z2w" Mar 12 21:25:51.663364 master-0 kubenswrapper[31456]: I0312 21:25:51.663313 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-868b5796f7-9rqnq"] Mar 12 21:25:51.792772 master-0 kubenswrapper[31456]: I0312 21:25:51.791947 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kvcdn\" (UniqueName: \"kubernetes.io/projected/fdf62a30-2c59-4043-99d7-b51fe604f823-kube-api-access-kvcdn\") pod \"neutron-db-sync-qs8v4\" (UID: \"fdf62a30-2c59-4043-99d7-b51fe604f823\") " pod="openstack/neutron-db-sync-qs8v4"
Mar 12 21:25:51.798734 master-0 kubenswrapper[31456]: I0312 21:25:51.794614 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8psh\" (UniqueName: \"kubernetes.io/projected/9b9d5522-06bf-4e0e-a3fc-ab594cb040a1-kube-api-access-l8psh\") pod \"cinder-7fa7f-db-sync-v8z2w\" (UID: \"9b9d5522-06bf-4e0e-a3fc-ab594cb040a1\") " pod="openstack/cinder-7fa7f-db-sync-v8z2w"
Mar 12 21:25:51.801342 master-0 kubenswrapper[31456]: I0312 21:25:51.801265 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-31cc-account-create-update-pzkcd"]
Mar 12 21:25:51.810276 master-0 kubenswrapper[31456]: I0312 21:25:51.803094 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-31cc-account-create-update-pzkcd"
Mar 12 21:25:51.839481 master-0 kubenswrapper[31456]: I0312 21:25:51.836526 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-db-secret"
Mar 12 21:25:51.861899 master-0 kubenswrapper[31456]: I0312 21:25:51.857917 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6p46b" event={"ID":"7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5","Type":"ContainerStarted","Data":"487f463d99d9e4b11c45f9fb3ea4f09a66e08ed05017b7b7f1cd8fecdfca52c9"}
Mar 12 21:25:51.878878 master-0 kubenswrapper[31456]: I0312 21:25:51.869471 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-868b5796f7-9rqnq" event={"ID":"4392b001-e025-49ef-8123-160f9e536da3","Type":"ContainerStarted","Data":"44f522b21bba397a2de6211fb0a81af7350ad342f7a94e70200159ea9238a791"}
Mar 12 21:25:51.889590 master-0 kubenswrapper[31456]: I0312 21:25:51.889326 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-31cc-account-create-update-pzkcd"]
Mar 12 21:25:51.939379 master-0 kubenswrapper[31456]: I0312 21:25:51.939055 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sn7v\" (UniqueName: \"kubernetes.io/projected/dd24a59e-fd16-4b56-acb2-3129dab7977a-kube-api-access-2sn7v\") pod \"ironic-31cc-account-create-update-pzkcd\" (UID: \"dd24a59e-fd16-4b56-acb2-3129dab7977a\") " pod="openstack/ironic-31cc-account-create-update-pzkcd"
Mar 12 21:25:51.939379 master-0 kubenswrapper[31456]: I0312 21:25:51.939130 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd24a59e-fd16-4b56-acb2-3129dab7977a-operator-scripts\") pod \"ironic-31cc-account-create-update-pzkcd\" (UID: \"dd24a59e-fd16-4b56-acb2-3129dab7977a\") " pod="openstack/ironic-31cc-account-create-update-pzkcd"
Mar 12 21:25:52.002830 master-0 kubenswrapper[31456]: I0312 21:25:52.002692 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7fa7f-db-sync-v8z2w"
Mar 12 21:25:52.008244 master-0 kubenswrapper[31456]: I0312 21:25:52.006655 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-stkxt"]
Mar 12 21:25:52.008551 master-0 kubenswrapper[31456]: I0312 21:25:52.008474 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-stkxt"
Mar 12 21:25:52.019332 master-0 kubenswrapper[31456]: I0312 21:25:52.019142 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Mar 12 21:25:52.019685 master-0 kubenswrapper[31456]: I0312 21:25:52.019629 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Mar 12 21:25:52.019749 master-0 kubenswrapper[31456]: I0312 21:25:52.019717 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qs8v4"
Mar 12 21:25:52.042768 master-0 kubenswrapper[31456]: I0312 21:25:52.040982 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sn7v\" (UniqueName: \"kubernetes.io/projected/dd24a59e-fd16-4b56-acb2-3129dab7977a-kube-api-access-2sn7v\") pod \"ironic-31cc-account-create-update-pzkcd\" (UID: \"dd24a59e-fd16-4b56-acb2-3129dab7977a\") " pod="openstack/ironic-31cc-account-create-update-pzkcd"
Mar 12 21:25:52.042768 master-0 kubenswrapper[31456]: I0312 21:25:52.041050 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd24a59e-fd16-4b56-acb2-3129dab7977a-operator-scripts\") pod \"ironic-31cc-account-create-update-pzkcd\" (UID: \"dd24a59e-fd16-4b56-acb2-3129dab7977a\") " pod="openstack/ironic-31cc-account-create-update-pzkcd"
Mar 12 21:25:52.042768 master-0 kubenswrapper[31456]: I0312 21:25:52.041800 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd24a59e-fd16-4b56-acb2-3129dab7977a-operator-scripts\") pod \"ironic-31cc-account-create-update-pzkcd\" (UID: \"dd24a59e-fd16-4b56-acb2-3129dab7977a\") " pod="openstack/ironic-31cc-account-create-update-pzkcd"
Mar 12 21:25:52.050072 master-0 kubenswrapper[31456]: I0312 21:25:52.049900 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-stkxt"]
Mar 12 21:25:52.074452 master-0 kubenswrapper[31456]: I0312 21:25:52.074398 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sn7v\" (UniqueName: \"kubernetes.io/projected/dd24a59e-fd16-4b56-acb2-3129dab7977a-kube-api-access-2sn7v\") pod \"ironic-31cc-account-create-update-pzkcd\" (UID: \"dd24a59e-fd16-4b56-acb2-3129dab7977a\") " pod="openstack/ironic-31cc-account-create-update-pzkcd"
Mar 12 21:25:52.095400 master-0 kubenswrapper[31456]: I0312 21:25:52.094721 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-868b5796f7-9rqnq"]
Mar 12 21:25:52.143332 master-0 kubenswrapper[31456]: I0312 21:25:52.142637 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk4zl\" (UniqueName: \"kubernetes.io/projected/b466beef-2d58-41e2-b8cf-8090ab10be4e-kube-api-access-vk4zl\") pod \"placement-db-sync-stkxt\" (UID: \"b466beef-2d58-41e2-b8cf-8090ab10be4e\") " pod="openstack/placement-db-sync-stkxt"
Mar 12 21:25:52.143332 master-0 kubenswrapper[31456]: I0312 21:25:52.142912 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b466beef-2d58-41e2-b8cf-8090ab10be4e-combined-ca-bundle\") pod \"placement-db-sync-stkxt\" (UID: \"b466beef-2d58-41e2-b8cf-8090ab10be4e\") " pod="openstack/placement-db-sync-stkxt"
Mar 12 21:25:52.143332 master-0 kubenswrapper[31456]: I0312 21:25:52.142977 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b466beef-2d58-41e2-b8cf-8090ab10be4e-scripts\") pod \"placement-db-sync-stkxt\" (UID: \"b466beef-2d58-41e2-b8cf-8090ab10be4e\") " pod="openstack/placement-db-sync-stkxt"
Mar 12 21:25:52.143332 master-0 kubenswrapper[31456]: I0312 21:25:52.143058 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b466beef-2d58-41e2-b8cf-8090ab10be4e-config-data\") pod \"placement-db-sync-stkxt\" (UID: \"b466beef-2d58-41e2-b8cf-8090ab10be4e\") " pod="openstack/placement-db-sync-stkxt"
Mar 12 21:25:52.143332 master-0 kubenswrapper[31456]: I0312 21:25:52.143093 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b466beef-2d58-41e2-b8cf-8090ab10be4e-logs\") pod \"placement-db-sync-stkxt\" (UID: \"b466beef-2d58-41e2-b8cf-8090ab10be4e\") " pod="openstack/placement-db-sync-stkxt"
Mar 12 21:25:52.163009 master-0 kubenswrapper[31456]: I0312 21:25:52.162208 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-31cc-account-create-update-pzkcd"
Mar 12 21:25:52.243834 master-0 kubenswrapper[31456]: I0312 21:25:52.229985 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8489b8449-72jp4"]
Mar 12 21:25:52.243834 master-0 kubenswrapper[31456]: I0312 21:25:52.237170 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8489b8449-72jp4"
Mar 12 21:25:52.261182 master-0 kubenswrapper[31456]: I0312 21:25:52.252485 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b466beef-2d58-41e2-b8cf-8090ab10be4e-config-data\") pod \"placement-db-sync-stkxt\" (UID: \"b466beef-2d58-41e2-b8cf-8090ab10be4e\") " pod="openstack/placement-db-sync-stkxt"
Mar 12 21:25:52.261182 master-0 kubenswrapper[31456]: I0312 21:25:52.252547 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b466beef-2d58-41e2-b8cf-8090ab10be4e-logs\") pod \"placement-db-sync-stkxt\" (UID: \"b466beef-2d58-41e2-b8cf-8090ab10be4e\") " pod="openstack/placement-db-sync-stkxt"
Mar 12 21:25:52.261182 master-0 kubenswrapper[31456]: I0312 21:25:52.252746 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk4zl\" (UniqueName: \"kubernetes.io/projected/b466beef-2d58-41e2-b8cf-8090ab10be4e-kube-api-access-vk4zl\") pod \"placement-db-sync-stkxt\" (UID: \"b466beef-2d58-41e2-b8cf-8090ab10be4e\") " pod="openstack/placement-db-sync-stkxt"
Mar 12 21:25:52.261182 master-0 kubenswrapper[31456]: I0312 21:25:52.252782 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b466beef-2d58-41e2-b8cf-8090ab10be4e-combined-ca-bundle\") pod \"placement-db-sync-stkxt\" (UID: \"b466beef-2d58-41e2-b8cf-8090ab10be4e\") " pod="openstack/placement-db-sync-stkxt"
Mar 12 21:25:52.261182 master-0 kubenswrapper[31456]: I0312 21:25:52.252818 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b466beef-2d58-41e2-b8cf-8090ab10be4e-scripts\") pod \"placement-db-sync-stkxt\" (UID: \"b466beef-2d58-41e2-b8cf-8090ab10be4e\") " pod="openstack/placement-db-sync-stkxt"
Mar 12 21:25:52.261182 master-0 kubenswrapper[31456]: I0312 21:25:52.255742 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b466beef-2d58-41e2-b8cf-8090ab10be4e-logs\") pod \"placement-db-sync-stkxt\" (UID: \"b466beef-2d58-41e2-b8cf-8090ab10be4e\") " pod="openstack/placement-db-sync-stkxt"
Mar 12 21:25:52.279384 master-0 kubenswrapper[31456]: I0312 21:25:52.261985 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b466beef-2d58-41e2-b8cf-8090ab10be4e-scripts\") pod \"placement-db-sync-stkxt\" (UID: \"b466beef-2d58-41e2-b8cf-8090ab10be4e\") " pod="openstack/placement-db-sync-stkxt"
Mar 12 21:25:52.279384 master-0 kubenswrapper[31456]: I0312 21:25:52.278307 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8489b8449-72jp4"]
Mar 12 21:25:52.279384 master-0 kubenswrapper[31456]: I0312 21:25:52.278508 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b466beef-2d58-41e2-b8cf-8090ab10be4e-combined-ca-bundle\") pod \"placement-db-sync-stkxt\" (UID: \"b466beef-2d58-41e2-b8cf-8090ab10be4e\") " pod="openstack/placement-db-sync-stkxt"
Mar 12 21:25:52.302174 master-0 kubenswrapper[31456]: I0312 21:25:52.297592 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b466beef-2d58-41e2-b8cf-8090ab10be4e-config-data\") pod \"placement-db-sync-stkxt\" (UID: \"b466beef-2d58-41e2-b8cf-8090ab10be4e\") " pod="openstack/placement-db-sync-stkxt"
Mar 12 21:25:52.332844 master-0 kubenswrapper[31456]: I0312 21:25:52.331971 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk4zl\" (UniqueName: \"kubernetes.io/projected/b466beef-2d58-41e2-b8cf-8090ab10be4e-kube-api-access-vk4zl\") pod \"placement-db-sync-stkxt\" (UID: \"b466beef-2d58-41e2-b8cf-8090ab10be4e\") " pod="openstack/placement-db-sync-stkxt"
Mar 12 21:25:52.369872 master-0 kubenswrapper[31456]: I0312 21:25:52.369227 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8489b8449-72jp4"]
Mar 12 21:25:52.370059 master-0 kubenswrapper[31456]: E0312 21:25:52.370029 31456 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 kube-api-access-q45cl ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-8489b8449-72jp4" podUID="7e4d47e6-8dce-4f45-bcff-c57b548bb699"
Mar 12 21:25:52.376234 master-0 kubenswrapper[31456]: I0312 21:25:52.376100 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e4d47e6-8dce-4f45-bcff-c57b548bb699-ovsdbserver-sb\") pod \"dnsmasq-dns-8489b8449-72jp4\" (UID: \"7e4d47e6-8dce-4f45-bcff-c57b548bb699\") " pod="openstack/dnsmasq-dns-8489b8449-72jp4"
Mar 12 21:25:52.376234 master-0 kubenswrapper[31456]: I0312 21:25:52.376165 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e4d47e6-8dce-4f45-bcff-c57b548bb699-config\") pod \"dnsmasq-dns-8489b8449-72jp4\" (UID: \"7e4d47e6-8dce-4f45-bcff-c57b548bb699\") " pod="openstack/dnsmasq-dns-8489b8449-72jp4"
Mar 12 21:25:52.376466 master-0 kubenswrapper[31456]: I0312 21:25:52.376271 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q45cl\" (UniqueName: \"kubernetes.io/projected/7e4d47e6-8dce-4f45-bcff-c57b548bb699-kube-api-access-q45cl\") pod \"dnsmasq-dns-8489b8449-72jp4\" (UID: \"7e4d47e6-8dce-4f45-bcff-c57b548bb699\") " pod="openstack/dnsmasq-dns-8489b8449-72jp4"
Mar 12 21:25:52.376466 master-0 kubenswrapper[31456]: I0312 21:25:52.376293 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e4d47e6-8dce-4f45-bcff-c57b548bb699-dns-svc\") pod \"dnsmasq-dns-8489b8449-72jp4\" (UID: \"7e4d47e6-8dce-4f45-bcff-c57b548bb699\") " pod="openstack/dnsmasq-dns-8489b8449-72jp4"
Mar 12 21:25:52.376466 master-0 kubenswrapper[31456]: I0312 21:25:52.376337 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e4d47e6-8dce-4f45-bcff-c57b548bb699-ovsdbserver-nb\") pod \"dnsmasq-dns-8489b8449-72jp4\" (UID: \"7e4d47e6-8dce-4f45-bcff-c57b548bb699\") " pod="openstack/dnsmasq-dns-8489b8449-72jp4"
Mar 12 21:25:52.376466 master-0 kubenswrapper[31456]: I0312 21:25:52.376372 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7e4d47e6-8dce-4f45-bcff-c57b548bb699-dns-swift-storage-0\") pod \"dnsmasq-dns-8489b8449-72jp4\" (UID: \"7e4d47e6-8dce-4f45-bcff-c57b548bb699\") " pod="openstack/dnsmasq-dns-8489b8449-72jp4"
Mar 12 21:25:52.399210 master-0 kubenswrapper[31456]: I0312 21:25:52.399145 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-stkxt"
Mar 12 21:25:52.434319 master-0 kubenswrapper[31456]: I0312 21:25:52.434229 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fb965499f-tgbww"]
Mar 12 21:25:52.436440 master-0 kubenswrapper[31456]: I0312 21:25:52.436078 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fb965499f-tgbww"
Mar 12 21:25:52.444432 master-0 kubenswrapper[31456]: I0312 21:25:52.444131 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fb965499f-tgbww"]
Mar 12 21:25:52.467866 master-0 kubenswrapper[31456]: I0312 21:25:52.467214 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-tbph7"]
Mar 12 21:25:52.481874 master-0 kubenswrapper[31456]: I0312 21:25:52.478573 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q45cl\" (UniqueName: \"kubernetes.io/projected/7e4d47e6-8dce-4f45-bcff-c57b548bb699-kube-api-access-q45cl\") pod \"dnsmasq-dns-8489b8449-72jp4\" (UID: \"7e4d47e6-8dce-4f45-bcff-c57b548bb699\") " pod="openstack/dnsmasq-dns-8489b8449-72jp4"
Mar 12 21:25:52.481874 master-0 kubenswrapper[31456]: I0312 21:25:52.478632 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e4d47e6-8dce-4f45-bcff-c57b548bb699-dns-svc\") pod \"dnsmasq-dns-8489b8449-72jp4\" (UID: \"7e4d47e6-8dce-4f45-bcff-c57b548bb699\") " pod="openstack/dnsmasq-dns-8489b8449-72jp4"
Mar 12 21:25:52.481874 master-0 kubenswrapper[31456]: I0312 21:25:52.478674 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e4d47e6-8dce-4f45-bcff-c57b548bb699-ovsdbserver-nb\") pod \"dnsmasq-dns-8489b8449-72jp4\" (UID: \"7e4d47e6-8dce-4f45-bcff-c57b548bb699\") " pod="openstack/dnsmasq-dns-8489b8449-72jp4"
Mar 12 21:25:52.481874 master-0 kubenswrapper[31456]: I0312 21:25:52.478709 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7e4d47e6-8dce-4f45-bcff-c57b548bb699-dns-swift-storage-0\") pod \"dnsmasq-dns-8489b8449-72jp4\" (UID: \"7e4d47e6-8dce-4f45-bcff-c57b548bb699\") " pod="openstack/dnsmasq-dns-8489b8449-72jp4"
Mar 12 21:25:52.481874 master-0 kubenswrapper[31456]: I0312 21:25:52.478782 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e4d47e6-8dce-4f45-bcff-c57b548bb699-ovsdbserver-sb\") pod \"dnsmasq-dns-8489b8449-72jp4\" (UID: \"7e4d47e6-8dce-4f45-bcff-c57b548bb699\") " pod="openstack/dnsmasq-dns-8489b8449-72jp4"
Mar 12 21:25:52.481874 master-0 kubenswrapper[31456]: I0312 21:25:52.478800 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e4d47e6-8dce-4f45-bcff-c57b548bb699-config\") pod \"dnsmasq-dns-8489b8449-72jp4\" (UID: \"7e4d47e6-8dce-4f45-bcff-c57b548bb699\") " pod="openstack/dnsmasq-dns-8489b8449-72jp4"
Mar 12 21:25:52.481874 master-0 kubenswrapper[31456]: I0312 21:25:52.479662 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e4d47e6-8dce-4f45-bcff-c57b548bb699-config\") pod \"dnsmasq-dns-8489b8449-72jp4\" (UID: \"7e4d47e6-8dce-4f45-bcff-c57b548bb699\") " pod="openstack/dnsmasq-dns-8489b8449-72jp4"
Mar 12 21:25:52.485447 master-0 kubenswrapper[31456]: I0312 21:25:52.485217 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e4d47e6-8dce-4f45-bcff-c57b548bb699-dns-svc\") pod \"dnsmasq-dns-8489b8449-72jp4\" (UID: \"7e4d47e6-8dce-4f45-bcff-c57b548bb699\") " pod="openstack/dnsmasq-dns-8489b8449-72jp4"
Mar 12 21:25:52.485907 master-0 kubenswrapper[31456]: I0312 21:25:52.485758 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e4d47e6-8dce-4f45-bcff-c57b548bb699-ovsdbserver-nb\") pod \"dnsmasq-dns-8489b8449-72jp4\" (UID: \"7e4d47e6-8dce-4f45-bcff-c57b548bb699\") " pod="openstack/dnsmasq-dns-8489b8449-72jp4"
Mar 12 21:25:52.486354 master-0 kubenswrapper[31456]: I0312 21:25:52.486295 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7e4d47e6-8dce-4f45-bcff-c57b548bb699-dns-swift-storage-0\") pod \"dnsmasq-dns-8489b8449-72jp4\" (UID: \"7e4d47e6-8dce-4f45-bcff-c57b548bb699\") " pod="openstack/dnsmasq-dns-8489b8449-72jp4"
Mar 12 21:25:52.489999 master-0 kubenswrapper[31456]: I0312 21:25:52.486881 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e4d47e6-8dce-4f45-bcff-c57b548bb699-ovsdbserver-sb\") pod \"dnsmasq-dns-8489b8449-72jp4\" (UID: \"7e4d47e6-8dce-4f45-bcff-c57b548bb699\") " pod="openstack/dnsmasq-dns-8489b8449-72jp4"
Mar 12 21:25:52.570218 master-0 kubenswrapper[31456]: I0312 21:25:52.566502 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q45cl\" (UniqueName: \"kubernetes.io/projected/7e4d47e6-8dce-4f45-bcff-c57b548bb699-kube-api-access-q45cl\") pod \"dnsmasq-dns-8489b8449-72jp4\" (UID: \"7e4d47e6-8dce-4f45-bcff-c57b548bb699\") " pod="openstack/dnsmasq-dns-8489b8449-72jp4"
Mar 12 21:25:52.603243 master-0 kubenswrapper[31456]: I0312 21:25:52.603161 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p52vc\" (UniqueName: \"kubernetes.io/projected/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-kube-api-access-p52vc\") pod \"dnsmasq-dns-7fb965499f-tgbww\" (UID: \"dfcccd02-54d3-4d3c-ab23-4a94d72774b2\") " pod="openstack/dnsmasq-dns-7fb965499f-tgbww"
Mar 12 21:25:52.603635 master-0 kubenswrapper[31456]: I0312 21:25:52.603249 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-ovsdbserver-nb\") pod \"dnsmasq-dns-7fb965499f-tgbww\" (UID: \"dfcccd02-54d3-4d3c-ab23-4a94d72774b2\") " pod="openstack/dnsmasq-dns-7fb965499f-tgbww"
Mar 12 21:25:52.603635 master-0 kubenswrapper[31456]: I0312 21:25:52.603289 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-config\") pod \"dnsmasq-dns-7fb965499f-tgbww\" (UID: \"dfcccd02-54d3-4d3c-ab23-4a94d72774b2\") " pod="openstack/dnsmasq-dns-7fb965499f-tgbww"
Mar 12 21:25:52.603635 master-0 kubenswrapper[31456]: I0312 21:25:52.603326 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-dns-swift-storage-0\") pod \"dnsmasq-dns-7fb965499f-tgbww\" (UID: \"dfcccd02-54d3-4d3c-ab23-4a94d72774b2\") " pod="openstack/dnsmasq-dns-7fb965499f-tgbww"
Mar 12 21:25:52.603635 master-0 kubenswrapper[31456]: I0312 21:25:52.603362 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-ovsdbserver-sb\") pod \"dnsmasq-dns-7fb965499f-tgbww\" (UID: \"dfcccd02-54d3-4d3c-ab23-4a94d72774b2\") " pod="openstack/dnsmasq-dns-7fb965499f-tgbww"
Mar 12 21:25:52.603921 master-0 kubenswrapper[31456]: I0312 21:25:52.603657 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-dns-svc\") pod \"dnsmasq-dns-7fb965499f-tgbww\" (UID: \"dfcccd02-54d3-4d3c-ab23-4a94d72774b2\") " pod="openstack/dnsmasq-dns-7fb965499f-tgbww"
Mar 12 21:25:52.709551 master-0 kubenswrapper[31456]: I0312 21:25:52.709297 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-dns-svc\") pod \"dnsmasq-dns-7fb965499f-tgbww\" (UID: \"dfcccd02-54d3-4d3c-ab23-4a94d72774b2\") " pod="openstack/dnsmasq-dns-7fb965499f-tgbww"
Mar 12 21:25:52.744890 master-0 kubenswrapper[31456]: I0312 21:25:52.744778 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-dns-svc\") pod \"dnsmasq-dns-7fb965499f-tgbww\" (UID: \"dfcccd02-54d3-4d3c-ab23-4a94d72774b2\") " pod="openstack/dnsmasq-dns-7fb965499f-tgbww"
Mar 12 21:25:52.745102 master-0 kubenswrapper[31456]: I0312 21:25:52.745021 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p52vc\" (UniqueName: \"kubernetes.io/projected/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-kube-api-access-p52vc\") pod \"dnsmasq-dns-7fb965499f-tgbww\" (UID: \"dfcccd02-54d3-4d3c-ab23-4a94d72774b2\") " pod="openstack/dnsmasq-dns-7fb965499f-tgbww"
Mar 12 21:25:52.745166 master-0 kubenswrapper[31456]: I0312 21:25:52.745130 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-ovsdbserver-nb\") pod \"dnsmasq-dns-7fb965499f-tgbww\" (UID: \"dfcccd02-54d3-4d3c-ab23-4a94d72774b2\") " pod="openstack/dnsmasq-dns-7fb965499f-tgbww"
Mar 12 21:25:52.745206 master-0 kubenswrapper[31456]: I0312 21:25:52.745192 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-config\") pod \"dnsmasq-dns-7fb965499f-tgbww\" (UID: \"dfcccd02-54d3-4d3c-ab23-4a94d72774b2\") " pod="openstack/dnsmasq-dns-7fb965499f-tgbww"
Mar 12 21:25:52.746063 master-0 kubenswrapper[31456]: I0312 21:25:52.745265 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-dns-swift-storage-0\") pod \"dnsmasq-dns-7fb965499f-tgbww\" (UID: \"dfcccd02-54d3-4d3c-ab23-4a94d72774b2\") " pod="openstack/dnsmasq-dns-7fb965499f-tgbww"
Mar 12 21:25:52.746063 master-0 kubenswrapper[31456]: I0312 21:25:52.745347 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-ovsdbserver-sb\") pod \"dnsmasq-dns-7fb965499f-tgbww\" (UID: \"dfcccd02-54d3-4d3c-ab23-4a94d72774b2\") " pod="openstack/dnsmasq-dns-7fb965499f-tgbww"
Mar 12 21:25:52.747062 master-0 kubenswrapper[31456]: I0312 21:25:52.746192 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-ovsdbserver-sb\") pod \"dnsmasq-dns-7fb965499f-tgbww\" (UID: \"dfcccd02-54d3-4d3c-ab23-4a94d72774b2\") " pod="openstack/dnsmasq-dns-7fb965499f-tgbww"
Mar 12 21:25:52.747062 master-0 kubenswrapper[31456]: I0312 21:25:52.746989 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-ovsdbserver-nb\") pod \"dnsmasq-dns-7fb965499f-tgbww\" (UID: \"dfcccd02-54d3-4d3c-ab23-4a94d72774b2\") " pod="openstack/dnsmasq-dns-7fb965499f-tgbww"
Mar 12 21:25:52.754507 master-0 kubenswrapper[31456]: I0312 21:25:52.751841 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-dns-swift-storage-0\") pod \"dnsmasq-dns-7fb965499f-tgbww\" (UID: \"dfcccd02-54d3-4d3c-ab23-4a94d72774b2\") " pod="openstack/dnsmasq-dns-7fb965499f-tgbww"
Mar 12 21:25:52.789742 master-0 kubenswrapper[31456]: I0312 21:25:52.789500 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-config\") pod \"dnsmasq-dns-7fb965499f-tgbww\" (UID: \"dfcccd02-54d3-4d3c-ab23-4a94d72774b2\") " pod="openstack/dnsmasq-dns-7fb965499f-tgbww"
Mar 12 21:25:52.816904 master-0 kubenswrapper[31456]: I0312 21:25:52.816487 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p52vc\" (UniqueName: \"kubernetes.io/projected/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-kube-api-access-p52vc\") pod \"dnsmasq-dns-7fb965499f-tgbww\" (UID: \"dfcccd02-54d3-4d3c-ab23-4a94d72774b2\") " pod="openstack/dnsmasq-dns-7fb965499f-tgbww"
Mar 12 21:25:52.934183 master-0 kubenswrapper[31456]: I0312 21:25:52.934057 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-tbph7" event={"ID":"c569c591-2b26-40b5-b7d0-139ad6d98ea3","Type":"ContainerStarted","Data":"27b368c645927c2511341d2d5ff02af032c001bd6162be3be407a69f2fb02895"}
Mar 12 21:25:52.999488 master-0 kubenswrapper[31456]: I0312 21:25:52.999273 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6p46b" event={"ID":"7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5","Type":"ContainerStarted","Data":"43908fb2f48712b220851bfeca566a58603e81c2cc16fc84de8b762f83d42080"}
Mar 12 21:25:53.006563 master-0 kubenswrapper[31456]: I0312 21:25:53.004758 31456 generic.go:334] "Generic (PLEG): container finished" podID="4392b001-e025-49ef-8123-160f9e536da3" containerID="3739719f7c210ae4f90dc03aa66a1f670827da638e8e561d3e266b796dc94f9e" exitCode=0
Mar 12 21:25:53.006563 master-0 kubenswrapper[31456]: I0312 21:25:53.004855 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8489b8449-72jp4"
Mar 12 21:25:53.006563 master-0 kubenswrapper[31456]: I0312 21:25:53.005581 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-868b5796f7-9rqnq" event={"ID":"4392b001-e025-49ef-8123-160f9e536da3","Type":"ContainerDied","Data":"3739719f7c210ae4f90dc03aa66a1f670827da638e8e561d3e266b796dc94f9e"}
Mar 12 21:25:53.019130 master-0 kubenswrapper[31456]: I0312 21:25:53.017504 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7fa7f-db-sync-v8z2w"]
Mar 12 21:25:53.038185 master-0 kubenswrapper[31456]: I0312 21:25:53.037917 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-31cc-account-create-update-pzkcd"]
Mar 12 21:25:53.054385 master-0 kubenswrapper[31456]: I0312 21:25:53.053551 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-6p46b" podStartSLOduration=4.053527942 podStartE2EDuration="4.053527942s" podCreationTimestamp="2026-03-12 21:25:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:25:53.029264025 +0000 UTC m=+1014.103869353" watchObservedRunningTime="2026-03-12 21:25:53.053527942 +0000 UTC m=+1014.128133270"
Mar 12 21:25:53.121938 master-0 kubenswrapper[31456]: I0312 21:25:53.121367 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fb965499f-tgbww"
Mar 12 21:25:53.170335 master-0 kubenswrapper[31456]: I0312 21:25:53.169255 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8489b8449-72jp4"
Mar 12 21:25:53.302148 master-0 kubenswrapper[31456]: I0312 21:25:53.302107 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e4d47e6-8dce-4f45-bcff-c57b548bb699-dns-svc\") pod \"7e4d47e6-8dce-4f45-bcff-c57b548bb699\" (UID: \"7e4d47e6-8dce-4f45-bcff-c57b548bb699\") "
Mar 12 21:25:53.302246 master-0 kubenswrapper[31456]: I0312 21:25:53.302223 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7e4d47e6-8dce-4f45-bcff-c57b548bb699-dns-swift-storage-0\") pod \"7e4d47e6-8dce-4f45-bcff-c57b548bb699\" (UID: \"7e4d47e6-8dce-4f45-bcff-c57b548bb699\") "
Mar 12 21:25:53.302296 master-0 kubenswrapper[31456]: I0312 21:25:53.302258 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q45cl\" (UniqueName: \"kubernetes.io/projected/7e4d47e6-8dce-4f45-bcff-c57b548bb699-kube-api-access-q45cl\") pod \"7e4d47e6-8dce-4f45-bcff-c57b548bb699\" (UID: \"7e4d47e6-8dce-4f45-bcff-c57b548bb699\") "
Mar 12 21:25:53.302296 master-0 kubenswrapper[31456]: I0312 21:25:53.302288 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e4d47e6-8dce-4f45-bcff-c57b548bb699-config\") pod \"7e4d47e6-8dce-4f45-bcff-c57b548bb699\" (UID: \"7e4d47e6-8dce-4f45-bcff-c57b548bb699\") "
Mar 12 21:25:53.302370 master-0 kubenswrapper[31456]: I0312 21:25:53.302316 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e4d47e6-8dce-4f45-bcff-c57b548bb699-ovsdbserver-sb\") pod \"7e4d47e6-8dce-4f45-bcff-c57b548bb699\" (UID: \"7e4d47e6-8dce-4f45-bcff-c57b548bb699\") "
Mar 12 21:25:53.302370 master-0 kubenswrapper[31456]: I0312 21:25:53.302347 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e4d47e6-8dce-4f45-bcff-c57b548bb699-ovsdbserver-nb\") pod \"7e4d47e6-8dce-4f45-bcff-c57b548bb699\" (UID: \"7e4d47e6-8dce-4f45-bcff-c57b548bb699\") "
Mar 12 21:25:53.303468 master-0 kubenswrapper[31456]: I0312 21:25:53.303430 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e4d47e6-8dce-4f45-bcff-c57b548bb699-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7e4d47e6-8dce-4f45-bcff-c57b548bb699" (UID: "7e4d47e6-8dce-4f45-bcff-c57b548bb699"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:25:53.305420 master-0 kubenswrapper[31456]: I0312 21:25:53.305374 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e4d47e6-8dce-4f45-bcff-c57b548bb699-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7e4d47e6-8dce-4f45-bcff-c57b548bb699" (UID: "7e4d47e6-8dce-4f45-bcff-c57b548bb699"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:25:53.305753 master-0 kubenswrapper[31456]: I0312 21:25:53.305728 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e4d47e6-8dce-4f45-bcff-c57b548bb699-config" (OuterVolumeSpecName: "config") pod "7e4d47e6-8dce-4f45-bcff-c57b548bb699" (UID: "7e4d47e6-8dce-4f45-bcff-c57b548bb699"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:25:53.305753 master-0 kubenswrapper[31456]: I0312 21:25:53.305744 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e4d47e6-8dce-4f45-bcff-c57b548bb699-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7e4d47e6-8dce-4f45-bcff-c57b548bb699" (UID: "7e4d47e6-8dce-4f45-bcff-c57b548bb699"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:25:53.308754 master-0 kubenswrapper[31456]: I0312 21:25:53.306146 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e4d47e6-8dce-4f45-bcff-c57b548bb699-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7e4d47e6-8dce-4f45-bcff-c57b548bb699" (UID: "7e4d47e6-8dce-4f45-bcff-c57b548bb699"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:25:53.310012 master-0 kubenswrapper[31456]: I0312 21:25:53.309978 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e4d47e6-8dce-4f45-bcff-c57b548bb699-kube-api-access-q45cl" (OuterVolumeSpecName: "kube-api-access-q45cl") pod "7e4d47e6-8dce-4f45-bcff-c57b548bb699" (UID: "7e4d47e6-8dce-4f45-bcff-c57b548bb699"). InnerVolumeSpecName "kube-api-access-q45cl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 21:25:53.316598 master-0 kubenswrapper[31456]: I0312 21:25:53.316555 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qs8v4"]
Mar 12 21:25:53.425883 master-0 kubenswrapper[31456]: I0312 21:25:53.423997 31456 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7e4d47e6-8dce-4f45-bcff-c57b548bb699-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Mar 12 21:25:53.425883 master-0 kubenswrapper[31456]: I0312 21:25:53.424041 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q45cl\" (UniqueName: \"kubernetes.io/projected/7e4d47e6-8dce-4f45-bcff-c57b548bb699-kube-api-access-q45cl\") on node \"master-0\" DevicePath \"\""
Mar 12 21:25:53.425883 master-0 kubenswrapper[31456]: I0312 21:25:53.424072 31456 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e4d47e6-8dce-4f45-bcff-c57b548bb699-config\") on node \"master-0\" DevicePath \"\""
Mar 12 21:25:53.425883 master-0 kubenswrapper[31456]: I0312 21:25:53.424085 31456 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e4d47e6-8dce-4f45-bcff-c57b548bb699-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Mar 12 21:25:53.425883 master-0 kubenswrapper[31456]: I0312 21:25:53.424097 31456 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e4d47e6-8dce-4f45-bcff-c57b548bb699-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Mar 12 21:25:53.425883 master-0 kubenswrapper[31456]: I0312 21:25:53.424108 31456 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e4d47e6-8dce-4f45-bcff-c57b548bb699-dns-svc\") on node \"master-0\" DevicePath \"\""
Mar 12 21:25:53.731939 master-0 kubenswrapper[31456]: I0312 21:25:53.723476 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-stkxt"]
Mar 12 21:25:53.830961 master-0 kubenswrapper[31456]: I0312 21:25:53.823893 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fb965499f-tgbww"]
Mar 12 21:25:53.885828 master-0 kubenswrapper[31456]: I0312 21:25:53.885758 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-868b5796f7-9rqnq" Mar 12 21:25:53.949508 master-0 kubenswrapper[31456]: I0312 21:25:53.949361 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4392b001-e025-49ef-8123-160f9e536da3-ovsdbserver-nb\") pod \"4392b001-e025-49ef-8123-160f9e536da3\" (UID: \"4392b001-e025-49ef-8123-160f9e536da3\") " Mar 12 21:25:53.949508 master-0 kubenswrapper[31456]: I0312 21:25:53.949483 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4392b001-e025-49ef-8123-160f9e536da3-config\") pod \"4392b001-e025-49ef-8123-160f9e536da3\" (UID: \"4392b001-e025-49ef-8123-160f9e536da3\") " Mar 12 21:25:53.950176 master-0 kubenswrapper[31456]: I0312 21:25:53.949578 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4392b001-e025-49ef-8123-160f9e536da3-dns-svc\") pod \"4392b001-e025-49ef-8123-160f9e536da3\" (UID: \"4392b001-e025-49ef-8123-160f9e536da3\") " Mar 12 21:25:53.950176 master-0 kubenswrapper[31456]: I0312 21:25:53.949669 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4392b001-e025-49ef-8123-160f9e536da3-dns-swift-storage-0\") pod \"4392b001-e025-49ef-8123-160f9e536da3\" (UID: \"4392b001-e025-49ef-8123-160f9e536da3\") " Mar 12 21:25:53.950176 master-0 kubenswrapper[31456]: I0312 21:25:53.949719 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4392b001-e025-49ef-8123-160f9e536da3-ovsdbserver-sb\") pod \"4392b001-e025-49ef-8123-160f9e536da3\" (UID: \"4392b001-e025-49ef-8123-160f9e536da3\") " Mar 12 21:25:53.950176 master-0 kubenswrapper[31456]: I0312 21:25:53.949748 31456 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-kgn6q\" (UniqueName: \"kubernetes.io/projected/4392b001-e025-49ef-8123-160f9e536da3-kube-api-access-kgn6q\") pod \"4392b001-e025-49ef-8123-160f9e536da3\" (UID: \"4392b001-e025-49ef-8123-160f9e536da3\") " Mar 12 21:25:53.955285 master-0 kubenswrapper[31456]: I0312 21:25:53.955229 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4392b001-e025-49ef-8123-160f9e536da3-kube-api-access-kgn6q" (OuterVolumeSpecName: "kube-api-access-kgn6q") pod "4392b001-e025-49ef-8123-160f9e536da3" (UID: "4392b001-e025-49ef-8123-160f9e536da3"). InnerVolumeSpecName "kube-api-access-kgn6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:25:53.985405 master-0 kubenswrapper[31456]: I0312 21:25:53.985330 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4392b001-e025-49ef-8123-160f9e536da3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4392b001-e025-49ef-8123-160f9e536da3" (UID: "4392b001-e025-49ef-8123-160f9e536da3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:25:54.002999 master-0 kubenswrapper[31456]: I0312 21:25:54.002941 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-30e4b-default-external-api-0"] Mar 12 21:25:54.003454 master-0 kubenswrapper[31456]: E0312 21:25:54.003419 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4392b001-e025-49ef-8123-160f9e536da3" containerName="init" Mar 12 21:25:54.003454 master-0 kubenswrapper[31456]: I0312 21:25:54.003436 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="4392b001-e025-49ef-8123-160f9e536da3" containerName="init" Mar 12 21:25:54.003907 master-0 kubenswrapper[31456]: I0312 21:25:54.003684 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="4392b001-e025-49ef-8123-160f9e536da3" containerName="init" Mar 12 21:25:54.004422 master-0 kubenswrapper[31456]: I0312 21:25:54.004357 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4392b001-e025-49ef-8123-160f9e536da3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4392b001-e025-49ef-8123-160f9e536da3" (UID: "4392b001-e025-49ef-8123-160f9e536da3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:25:54.004741 master-0 kubenswrapper[31456]: I0312 21:25:54.004705 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:54.009541 master-0 kubenswrapper[31456]: I0312 21:25:54.009398 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 12 21:25:54.012965 master-0 kubenswrapper[31456]: I0312 21:25:54.012233 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-30e4b-default-external-config-data" Mar 12 21:25:54.017183 master-0 kubenswrapper[31456]: I0312 21:25:54.017104 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-30e4b-default-external-api-0"] Mar 12 21:25:54.020365 master-0 kubenswrapper[31456]: I0312 21:25:54.020071 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fb965499f-tgbww" event={"ID":"dfcccd02-54d3-4d3c-ab23-4a94d72774b2","Type":"ContainerStarted","Data":"2da74ba708c3679ae6eb7bd863add43ee816ac1a7530ca5d3db711be1f8d4ee8"} Mar 12 21:25:54.034449 master-0 kubenswrapper[31456]: I0312 21:25:54.032925 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-868b5796f7-9rqnq" Mar 12 21:25:54.034449 master-0 kubenswrapper[31456]: I0312 21:25:54.033252 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-868b5796f7-9rqnq" event={"ID":"4392b001-e025-49ef-8123-160f9e536da3","Type":"ContainerDied","Data":"44f522b21bba397a2de6211fb0a81af7350ad342f7a94e70200159ea9238a791"} Mar 12 21:25:54.034449 master-0 kubenswrapper[31456]: I0312 21:25:54.033312 31456 scope.go:117] "RemoveContainer" containerID="3739719f7c210ae4f90dc03aa66a1f670827da638e8e561d3e266b796dc94f9e" Mar 12 21:25:54.035712 master-0 kubenswrapper[31456]: I0312 21:25:54.035652 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-stkxt" event={"ID":"b466beef-2d58-41e2-b8cf-8090ab10be4e","Type":"ContainerStarted","Data":"943e54af7e968a6a8c4b70ab1d85c58a0e2a4bfdfc656ed65ee670bbfbb7d7dc"} Mar 12 21:25:54.058931 master-0 kubenswrapper[31456]: I0312 21:25:54.049426 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qs8v4" event={"ID":"fdf62a30-2c59-4043-99d7-b51fe604f823","Type":"ContainerStarted","Data":"fa51060e34dcf4d112ce1124184c5ff33b338b562fbf68e12f45476b0eda6c20"} Mar 12 21:25:54.058931 master-0 kubenswrapper[31456]: I0312 21:25:54.049484 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qs8v4" event={"ID":"fdf62a30-2c59-4043-99d7-b51fe604f823","Type":"ContainerStarted","Data":"224541f9aa782302ac73456f82b84c614df3c424954fd2545a50b2adf7660d0c"} Mar 12 21:25:54.058931 master-0 kubenswrapper[31456]: I0312 21:25:54.055521 31456 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4392b001-e025-49ef-8123-160f9e536da3-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:54.058931 master-0 kubenswrapper[31456]: I0312 21:25:54.055563 31456 reconciler_common.go:293] "Volume detached for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4392b001-e025-49ef-8123-160f9e536da3-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:54.058931 master-0 kubenswrapper[31456]: I0312 21:25:54.055575 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgn6q\" (UniqueName: \"kubernetes.io/projected/4392b001-e025-49ef-8123-160f9e536da3-kube-api-access-kgn6q\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:54.058931 master-0 kubenswrapper[31456]: I0312 21:25:54.057191 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4392b001-e025-49ef-8123-160f9e536da3-config" (OuterVolumeSpecName: "config") pod "4392b001-e025-49ef-8123-160f9e536da3" (UID: "4392b001-e025-49ef-8123-160f9e536da3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:25:54.061761 master-0 kubenswrapper[31456]: I0312 21:25:54.061713 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4392b001-e025-49ef-8123-160f9e536da3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4392b001-e025-49ef-8123-160f9e536da3" (UID: "4392b001-e025-49ef-8123-160f9e536da3"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:25:54.064849 master-0 kubenswrapper[31456]: I0312 21:25:54.064788 31456 generic.go:334] "Generic (PLEG): container finished" podID="c569c591-2b26-40b5-b7d0-139ad6d98ea3" containerID="14fff21a7d9dbf4a5984193139ace2fbeb2728de03ec2e9be2187e3c08ed0cf5" exitCode=0 Mar 12 21:25:54.064923 master-0 kubenswrapper[31456]: I0312 21:25:54.064871 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-tbph7" event={"ID":"c569c591-2b26-40b5-b7d0-139ad6d98ea3","Type":"ContainerDied","Data":"14fff21a7d9dbf4a5984193139ace2fbeb2728de03ec2e9be2187e3c08ed0cf5"} Mar 12 21:25:54.067672 master-0 kubenswrapper[31456]: I0312 21:25:54.067604 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-31cc-account-create-update-pzkcd" event={"ID":"dd24a59e-fd16-4b56-acb2-3129dab7977a","Type":"ContainerStarted","Data":"27bd3bfeda1473c6bb7069c8ab315a11b123a571a43726fa26ac4fc1249375d1"} Mar 12 21:25:54.067672 master-0 kubenswrapper[31456]: I0312 21:25:54.067655 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-31cc-account-create-update-pzkcd" event={"ID":"dd24a59e-fd16-4b56-acb2-3129dab7977a","Type":"ContainerStarted","Data":"de32e6e5161021436908fd17efa0596a9267b26cfc251a9f1378f21d591b8390"} Mar 12 21:25:54.079409 master-0 kubenswrapper[31456]: I0312 21:25:54.079304 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-qs8v4" podStartSLOduration=3.079283249 podStartE2EDuration="3.079283249s" podCreationTimestamp="2026-03-12 21:25:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:25:54.071660535 +0000 UTC m=+1015.146265873" watchObservedRunningTime="2026-03-12 21:25:54.079283249 +0000 UTC m=+1015.153888577" Mar 12 21:25:54.084322 master-0 kubenswrapper[31456]: I0312 21:25:54.084276 31456 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8489b8449-72jp4" Mar 12 21:25:54.085023 master-0 kubenswrapper[31456]: I0312 21:25:54.084751 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-db-sync-v8z2w" event={"ID":"9b9d5522-06bf-4e0e-a3fc-ab594cb040a1","Type":"ContainerStarted","Data":"aef1a7e7a3c93adc9d9a5e903bd81dd04b52053d8c42e9f0ead8d496691cfd68"} Mar 12 21:25:54.111141 master-0 kubenswrapper[31456]: I0312 21:25:54.111091 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4392b001-e025-49ef-8123-160f9e536da3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4392b001-e025-49ef-8123-160f9e536da3" (UID: "4392b001-e025-49ef-8123-160f9e536da3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:25:54.144261 master-0 kubenswrapper[31456]: I0312 21:25:54.144183 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-31cc-account-create-update-pzkcd" podStartSLOduration=3.144161719 podStartE2EDuration="3.144161719s" podCreationTimestamp="2026-03-12 21:25:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:25:54.130625792 +0000 UTC m=+1015.205231120" watchObservedRunningTime="2026-03-12 21:25:54.144161719 +0000 UTC m=+1015.218767037" Mar 12 21:25:54.168313 master-0 kubenswrapper[31456]: I0312 21:25:54.168232 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-771d56ec-6f7c-4891-8052-556577fed26a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c4d34f1a-cd35-4884-8afc-4ae2cb2c6555\") pod \"glance-30e4b-default-external-api-0\" (UID: \"89ee4fa9-5d55-4cfd-b512-8ca49d17a947\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:54.168514 master-0 kubenswrapper[31456]: I0312 
21:25:54.168410 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fwjb\" (UniqueName: \"kubernetes.io/projected/89ee4fa9-5d55-4cfd-b512-8ca49d17a947-kube-api-access-4fwjb\") pod \"glance-30e4b-default-external-api-0\" (UID: \"89ee4fa9-5d55-4cfd-b512-8ca49d17a947\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:54.168680 master-0 kubenswrapper[31456]: I0312 21:25:54.168610 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89ee4fa9-5d55-4cfd-b512-8ca49d17a947-logs\") pod \"glance-30e4b-default-external-api-0\" (UID: \"89ee4fa9-5d55-4cfd-b512-8ca49d17a947\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:54.168680 master-0 kubenswrapper[31456]: I0312 21:25:54.168648 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89ee4fa9-5d55-4cfd-b512-8ca49d17a947-combined-ca-bundle\") pod \"glance-30e4b-default-external-api-0\" (UID: \"89ee4fa9-5d55-4cfd-b512-8ca49d17a947\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:54.168760 master-0 kubenswrapper[31456]: I0312 21:25:54.168712 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89ee4fa9-5d55-4cfd-b512-8ca49d17a947-scripts\") pod \"glance-30e4b-default-external-api-0\" (UID: \"89ee4fa9-5d55-4cfd-b512-8ca49d17a947\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:54.168760 master-0 kubenswrapper[31456]: I0312 21:25:54.168744 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89ee4fa9-5d55-4cfd-b512-8ca49d17a947-config-data\") pod \"glance-30e4b-default-external-api-0\" (UID: 
\"89ee4fa9-5d55-4cfd-b512-8ca49d17a947\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:54.168869 master-0 kubenswrapper[31456]: I0312 21:25:54.168764 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89ee4fa9-5d55-4cfd-b512-8ca49d17a947-httpd-run\") pod \"glance-30e4b-default-external-api-0\" (UID: \"89ee4fa9-5d55-4cfd-b512-8ca49d17a947\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:54.168903 master-0 kubenswrapper[31456]: I0312 21:25:54.168873 31456 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4392b001-e025-49ef-8123-160f9e536da3-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:54.168903 master-0 kubenswrapper[31456]: I0312 21:25:54.168895 31456 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4392b001-e025-49ef-8123-160f9e536da3-config\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:54.168960 master-0 kubenswrapper[31456]: I0312 21:25:54.168906 31456 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4392b001-e025-49ef-8123-160f9e536da3-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:54.254854 master-0 kubenswrapper[31456]: I0312 21:25:54.254154 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8489b8449-72jp4"] Mar 12 21:25:54.260638 master-0 kubenswrapper[31456]: I0312 21:25:54.260564 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8489b8449-72jp4"] Mar 12 21:25:54.271189 master-0 kubenswrapper[31456]: I0312 21:25:54.271138 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fwjb\" (UniqueName: \"kubernetes.io/projected/89ee4fa9-5d55-4cfd-b512-8ca49d17a947-kube-api-access-4fwjb\") pod 
\"glance-30e4b-default-external-api-0\" (UID: \"89ee4fa9-5d55-4cfd-b512-8ca49d17a947\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:54.271313 master-0 kubenswrapper[31456]: I0312 21:25:54.271215 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89ee4fa9-5d55-4cfd-b512-8ca49d17a947-logs\") pod \"glance-30e4b-default-external-api-0\" (UID: \"89ee4fa9-5d55-4cfd-b512-8ca49d17a947\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:54.271313 master-0 kubenswrapper[31456]: I0312 21:25:54.271248 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89ee4fa9-5d55-4cfd-b512-8ca49d17a947-combined-ca-bundle\") pod \"glance-30e4b-default-external-api-0\" (UID: \"89ee4fa9-5d55-4cfd-b512-8ca49d17a947\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:54.271514 master-0 kubenswrapper[31456]: I0312 21:25:54.271457 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89ee4fa9-5d55-4cfd-b512-8ca49d17a947-scripts\") pod \"glance-30e4b-default-external-api-0\" (UID: \"89ee4fa9-5d55-4cfd-b512-8ca49d17a947\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:54.271569 master-0 kubenswrapper[31456]: I0312 21:25:54.271513 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89ee4fa9-5d55-4cfd-b512-8ca49d17a947-config-data\") pod \"glance-30e4b-default-external-api-0\" (UID: \"89ee4fa9-5d55-4cfd-b512-8ca49d17a947\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:54.271569 master-0 kubenswrapper[31456]: I0312 21:25:54.271530 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/89ee4fa9-5d55-4cfd-b512-8ca49d17a947-httpd-run\") pod \"glance-30e4b-default-external-api-0\" (UID: \"89ee4fa9-5d55-4cfd-b512-8ca49d17a947\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:54.272312 master-0 kubenswrapper[31456]: I0312 21:25:54.271940 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89ee4fa9-5d55-4cfd-b512-8ca49d17a947-httpd-run\") pod \"glance-30e4b-default-external-api-0\" (UID: \"89ee4fa9-5d55-4cfd-b512-8ca49d17a947\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:54.272312 master-0 kubenswrapper[31456]: I0312 21:25:54.272249 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89ee4fa9-5d55-4cfd-b512-8ca49d17a947-logs\") pod \"glance-30e4b-default-external-api-0\" (UID: \"89ee4fa9-5d55-4cfd-b512-8ca49d17a947\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:54.272749 master-0 kubenswrapper[31456]: I0312 21:25:54.272606 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-771d56ec-6f7c-4891-8052-556577fed26a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c4d34f1a-cd35-4884-8afc-4ae2cb2c6555\") pod \"glance-30e4b-default-external-api-0\" (UID: \"89ee4fa9-5d55-4cfd-b512-8ca49d17a947\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:54.276614 master-0 kubenswrapper[31456]: I0312 21:25:54.276563 31456 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 21:25:54.276614 master-0 kubenswrapper[31456]: I0312 21:25:54.276607 31456 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-771d56ec-6f7c-4891-8052-556577fed26a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c4d34f1a-cd35-4884-8afc-4ae2cb2c6555\") pod \"glance-30e4b-default-external-api-0\" (UID: \"89ee4fa9-5d55-4cfd-b512-8ca49d17a947\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/43685901e29eb1cf6142e4c7db2bf2a74bc59e8789b390024af9a8010a27963c/globalmount\"" pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:54.278158 master-0 kubenswrapper[31456]: I0312 21:25:54.278115 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89ee4fa9-5d55-4cfd-b512-8ca49d17a947-scripts\") pod \"glance-30e4b-default-external-api-0\" (UID: \"89ee4fa9-5d55-4cfd-b512-8ca49d17a947\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:54.280246 master-0 kubenswrapper[31456]: I0312 21:25:54.280201 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89ee4fa9-5d55-4cfd-b512-8ca49d17a947-combined-ca-bundle\") pod \"glance-30e4b-default-external-api-0\" (UID: \"89ee4fa9-5d55-4cfd-b512-8ca49d17a947\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:54.286550 master-0 kubenswrapper[31456]: I0312 21:25:54.286405 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89ee4fa9-5d55-4cfd-b512-8ca49d17a947-config-data\") pod \"glance-30e4b-default-external-api-0\" (UID: \"89ee4fa9-5d55-4cfd-b512-8ca49d17a947\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:54.303895 master-0 kubenswrapper[31456]: I0312 21:25:54.303840 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-30e4b-default-internal-api-0"] Mar 12 21:25:54.305966 
master-0 kubenswrapper[31456]: I0312 21:25:54.305010 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fwjb\" (UniqueName: \"kubernetes.io/projected/89ee4fa9-5d55-4cfd-b512-8ca49d17a947-kube-api-access-4fwjb\") pod \"glance-30e4b-default-external-api-0\" (UID: \"89ee4fa9-5d55-4cfd-b512-8ca49d17a947\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:54.322434 master-0 kubenswrapper[31456]: I0312 21:25:54.314895 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:25:54.322434 master-0 kubenswrapper[31456]: I0312 21:25:54.319925 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-30e4b-default-internal-config-data" Mar 12 21:25:54.374498 master-0 kubenswrapper[31456]: I0312 21:25:54.371185 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-30e4b-default-internal-api-0"] Mar 12 21:25:54.477617 master-0 kubenswrapper[31456]: I0312 21:25:54.477520 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-46263d7f-fe44-41c1-8ff4-0f4db04f556e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4b842188-a8b2-4def-ad0e-7cbb4053b9e9\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"db38f2fc-764f-46df-b914-096e168d8a8c\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:25:54.478045 master-0 kubenswrapper[31456]: I0312 21:25:54.477643 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/db38f2fc-764f-46df-b914-096e168d8a8c-httpd-run\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"db38f2fc-764f-46df-b914-096e168d8a8c\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:25:54.478045 master-0 kubenswrapper[31456]: I0312 21:25:54.477703 31456 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db38f2fc-764f-46df-b914-096e168d8a8c-config-data\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"db38f2fc-764f-46df-b914-096e168d8a8c\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:25:54.478045 master-0 kubenswrapper[31456]: I0312 21:25:54.477778 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db38f2fc-764f-46df-b914-096e168d8a8c-combined-ca-bundle\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"db38f2fc-764f-46df-b914-096e168d8a8c\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:25:54.478045 master-0 kubenswrapper[31456]: I0312 21:25:54.477857 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db38f2fc-764f-46df-b914-096e168d8a8c-logs\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"db38f2fc-764f-46df-b914-096e168d8a8c\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:25:54.478045 master-0 kubenswrapper[31456]: I0312 21:25:54.477889 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97cjq\" (UniqueName: \"kubernetes.io/projected/db38f2fc-764f-46df-b914-096e168d8a8c-kube-api-access-97cjq\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"db38f2fc-764f-46df-b914-096e168d8a8c\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:25:54.478045 master-0 kubenswrapper[31456]: I0312 21:25:54.477989 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db38f2fc-764f-46df-b914-096e168d8a8c-scripts\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"db38f2fc-764f-46df-b914-096e168d8a8c\") " 
pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:54.499186 master-0 kubenswrapper[31456]: I0312 21:25:54.499126 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-868b5796f7-9rqnq"]
Mar 12 21:25:54.525858 master-0 kubenswrapper[31456]: I0312 21:25:54.522657 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-868b5796f7-9rqnq"]
Mar 12 21:25:54.581269 master-0 kubenswrapper[31456]: I0312 21:25:54.581032 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db38f2fc-764f-46df-b914-096e168d8a8c-scripts\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"db38f2fc-764f-46df-b914-096e168d8a8c\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:54.583330 master-0 kubenswrapper[31456]: I0312 21:25:54.581489 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-46263d7f-fe44-41c1-8ff4-0f4db04f556e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4b842188-a8b2-4def-ad0e-7cbb4053b9e9\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"db38f2fc-764f-46df-b914-096e168d8a8c\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:54.583330 master-0 kubenswrapper[31456]: I0312 21:25:54.581561 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/db38f2fc-764f-46df-b914-096e168d8a8c-httpd-run\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"db38f2fc-764f-46df-b914-096e168d8a8c\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:54.583330 master-0 kubenswrapper[31456]: I0312 21:25:54.581609 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db38f2fc-764f-46df-b914-096e168d8a8c-config-data\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"db38f2fc-764f-46df-b914-096e168d8a8c\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:54.583330 master-0 kubenswrapper[31456]: I0312 21:25:54.581668 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db38f2fc-764f-46df-b914-096e168d8a8c-combined-ca-bundle\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"db38f2fc-764f-46df-b914-096e168d8a8c\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:54.583330 master-0 kubenswrapper[31456]: I0312 21:25:54.581708 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db38f2fc-764f-46df-b914-096e168d8a8c-logs\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"db38f2fc-764f-46df-b914-096e168d8a8c\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:54.583330 master-0 kubenswrapper[31456]: I0312 21:25:54.581730 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97cjq\" (UniqueName: \"kubernetes.io/projected/db38f2fc-764f-46df-b914-096e168d8a8c-kube-api-access-97cjq\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"db38f2fc-764f-46df-b914-096e168d8a8c\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:54.588018 master-0 kubenswrapper[31456]: I0312 21:25:54.585118 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/db38f2fc-764f-46df-b914-096e168d8a8c-httpd-run\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"db38f2fc-764f-46df-b914-096e168d8a8c\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:54.588018 master-0 kubenswrapper[31456]: I0312 21:25:54.586119 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db38f2fc-764f-46df-b914-096e168d8a8c-scripts\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"db38f2fc-764f-46df-b914-096e168d8a8c\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:54.588018 master-0 kubenswrapper[31456]: I0312 21:25:54.586376 31456 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 12 21:25:54.588018 master-0 kubenswrapper[31456]: I0312 21:25:54.586397 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db38f2fc-764f-46df-b914-096e168d8a8c-logs\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"db38f2fc-764f-46df-b914-096e168d8a8c\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:54.588018 master-0 kubenswrapper[31456]: I0312 21:25:54.586418 31456 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-46263d7f-fe44-41c1-8ff4-0f4db04f556e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4b842188-a8b2-4def-ad0e-7cbb4053b9e9\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"db38f2fc-764f-46df-b914-096e168d8a8c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/3b47ef71cabc18af87317356c30c781b24b16858528acb95d991bfdc6fcfef3f/globalmount\"" pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:54.590946 master-0 kubenswrapper[31456]: I0312 21:25:54.590230 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db38f2fc-764f-46df-b914-096e168d8a8c-combined-ca-bundle\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"db38f2fc-764f-46df-b914-096e168d8a8c\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:54.594833 master-0 kubenswrapper[31456]: I0312 21:25:54.594363 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db38f2fc-764f-46df-b914-096e168d8a8c-config-data\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"db38f2fc-764f-46df-b914-096e168d8a8c\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:54.614540 master-0 kubenswrapper[31456]: I0312 21:25:54.610780 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97cjq\" (UniqueName: \"kubernetes.io/projected/db38f2fc-764f-46df-b914-096e168d8a8c-kube-api-access-97cjq\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"db38f2fc-764f-46df-b914-096e168d8a8c\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:54.925542 master-0 kubenswrapper[31456]: I0312 21:25:54.925410 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-30e4b-default-external-api-0"]
Mar 12 21:25:54.929830 master-0 kubenswrapper[31456]: E0312 21:25:54.926321 31456 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[glance], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-30e4b-default-external-api-0" podUID="89ee4fa9-5d55-4cfd-b512-8ca49d17a947"
Mar 12 21:25:55.062106 master-0 kubenswrapper[31456]: I0312 21:25:55.058447 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-30e4b-default-internal-api-0"]
Mar 12 21:25:55.068831 master-0 kubenswrapper[31456]: E0312 21:25:55.065860 31456 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[glance], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-30e4b-default-internal-api-0" podUID="db38f2fc-764f-46df-b914-096e168d8a8c"
Mar 12 21:25:55.105739 master-0 kubenswrapper[31456]: I0312 21:25:55.105648 31456 generic.go:334] "Generic (PLEG): container finished" podID="dfcccd02-54d3-4d3c-ab23-4a94d72774b2" containerID="ae82d49b9f45e1fc7b9ece309ab0ff32c0dae96c709845d6108c6aed2f7f373a" exitCode=0
Mar 12 21:25:55.105739 master-0 kubenswrapper[31456]: I0312 21:25:55.105735 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fb965499f-tgbww" event={"ID":"dfcccd02-54d3-4d3c-ab23-4a94d72774b2","Type":"ContainerDied","Data":"ae82d49b9f45e1fc7b9ece309ab0ff32c0dae96c709845d6108c6aed2f7f373a"}
Mar 12 21:25:55.109908 master-0 kubenswrapper[31456]: I0312 21:25:55.109854 31456 generic.go:334] "Generic (PLEG): container finished" podID="dd24a59e-fd16-4b56-acb2-3129dab7977a" containerID="27bd3bfeda1473c6bb7069c8ab315a11b123a571a43726fa26ac4fc1249375d1" exitCode=0
Mar 12 21:25:55.109989 master-0 kubenswrapper[31456]: I0312 21:25:55.109967 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:55.110140 master-0 kubenswrapper[31456]: I0312 21:25:55.110072 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-31cc-account-create-update-pzkcd" event={"ID":"dd24a59e-fd16-4b56-acb2-3129dab7977a","Type":"ContainerDied","Data":"27bd3bfeda1473c6bb7069c8ab315a11b123a571a43726fa26ac4fc1249375d1"}
Mar 12 21:25:55.110200 master-0 kubenswrapper[31456]: I0312 21:25:55.110174 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-30e4b-default-external-api-0"
Mar 12 21:25:55.284406 master-0 kubenswrapper[31456]: I0312 21:25:55.284345 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:55.313975 master-0 kubenswrapper[31456]: I0312 21:25:55.305914 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4392b001-e025-49ef-8123-160f9e536da3" path="/var/lib/kubelet/pods/4392b001-e025-49ef-8123-160f9e536da3/volumes"
Mar 12 21:25:55.313975 master-0 kubenswrapper[31456]: I0312 21:25:55.307946 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e4d47e6-8dce-4f45-bcff-c57b548bb699" path="/var/lib/kubelet/pods/7e4d47e6-8dce-4f45-bcff-c57b548bb699/volumes"
Mar 12 21:25:55.330938 master-0 kubenswrapper[31456]: I0312 21:25:55.330883 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-30e4b-default-external-api-0"
Mar 12 21:25:55.532982 master-0 kubenswrapper[31456]: I0312 21:25:55.532894 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89ee4fa9-5d55-4cfd-b512-8ca49d17a947-scripts\") pod \"89ee4fa9-5d55-4cfd-b512-8ca49d17a947\" (UID: \"89ee4fa9-5d55-4cfd-b512-8ca49d17a947\") "
Mar 12 21:25:55.532982 master-0 kubenswrapper[31456]: I0312 21:25:55.532965 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db38f2fc-764f-46df-b914-096e168d8a8c-logs\") pod \"db38f2fc-764f-46df-b914-096e168d8a8c\" (UID: \"db38f2fc-764f-46df-b914-096e168d8a8c\") "
Mar 12 21:25:55.533246 master-0 kubenswrapper[31456]: I0312 21:25:55.533034 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89ee4fa9-5d55-4cfd-b512-8ca49d17a947-config-data\") pod \"89ee4fa9-5d55-4cfd-b512-8ca49d17a947\" (UID: \"89ee4fa9-5d55-4cfd-b512-8ca49d17a947\") "
Mar 12 21:25:55.533246 master-0 kubenswrapper[31456]: I0312 21:25:55.533065 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89ee4fa9-5d55-4cfd-b512-8ca49d17a947-logs\") pod \"89ee4fa9-5d55-4cfd-b512-8ca49d17a947\" (UID: \"89ee4fa9-5d55-4cfd-b512-8ca49d17a947\") "
Mar 12 21:25:55.533889 master-0 kubenswrapper[31456]: I0312 21:25:55.533659 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89ee4fa9-5d55-4cfd-b512-8ca49d17a947-logs" (OuterVolumeSpecName: "logs") pod "89ee4fa9-5d55-4cfd-b512-8ca49d17a947" (UID: "89ee4fa9-5d55-4cfd-b512-8ca49d17a947"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 21:25:55.536440 master-0 kubenswrapper[31456]: I0312 21:25:55.536193 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db38f2fc-764f-46df-b914-096e168d8a8c-config-data\") pod \"db38f2fc-764f-46df-b914-096e168d8a8c\" (UID: \"db38f2fc-764f-46df-b914-096e168d8a8c\") "
Mar 12 21:25:55.536526 master-0 kubenswrapper[31456]: I0312 21:25:55.536465 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db38f2fc-764f-46df-b914-096e168d8a8c-scripts\") pod \"db38f2fc-764f-46df-b914-096e168d8a8c\" (UID: \"db38f2fc-764f-46df-b914-096e168d8a8c\") "
Mar 12 21:25:55.536616 master-0 kubenswrapper[31456]: I0312 21:25:55.536566 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fwjb\" (UniqueName: \"kubernetes.io/projected/89ee4fa9-5d55-4cfd-b512-8ca49d17a947-kube-api-access-4fwjb\") pod \"89ee4fa9-5d55-4cfd-b512-8ca49d17a947\" (UID: \"89ee4fa9-5d55-4cfd-b512-8ca49d17a947\") "
Mar 12 21:25:55.537692 master-0 kubenswrapper[31456]: I0312 21:25:55.536737 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97cjq\" (UniqueName: \"kubernetes.io/projected/db38f2fc-764f-46df-b914-096e168d8a8c-kube-api-access-97cjq\") pod \"db38f2fc-764f-46df-b914-096e168d8a8c\" (UID: \"db38f2fc-764f-46df-b914-096e168d8a8c\") "
Mar 12 21:25:55.537692 master-0 kubenswrapper[31456]: I0312 21:25:55.536773 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/db38f2fc-764f-46df-b914-096e168d8a8c-httpd-run\") pod \"db38f2fc-764f-46df-b914-096e168d8a8c\" (UID: \"db38f2fc-764f-46df-b914-096e168d8a8c\") "
Mar 12 21:25:55.537692 master-0 kubenswrapper[31456]: I0312 21:25:55.536859 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89ee4fa9-5d55-4cfd-b512-8ca49d17a947-combined-ca-bundle\") pod \"89ee4fa9-5d55-4cfd-b512-8ca49d17a947\" (UID: \"89ee4fa9-5d55-4cfd-b512-8ca49d17a947\") "
Mar 12 21:25:55.537692 master-0 kubenswrapper[31456]: I0312 21:25:55.536883 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db38f2fc-764f-46df-b914-096e168d8a8c-combined-ca-bundle\") pod \"db38f2fc-764f-46df-b914-096e168d8a8c\" (UID: \"db38f2fc-764f-46df-b914-096e168d8a8c\") "
Mar 12 21:25:55.537692 master-0 kubenswrapper[31456]: I0312 21:25:55.536908 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89ee4fa9-5d55-4cfd-b512-8ca49d17a947-httpd-run\") pod \"89ee4fa9-5d55-4cfd-b512-8ca49d17a947\" (UID: \"89ee4fa9-5d55-4cfd-b512-8ca49d17a947\") "
Mar 12 21:25:55.537882 master-0 kubenswrapper[31456]: I0312 21:25:55.537721 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89ee4fa9-5d55-4cfd-b512-8ca49d17a947-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "89ee4fa9-5d55-4cfd-b512-8ca49d17a947" (UID: "89ee4fa9-5d55-4cfd-b512-8ca49d17a947"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 21:25:55.542058 master-0 kubenswrapper[31456]: I0312 21:25:55.541389 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89ee4fa9-5d55-4cfd-b512-8ca49d17a947-scripts" (OuterVolumeSpecName: "scripts") pod "89ee4fa9-5d55-4cfd-b512-8ca49d17a947" (UID: "89ee4fa9-5d55-4cfd-b512-8ca49d17a947"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:25:55.542058 master-0 kubenswrapper[31456]: I0312 21:25:55.541744 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db38f2fc-764f-46df-b914-096e168d8a8c-logs" (OuterVolumeSpecName: "logs") pod "db38f2fc-764f-46df-b914-096e168d8a8c" (UID: "db38f2fc-764f-46df-b914-096e168d8a8c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 21:25:55.547931 master-0 kubenswrapper[31456]: I0312 21:25:55.543361 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db38f2fc-764f-46df-b914-096e168d8a8c-config-data" (OuterVolumeSpecName: "config-data") pod "db38f2fc-764f-46df-b914-096e168d8a8c" (UID: "db38f2fc-764f-46df-b914-096e168d8a8c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:25:55.547931 master-0 kubenswrapper[31456]: I0312 21:25:55.544757 31456 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89ee4fa9-5d55-4cfd-b512-8ca49d17a947-httpd-run\") on node \"master-0\" DevicePath \"\""
Mar 12 21:25:55.547931 master-0 kubenswrapper[31456]: I0312 21:25:55.544779 31456 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89ee4fa9-5d55-4cfd-b512-8ca49d17a947-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 21:25:55.547931 master-0 kubenswrapper[31456]: I0312 21:25:55.544909 31456 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db38f2fc-764f-46df-b914-096e168d8a8c-logs\") on node \"master-0\" DevicePath \"\""
Mar 12 21:25:55.547931 master-0 kubenswrapper[31456]: I0312 21:25:55.544922 31456 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89ee4fa9-5d55-4cfd-b512-8ca49d17a947-logs\") on node \"master-0\" DevicePath \"\""
Mar 12 21:25:55.547931 master-0 kubenswrapper[31456]: I0312 21:25:55.544935 31456 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db38f2fc-764f-46df-b914-096e168d8a8c-config-data\") on node \"master-0\" DevicePath \"\""
Mar 12 21:25:55.547931 master-0 kubenswrapper[31456]: I0312 21:25:55.544792 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89ee4fa9-5d55-4cfd-b512-8ca49d17a947-config-data" (OuterVolumeSpecName: "config-data") pod "89ee4fa9-5d55-4cfd-b512-8ca49d17a947" (UID: "89ee4fa9-5d55-4cfd-b512-8ca49d17a947"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:25:55.547931 master-0 kubenswrapper[31456]: I0312 21:25:55.545358 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db38f2fc-764f-46df-b914-096e168d8a8c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "db38f2fc-764f-46df-b914-096e168d8a8c" (UID: "db38f2fc-764f-46df-b914-096e168d8a8c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 21:25:55.547931 master-0 kubenswrapper[31456]: I0312 21:25:55.547880 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-tbph7"
Mar 12 21:25:55.552966 master-0 kubenswrapper[31456]: I0312 21:25:55.552343 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db38f2fc-764f-46df-b914-096e168d8a8c-scripts" (OuterVolumeSpecName: "scripts") pod "db38f2fc-764f-46df-b914-096e168d8a8c" (UID: "db38f2fc-764f-46df-b914-096e168d8a8c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:25:55.563507 master-0 kubenswrapper[31456]: I0312 21:25:55.562558 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89ee4fa9-5d55-4cfd-b512-8ca49d17a947-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89ee4fa9-5d55-4cfd-b512-8ca49d17a947" (UID: "89ee4fa9-5d55-4cfd-b512-8ca49d17a947"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:25:55.568093 master-0 kubenswrapper[31456]: I0312 21:25:55.565780 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db38f2fc-764f-46df-b914-096e168d8a8c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db38f2fc-764f-46df-b914-096e168d8a8c" (UID: "db38f2fc-764f-46df-b914-096e168d8a8c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:25:55.568213 master-0 kubenswrapper[31456]: I0312 21:25:55.568153 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89ee4fa9-5d55-4cfd-b512-8ca49d17a947-kube-api-access-4fwjb" (OuterVolumeSpecName: "kube-api-access-4fwjb") pod "89ee4fa9-5d55-4cfd-b512-8ca49d17a947" (UID: "89ee4fa9-5d55-4cfd-b512-8ca49d17a947"). InnerVolumeSpecName "kube-api-access-4fwjb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 21:25:55.578094 master-0 kubenswrapper[31456]: I0312 21:25:55.577970 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db38f2fc-764f-46df-b914-096e168d8a8c-kube-api-access-97cjq" (OuterVolumeSpecName: "kube-api-access-97cjq") pod "db38f2fc-764f-46df-b914-096e168d8a8c" (UID: "db38f2fc-764f-46df-b914-096e168d8a8c"). InnerVolumeSpecName "kube-api-access-97cjq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 21:25:55.647520 master-0 kubenswrapper[31456]: I0312 21:25:55.647456 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c569c591-2b26-40b5-b7d0-139ad6d98ea3-operator-scripts\") pod \"c569c591-2b26-40b5-b7d0-139ad6d98ea3\" (UID: \"c569c591-2b26-40b5-b7d0-139ad6d98ea3\") "
Mar 12 21:25:55.649532 master-0 kubenswrapper[31456]: I0312 21:25:55.649485 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9jlr\" (UniqueName: \"kubernetes.io/projected/c569c591-2b26-40b5-b7d0-139ad6d98ea3-kube-api-access-q9jlr\") pod \"c569c591-2b26-40b5-b7d0-139ad6d98ea3\" (UID: \"c569c591-2b26-40b5-b7d0-139ad6d98ea3\") "
Mar 12 21:25:55.655644 master-0 kubenswrapper[31456]: I0312 21:25:55.651331 31456 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89ee4fa9-5d55-4cfd-b512-8ca49d17a947-config-data\") on node \"master-0\" DevicePath \"\""
Mar 12 21:25:55.655644 master-0 kubenswrapper[31456]: I0312 21:25:55.651492 31456 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db38f2fc-764f-46df-b914-096e168d8a8c-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 21:25:55.655644 master-0 kubenswrapper[31456]: I0312 21:25:55.651508 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fwjb\" (UniqueName: \"kubernetes.io/projected/89ee4fa9-5d55-4cfd-b512-8ca49d17a947-kube-api-access-4fwjb\") on node \"master-0\" DevicePath \"\""
Mar 12 21:25:55.655644 master-0 kubenswrapper[31456]: I0312 21:25:55.651520 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97cjq\" (UniqueName: \"kubernetes.io/projected/db38f2fc-764f-46df-b914-096e168d8a8c-kube-api-access-97cjq\") on node \"master-0\" DevicePath \"\""
Mar 12 21:25:55.655644 master-0 kubenswrapper[31456]: I0312 21:25:55.651529 31456 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/db38f2fc-764f-46df-b914-096e168d8a8c-httpd-run\") on node \"master-0\" DevicePath \"\""
Mar 12 21:25:55.655644 master-0 kubenswrapper[31456]: I0312 21:25:55.651538 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89ee4fa9-5d55-4cfd-b512-8ca49d17a947-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 21:25:55.655644 master-0 kubenswrapper[31456]: I0312 21:25:55.651546 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db38f2fc-764f-46df-b914-096e168d8a8c-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 21:25:55.655644 master-0 kubenswrapper[31456]: I0312 21:25:55.652837 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c569c591-2b26-40b5-b7d0-139ad6d98ea3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c569c591-2b26-40b5-b7d0-139ad6d98ea3" (UID: "c569c591-2b26-40b5-b7d0-139ad6d98ea3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:25:55.657763 master-0 kubenswrapper[31456]: I0312 21:25:55.657683 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c569c591-2b26-40b5-b7d0-139ad6d98ea3-kube-api-access-q9jlr" (OuterVolumeSpecName: "kube-api-access-q9jlr") pod "c569c591-2b26-40b5-b7d0-139ad6d98ea3" (UID: "c569c591-2b26-40b5-b7d0-139ad6d98ea3"). InnerVolumeSpecName "kube-api-access-q9jlr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 21:25:55.753562 master-0 kubenswrapper[31456]: I0312 21:25:55.753431 31456 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c569c591-2b26-40b5-b7d0-139ad6d98ea3-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 21:25:55.753562 master-0 kubenswrapper[31456]: I0312 21:25:55.753543 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9jlr\" (UniqueName: \"kubernetes.io/projected/c569c591-2b26-40b5-b7d0-139ad6d98ea3-kube-api-access-q9jlr\") on node \"master-0\" DevicePath \"\""
Mar 12 21:25:55.877173 master-0 kubenswrapper[31456]: I0312 21:25:55.877128 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-771d56ec-6f7c-4891-8052-556577fed26a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c4d34f1a-cd35-4884-8afc-4ae2cb2c6555\") pod \"glance-30e4b-default-external-api-0\" (UID: \"89ee4fa9-5d55-4cfd-b512-8ca49d17a947\") " pod="openstack/glance-30e4b-default-external-api-0"
Mar 12 21:25:55.956403 master-0 kubenswrapper[31456]: I0312 21:25:55.956343 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c4d34f1a-cd35-4884-8afc-4ae2cb2c6555\") pod \"89ee4fa9-5d55-4cfd-b512-8ca49d17a947\" (UID: \"89ee4fa9-5d55-4cfd-b512-8ca49d17a947\") "
Mar 12 21:25:56.120214 master-0 kubenswrapper[31456]: I0312 21:25:56.120151 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-tbph7" event={"ID":"c569c591-2b26-40b5-b7d0-139ad6d98ea3","Type":"ContainerDied","Data":"27b368c645927c2511341d2d5ff02af032c001bd6162be3be407a69f2fb02895"}
Mar 12 21:25:56.120214 master-0 kubenswrapper[31456]: I0312 21:25:56.120203 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27b368c645927c2511341d2d5ff02af032c001bd6162be3be407a69f2fb02895"
Mar 12 21:25:56.120779 master-0 kubenswrapper[31456]: I0312 21:25:56.120251 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-tbph7"
Mar 12 21:25:56.126429 master-0 kubenswrapper[31456]: I0312 21:25:56.126088 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:56.126429 master-0 kubenswrapper[31456]: I0312 21:25:56.126140 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fb965499f-tgbww" event={"ID":"dfcccd02-54d3-4d3c-ab23-4a94d72774b2","Type":"ContainerStarted","Data":"c7300c8c12836196dfb71682830fc8ed4cb0c0cd765ecf8242f903039c038a8e"}
Mar 12 21:25:56.126429 master-0 kubenswrapper[31456]: I0312 21:25:56.126219 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fb965499f-tgbww"
Mar 12 21:25:56.126429 master-0 kubenswrapper[31456]: I0312 21:25:56.126172 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-30e4b-default-external-api-0"
Mar 12 21:25:56.539905 master-0 kubenswrapper[31456]: I0312 21:25:56.539680 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fb965499f-tgbww" podStartSLOduration=4.539660667 podStartE2EDuration="4.539660667s" podCreationTimestamp="2026-03-12 21:25:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:25:56.178085086 +0000 UTC m=+1017.252690414" watchObservedRunningTime="2026-03-12 21:25:56.539660667 +0000 UTC m=+1017.614265985"
Mar 12 21:25:56.610269 master-0 kubenswrapper[31456]: I0312 21:25:56.610194 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-30e4b-default-internal-api-0"]
Mar 12 21:25:56.641831 master-0 kubenswrapper[31456]: I0312 21:25:56.638328 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-30e4b-default-internal-api-0"]
Mar 12 21:25:56.651821 master-0 kubenswrapper[31456]: I0312 21:25:56.647740 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-30e4b-default-internal-api-0"]
Mar 12 21:25:56.651821 master-0 kubenswrapper[31456]: E0312 21:25:56.649549 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c569c591-2b26-40b5-b7d0-139ad6d98ea3" containerName="mariadb-database-create"
Mar 12 21:25:56.651821 master-0 kubenswrapper[31456]: I0312 21:25:56.649573 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="c569c591-2b26-40b5-b7d0-139ad6d98ea3" containerName="mariadb-database-create"
Mar 12 21:25:56.651821 master-0 kubenswrapper[31456]: I0312 21:25:56.649876 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="c569c591-2b26-40b5-b7d0-139ad6d98ea3" containerName="mariadb-database-create"
Mar 12 21:25:56.666830 master-0 kubenswrapper[31456]: I0312 21:25:56.662753 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:56.666830 master-0 kubenswrapper[31456]: I0312 21:25:56.666151 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-30e4b-default-internal-config-data"
Mar 12 21:25:56.692161 master-0 kubenswrapper[31456]: I0312 21:25:56.691659 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-30e4b-default-internal-api-0"]
Mar 12 21:25:56.827915 master-0 kubenswrapper[31456]: I0312 21:25:56.815082 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50627859-96f2-4a4c-9676-a086234b408c-combined-ca-bundle\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"50627859-96f2-4a4c-9676-a086234b408c\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:56.827915 master-0 kubenswrapper[31456]: I0312 21:25:56.815134 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50627859-96f2-4a4c-9676-a086234b408c-logs\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"50627859-96f2-4a4c-9676-a086234b408c\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:56.827915 master-0 kubenswrapper[31456]: I0312 21:25:56.815155 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50627859-96f2-4a4c-9676-a086234b408c-config-data\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"50627859-96f2-4a4c-9676-a086234b408c\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:56.827915 master-0 kubenswrapper[31456]: I0312 21:25:56.815193 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50627859-96f2-4a4c-9676-a086234b408c-scripts\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"50627859-96f2-4a4c-9676-a086234b408c\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:56.827915 master-0 kubenswrapper[31456]: I0312 21:25:56.815307 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/50627859-96f2-4a4c-9676-a086234b408c-httpd-run\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"50627859-96f2-4a4c-9676-a086234b408c\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:56.827915 master-0 kubenswrapper[31456]: I0312 21:25:56.815330 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf6md\" (UniqueName: \"kubernetes.io/projected/50627859-96f2-4a4c-9676-a086234b408c-kube-api-access-nf6md\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"50627859-96f2-4a4c-9676-a086234b408c\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:56.918872 master-0 kubenswrapper[31456]: I0312 21:25:56.918500 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/50627859-96f2-4a4c-9676-a086234b408c-httpd-run\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"50627859-96f2-4a4c-9676-a086234b408c\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:56.918872 master-0 kubenswrapper[31456]: I0312 21:25:56.918637 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf6md\" (UniqueName: \"kubernetes.io/projected/50627859-96f2-4a4c-9676-a086234b408c-kube-api-access-nf6md\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"50627859-96f2-4a4c-9676-a086234b408c\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:56.919175 master-0 kubenswrapper[31456]: I0312 21:25:56.918944 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50627859-96f2-4a4c-9676-a086234b408c-combined-ca-bundle\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"50627859-96f2-4a4c-9676-a086234b408c\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:56.919175 master-0 kubenswrapper[31456]: I0312 21:25:56.918989 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50627859-96f2-4a4c-9676-a086234b408c-config-data\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"50627859-96f2-4a4c-9676-a086234b408c\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:56.919175 master-0 kubenswrapper[31456]: I0312 21:25:56.919004 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50627859-96f2-4a4c-9676-a086234b408c-logs\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"50627859-96f2-4a4c-9676-a086234b408c\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:56.919175 master-0 kubenswrapper[31456]: I0312 21:25:56.919058 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50627859-96f2-4a4c-9676-a086234b408c-scripts\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"50627859-96f2-4a4c-9676-a086234b408c\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:56.921855 master-0 kubenswrapper[31456]: I0312 21:25:56.920270 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/50627859-96f2-4a4c-9676-a086234b408c-httpd-run\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"50627859-96f2-4a4c-9676-a086234b408c\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:56.921855 master-0 kubenswrapper[31456]: I0312 21:25:56.920368 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50627859-96f2-4a4c-9676-a086234b408c-logs\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"50627859-96f2-4a4c-9676-a086234b408c\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:56.924749 master-0 kubenswrapper[31456]: I0312 21:25:56.924716 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50627859-96f2-4a4c-9676-a086234b408c-combined-ca-bundle\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"50627859-96f2-4a4c-9676-a086234b408c\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:56.926665 master-0 kubenswrapper[31456]: I0312 21:25:56.925588 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50627859-96f2-4a4c-9676-a086234b408c-scripts\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"50627859-96f2-4a4c-9676-a086234b408c\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:56.926665 master-0 kubenswrapper[31456]: I0312 21:25:56.926608 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50627859-96f2-4a4c-9676-a086234b408c-config-data\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"50627859-96f2-4a4c-9676-a086234b408c\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:56.946927 master-0 kubenswrapper[31456]: I0312 21:25:56.946837 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf6md\" (UniqueName: \"kubernetes.io/projected/50627859-96f2-4a4c-9676-a086234b408c-kube-api-access-nf6md\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"50627859-96f2-4a4c-9676-a086234b408c\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:25:57.092850 master-0 kubenswrapper[31456]: I0312 21:25:57.092734 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-31cc-account-create-update-pzkcd"
Mar 12 21:25:57.151943 master-0 kubenswrapper[31456]: I0312 21:25:57.151898 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-31cc-account-create-update-pzkcd"
Mar 12 21:25:57.152721 master-0 kubenswrapper[31456]: I0312 21:25:57.152074 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-31cc-account-create-update-pzkcd" event={"ID":"dd24a59e-fd16-4b56-acb2-3129dab7977a","Type":"ContainerDied","Data":"de32e6e5161021436908fd17efa0596a9267b26cfc251a9f1378f21d591b8390"}
Mar 12 21:25:57.152721 master-0 kubenswrapper[31456]: I0312 21:25:57.152097 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de32e6e5161021436908fd17efa0596a9267b26cfc251a9f1378f21d591b8390"
Mar 12 21:25:57.185784 master-0 kubenswrapper[31456]: I0312 21:25:57.185736 31456 kubelet_volumes.go:135] "Cleaned up orphaned volume from pod" podUID="db38f2fc-764f-46df-b914-096e168d8a8c" path="/var/lib/kubelet/pods/db38f2fc-764f-46df-b914-096e168d8a8c/volumes/kubernetes.io~csi/pvc-46263d7f-fe44-41c1-8ff4-0f4db04f556e/mount"
Mar 12 21:25:57.186619 master-0 kubenswrapper[31456]: E0312 21:25:57.185995 31456 kubelet_volumes.go:263] "There were many similar errors. Turn up verbosity to see them."
err="orphaned pod \"db38f2fc-764f-46df-b914-096e168d8a8c\" found, but error occurred when trying to remove the volumes dir: not a directory" numErrs=1 Mar 12 21:25:57.228876 master-0 kubenswrapper[31456]: I0312 21:25:57.228820 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd24a59e-fd16-4b56-acb2-3129dab7977a-operator-scripts\") pod \"dd24a59e-fd16-4b56-acb2-3129dab7977a\" (UID: \"dd24a59e-fd16-4b56-acb2-3129dab7977a\") " Mar 12 21:25:57.229367 master-0 kubenswrapper[31456]: I0312 21:25:57.229339 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sn7v\" (UniqueName: \"kubernetes.io/projected/dd24a59e-fd16-4b56-acb2-3129dab7977a-kube-api-access-2sn7v\") pod \"dd24a59e-fd16-4b56-acb2-3129dab7977a\" (UID: \"dd24a59e-fd16-4b56-acb2-3129dab7977a\") " Mar 12 21:25:57.229511 master-0 kubenswrapper[31456]: I0312 21:25:57.229456 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd24a59e-fd16-4b56-acb2-3129dab7977a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dd24a59e-fd16-4b56-acb2-3129dab7977a" (UID: "dd24a59e-fd16-4b56-acb2-3129dab7977a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:25:57.229978 master-0 kubenswrapper[31456]: I0312 21:25:57.229941 31456 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd24a59e-fd16-4b56-acb2-3129dab7977a-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:57.236183 master-0 kubenswrapper[31456]: I0312 21:25:57.236154 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd24a59e-fd16-4b56-acb2-3129dab7977a-kube-api-access-2sn7v" (OuterVolumeSpecName: "kube-api-access-2sn7v") pod "dd24a59e-fd16-4b56-acb2-3129dab7977a" (UID: "dd24a59e-fd16-4b56-acb2-3129dab7977a"). InnerVolumeSpecName "kube-api-access-2sn7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:25:57.331597 master-0 kubenswrapper[31456]: I0312 21:25:57.331547 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sn7v\" (UniqueName: \"kubernetes.io/projected/dd24a59e-fd16-4b56-acb2-3129dab7977a-kube-api-access-2sn7v\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:57.664188 master-0 kubenswrapper[31456]: E0312 21:25:57.664147 31456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/topolvm.io^4b842188-a8b2-4def-ad0e-7cbb4053b9e9 podName: nodeName:}" failed. No retries permitted until 2026-03-12 21:25:58.164124412 +0000 UTC m=+1019.238729740 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "pvc-46263d7f-fe44-41c1-8ff4-0f4db04f556e" (UniqueName: "kubernetes.io/csi/topolvm.io^4b842188-a8b2-4def-ad0e-7cbb4053b9e9") pod "glance-30e4b-default-internal-api-0" (UID: "db38f2fc-764f-46df-b914-096e168d8a8c") : rpc error: code = Internal desc = mount failed: volume=4b842188-a8b2-4def-ad0e-7cbb4053b9e9, error=mount failed: exit status 32 Mar 12 21:25:57.664188 master-0 kubenswrapper[31456]: Mounting command: mount Mar 12 21:25:57.664188 master-0 kubenswrapper[31456]: Mounting arguments: -t xfs -o nouuid,defaults /dev/local-storage/4b842188-a8b2-4def-ad0e-7cbb4053b9e9 /var/lib/kubelet/pods/db38f2fc-764f-46df-b914-096e168d8a8c/volumes/kubernetes.io~csi/pvc-46263d7f-fe44-41c1-8ff4-0f4db04f556e/mount Mar 12 21:25:57.664188 master-0 kubenswrapper[31456]: Output: mount: /var/lib/kubelet/pods/db38f2fc-764f-46df-b914-096e168d8a8c/volumes/kubernetes.io~csi/pvc-46263d7f-fe44-41c1-8ff4-0f4db04f556e/mount: mount point does not exist. Mar 12 21:25:57.704002 master-0 kubenswrapper[31456]: I0312 21:25:57.703964 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^c4d34f1a-cd35-4884-8afc-4ae2cb2c6555" (OuterVolumeSpecName: "glance") pod "89ee4fa9-5d55-4cfd-b512-8ca49d17a947" (UID: "89ee4fa9-5d55-4cfd-b512-8ca49d17a947"). InnerVolumeSpecName "pvc-771d56ec-6f7c-4891-8052-556577fed26a". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 21:25:57.757791 master-0 kubenswrapper[31456]: I0312 21:25:57.757687 31456 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-771d56ec-6f7c-4891-8052-556577fed26a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c4d34f1a-cd35-4884-8afc-4ae2cb2c6555\") on node \"master-0\" " Mar 12 21:25:57.792247 master-0 kubenswrapper[31456]: I0312 21:25:57.792188 31456 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 12 21:25:57.792445 master-0 kubenswrapper[31456]: I0312 21:25:57.792372 31456 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-771d56ec-6f7c-4891-8052-556577fed26a" (UniqueName: "kubernetes.io/csi/topolvm.io^c4d34f1a-cd35-4884-8afc-4ae2cb2c6555") on node "master-0" Mar 12 21:25:57.861095 master-0 kubenswrapper[31456]: I0312 21:25:57.861020 31456 reconciler_common.go:293] "Volume detached for volume \"pvc-771d56ec-6f7c-4891-8052-556577fed26a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c4d34f1a-cd35-4884-8afc-4ae2cb2c6555\") on node \"master-0\" DevicePath \"\"" Mar 12 21:25:58.107750 master-0 kubenswrapper[31456]: I0312 21:25:58.107679 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-30e4b-default-external-api-0"] Mar 12 21:25:58.128604 master-0 kubenswrapper[31456]: I0312 21:25:58.128420 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-30e4b-default-external-api-0"] Mar 12 21:25:58.141096 master-0 kubenswrapper[31456]: I0312 21:25:58.141016 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-30e4b-default-external-api-0"] Mar 12 21:25:58.141912 master-0 kubenswrapper[31456]: E0312 21:25:58.141849 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd24a59e-fd16-4b56-acb2-3129dab7977a" containerName="mariadb-account-create-update" Mar 12 21:25:58.141912 master-0 kubenswrapper[31456]: I0312 21:25:58.141875 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd24a59e-fd16-4b56-acb2-3129dab7977a" containerName="mariadb-account-create-update" Mar 12 21:25:58.142368 master-0 kubenswrapper[31456]: I0312 21:25:58.142337 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd24a59e-fd16-4b56-acb2-3129dab7977a" containerName="mariadb-account-create-update" Mar 12 21:25:58.145106 master-0 kubenswrapper[31456]: I0312 21:25:58.144441 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:58.150079 master-0 kubenswrapper[31456]: I0312 21:25:58.150024 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-30e4b-default-external-config-data" Mar 12 21:25:58.152027 master-0 kubenswrapper[31456]: I0312 21:25:58.151955 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-30e4b-default-external-api-0"] Mar 12 21:25:58.183142 master-0 kubenswrapper[31456]: I0312 21:25:58.183062 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-46263d7f-fe44-41c1-8ff4-0f4db04f556e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4b842188-a8b2-4def-ad0e-7cbb4053b9e9\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"50627859-96f2-4a4c-9676-a086234b408c\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:25:58.286549 master-0 kubenswrapper[31456]: I0312 21:25:58.285138 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b79438da-5595-4782-bbcb-e442d32bc206-scripts\") pod \"glance-30e4b-default-external-api-0\" (UID: \"b79438da-5595-4782-bbcb-e442d32bc206\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:58.286549 master-0 kubenswrapper[31456]: I0312 21:25:58.285279 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b79438da-5595-4782-bbcb-e442d32bc206-httpd-run\") pod \"glance-30e4b-default-external-api-0\" (UID: \"b79438da-5595-4782-bbcb-e442d32bc206\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:58.286549 master-0 kubenswrapper[31456]: I0312 21:25:58.285328 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b79438da-5595-4782-bbcb-e442d32bc206-combined-ca-bundle\") pod \"glance-30e4b-default-external-api-0\" (UID: \"b79438da-5595-4782-bbcb-e442d32bc206\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:58.286549 master-0 kubenswrapper[31456]: I0312 21:25:58.285550 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-771d56ec-6f7c-4891-8052-556577fed26a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c4d34f1a-cd35-4884-8afc-4ae2cb2c6555\") pod \"glance-30e4b-default-external-api-0\" (UID: \"b79438da-5595-4782-bbcb-e442d32bc206\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:58.286549 master-0 kubenswrapper[31456]: I0312 21:25:58.285974 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b79438da-5595-4782-bbcb-e442d32bc206-logs\") pod \"glance-30e4b-default-external-api-0\" (UID: \"b79438da-5595-4782-bbcb-e442d32bc206\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:58.287519 master-0 kubenswrapper[31456]: I0312 21:25:58.287495 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b79438da-5595-4782-bbcb-e442d32bc206-config-data\") pod \"glance-30e4b-default-external-api-0\" (UID: \"b79438da-5595-4782-bbcb-e442d32bc206\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:58.288051 master-0 kubenswrapper[31456]: I0312 21:25:58.287984 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt445\" (UniqueName: \"kubernetes.io/projected/b79438da-5595-4782-bbcb-e442d32bc206-kube-api-access-tt445\") pod \"glance-30e4b-default-external-api-0\" (UID: \"b79438da-5595-4782-bbcb-e442d32bc206\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:58.391084 
master-0 kubenswrapper[31456]: I0312 21:25:58.390959 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b79438da-5595-4782-bbcb-e442d32bc206-scripts\") pod \"glance-30e4b-default-external-api-0\" (UID: \"b79438da-5595-4782-bbcb-e442d32bc206\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:58.392080 master-0 kubenswrapper[31456]: I0312 21:25:58.392054 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b79438da-5595-4782-bbcb-e442d32bc206-httpd-run\") pod \"glance-30e4b-default-external-api-0\" (UID: \"b79438da-5595-4782-bbcb-e442d32bc206\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:58.392245 master-0 kubenswrapper[31456]: I0312 21:25:58.392225 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b79438da-5595-4782-bbcb-e442d32bc206-combined-ca-bundle\") pod \"glance-30e4b-default-external-api-0\" (UID: \"b79438da-5595-4782-bbcb-e442d32bc206\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:58.399627 master-0 kubenswrapper[31456]: I0312 21:25:58.399487 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-771d56ec-6f7c-4891-8052-556577fed26a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c4d34f1a-cd35-4884-8afc-4ae2cb2c6555\") pod \"glance-30e4b-default-external-api-0\" (UID: \"b79438da-5595-4782-bbcb-e442d32bc206\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:58.399993 master-0 kubenswrapper[31456]: I0312 21:25:58.399973 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b79438da-5595-4782-bbcb-e442d32bc206-logs\") pod \"glance-30e4b-default-external-api-0\" (UID: \"b79438da-5595-4782-bbcb-e442d32bc206\") " 
pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:58.400221 master-0 kubenswrapper[31456]: I0312 21:25:58.400199 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b79438da-5595-4782-bbcb-e442d32bc206-config-data\") pod \"glance-30e4b-default-external-api-0\" (UID: \"b79438da-5595-4782-bbcb-e442d32bc206\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:58.400386 master-0 kubenswrapper[31456]: I0312 21:25:58.400367 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt445\" (UniqueName: \"kubernetes.io/projected/b79438da-5595-4782-bbcb-e442d32bc206-kube-api-access-tt445\") pod \"glance-30e4b-default-external-api-0\" (UID: \"b79438da-5595-4782-bbcb-e442d32bc206\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:58.401153 master-0 kubenswrapper[31456]: I0312 21:25:58.392644 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b79438da-5595-4782-bbcb-e442d32bc206-httpd-run\") pod \"glance-30e4b-default-external-api-0\" (UID: \"b79438da-5595-4782-bbcb-e442d32bc206\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:58.401319 master-0 kubenswrapper[31456]: I0312 21:25:58.399148 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b79438da-5595-4782-bbcb-e442d32bc206-scripts\") pod \"glance-30e4b-default-external-api-0\" (UID: \"b79438da-5595-4782-bbcb-e442d32bc206\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:58.403375 master-0 kubenswrapper[31456]: I0312 21:25:58.401378 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b79438da-5595-4782-bbcb-e442d32bc206-logs\") pod \"glance-30e4b-default-external-api-0\" (UID: 
\"b79438da-5595-4782-bbcb-e442d32bc206\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:58.403375 master-0 kubenswrapper[31456]: I0312 21:25:58.401744 31456 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 21:25:58.403375 master-0 kubenswrapper[31456]: I0312 21:25:58.402685 31456 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-771d56ec-6f7c-4891-8052-556577fed26a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c4d34f1a-cd35-4884-8afc-4ae2cb2c6555\") pod \"glance-30e4b-default-external-api-0\" (UID: \"b79438da-5595-4782-bbcb-e442d32bc206\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/43685901e29eb1cf6142e4c7db2bf2a74bc59e8789b390024af9a8010a27963c/globalmount\"" pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:58.409835 master-0 kubenswrapper[31456]: I0312 21:25:58.409739 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b79438da-5595-4782-bbcb-e442d32bc206-config-data\") pod \"glance-30e4b-default-external-api-0\" (UID: \"b79438da-5595-4782-bbcb-e442d32bc206\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:58.412762 master-0 kubenswrapper[31456]: I0312 21:25:58.412615 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b79438da-5595-4782-bbcb-e442d32bc206-combined-ca-bundle\") pod \"glance-30e4b-default-external-api-0\" (UID: \"b79438da-5595-4782-bbcb-e442d32bc206\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:58.433492 master-0 kubenswrapper[31456]: I0312 21:25:58.433429 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt445\" (UniqueName: \"kubernetes.io/projected/b79438da-5595-4782-bbcb-e442d32bc206-kube-api-access-tt445\") pod 
\"glance-30e4b-default-external-api-0\" (UID: \"b79438da-5595-4782-bbcb-e442d32bc206\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:59.037858 master-0 kubenswrapper[31456]: I0312 21:25:59.037768 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-46263d7f-fe44-41c1-8ff4-0f4db04f556e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4b842188-a8b2-4def-ad0e-7cbb4053b9e9\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"50627859-96f2-4a4c-9676-a086234b408c\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:25:59.146891 master-0 kubenswrapper[31456]: I0312 21:25:59.142532 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:25:59.204755 master-0 kubenswrapper[31456]: I0312 21:25:59.204692 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89ee4fa9-5d55-4cfd-b512-8ca49d17a947" path="/var/lib/kubelet/pods/89ee4fa9-5d55-4cfd-b512-8ca49d17a947/volumes" Mar 12 21:25:59.205491 master-0 kubenswrapper[31456]: I0312 21:25:59.205113 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db38f2fc-764f-46df-b914-096e168d8a8c" path="/var/lib/kubelet/pods/db38f2fc-764f-46df-b914-096e168d8a8c/volumes" Mar 12 21:25:59.882306 master-0 kubenswrapper[31456]: I0312 21:25:59.881464 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-771d56ec-6f7c-4891-8052-556577fed26a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c4d34f1a-cd35-4884-8afc-4ae2cb2c6555\") pod \"glance-30e4b-default-external-api-0\" (UID: \"b79438da-5595-4782-bbcb-e442d32bc206\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:25:59.996860 master-0 kubenswrapper[31456]: I0312 21:25:59.996655 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:26:01.246335 master-0 kubenswrapper[31456]: I0312 21:26:01.244243 31456 generic.go:334] "Generic (PLEG): container finished" podID="7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5" containerID="43908fb2f48712b220851bfeca566a58603e81c2cc16fc84de8b762f83d42080" exitCode=0 Mar 12 21:26:01.246335 master-0 kubenswrapper[31456]: I0312 21:26:01.244289 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6p46b" event={"ID":"7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5","Type":"ContainerDied","Data":"43908fb2f48712b220851bfeca566a58603e81c2cc16fc84de8b762f83d42080"} Mar 12 21:26:01.249080 master-0 kubenswrapper[31456]: I0312 21:26:01.248846 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-stkxt" event={"ID":"b466beef-2d58-41e2-b8cf-8090ab10be4e","Type":"ContainerStarted","Data":"d3d4b81cfbe9c52aa0675e310f7bde029057b19974df5962fd9d18510851c37f"} Mar 12 21:26:01.291585 master-0 kubenswrapper[31456]: I0312 21:26:01.291491 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-stkxt" podStartSLOduration=3.241191197 podStartE2EDuration="10.291472025s" podCreationTimestamp="2026-03-12 21:25:51 +0000 UTC" firstStartedPulling="2026-03-12 21:25:53.800108742 +0000 UTC m=+1014.874714070" lastFinishedPulling="2026-03-12 21:26:00.85038957 +0000 UTC m=+1021.924994898" observedRunningTime="2026-03-12 21:26:01.28959888 +0000 UTC m=+1022.364204208" watchObservedRunningTime="2026-03-12 21:26:01.291472025 +0000 UTC m=+1022.366077353" Mar 12 21:26:01.373141 master-0 kubenswrapper[31456]: I0312 21:26:01.373057 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-30e4b-default-external-api-0"] Mar 12 21:26:01.436787 master-0 kubenswrapper[31456]: W0312 21:26:01.435441 31456 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50627859_96f2_4a4c_9676_a086234b408c.slice/crio-5bb12fca68de7859566d6e5179f43211d2394dc88f1b585b5af67a27269d7920 WatchSource:0}: Error finding container 5bb12fca68de7859566d6e5179f43211d2394dc88f1b585b5af67a27269d7920: Status 404 returned error can't find the container with id 5bb12fca68de7859566d6e5179f43211d2394dc88f1b585b5af67a27269d7920 Mar 12 21:26:01.464919 master-0 kubenswrapper[31456]: I0312 21:26:01.461313 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-30e4b-default-internal-api-0"] Mar 12 21:26:01.497987 master-0 kubenswrapper[31456]: I0312 21:26:01.497778 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-30e4b-default-internal-api-0"] Mar 12 21:26:01.592837 master-0 kubenswrapper[31456]: W0312 21:26:01.590675 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb79438da_5595_4782_bbcb_e442d32bc206.slice/crio-93df963aa4af1c7be913acc37010c050bb4f6576f151f9cacbfc2b2438117225 WatchSource:0}: Error finding container 93df963aa4af1c7be913acc37010c050bb4f6576f151f9cacbfc2b2438117225: Status 404 returned error can't find the container with id 93df963aa4af1c7be913acc37010c050bb4f6576f151f9cacbfc2b2438117225 Mar 12 21:26:01.600839 master-0 kubenswrapper[31456]: I0312 21:26:01.600356 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-30e4b-default-external-api-0"] Mar 12 21:26:01.928846 master-0 kubenswrapper[31456]: I0312 21:26:01.927638 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-sync-cf2v5"] Mar 12 21:26:01.954076 master-0 kubenswrapper[31456]: I0312 21:26:01.954001 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-cf2v5"] Mar 12 21:26:01.954200 master-0 kubenswrapper[31456]: I0312 21:26:01.954157 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-sync-cf2v5" Mar 12 21:26:01.958327 master-0 kubenswrapper[31456]: I0312 21:26:01.958253 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-scripts" Mar 12 21:26:01.958573 master-0 kubenswrapper[31456]: I0312 21:26:01.958543 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Mar 12 21:26:02.114844 master-0 kubenswrapper[31456]: I0312 21:26:02.114759 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64b63a16-1c32-45a8-92f8-8ce00c2c6be8-scripts\") pod \"ironic-db-sync-cf2v5\" (UID: \"64b63a16-1c32-45a8-92f8-8ce00c2c6be8\") " pod="openstack/ironic-db-sync-cf2v5" Mar 12 21:26:02.114923 master-0 kubenswrapper[31456]: I0312 21:26:02.114871 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64b63a16-1c32-45a8-92f8-8ce00c2c6be8-config-data\") pod \"ironic-db-sync-cf2v5\" (UID: \"64b63a16-1c32-45a8-92f8-8ce00c2c6be8\") " pod="openstack/ironic-db-sync-cf2v5" Mar 12 21:26:02.114964 master-0 kubenswrapper[31456]: I0312 21:26:02.114928 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64b63a16-1c32-45a8-92f8-8ce00c2c6be8-combined-ca-bundle\") pod \"ironic-db-sync-cf2v5\" (UID: \"64b63a16-1c32-45a8-92f8-8ce00c2c6be8\") " pod="openstack/ironic-db-sync-cf2v5" Mar 12 21:26:02.115669 master-0 kubenswrapper[31456]: I0312 21:26:02.115596 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf5mf\" (UniqueName: \"kubernetes.io/projected/64b63a16-1c32-45a8-92f8-8ce00c2c6be8-kube-api-access-cf5mf\") pod \"ironic-db-sync-cf2v5\" (UID: \"64b63a16-1c32-45a8-92f8-8ce00c2c6be8\") " 
pod="openstack/ironic-db-sync-cf2v5" Mar 12 21:26:02.115724 master-0 kubenswrapper[31456]: I0312 21:26:02.115678 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/64b63a16-1c32-45a8-92f8-8ce00c2c6be8-etc-podinfo\") pod \"ironic-db-sync-cf2v5\" (UID: \"64b63a16-1c32-45a8-92f8-8ce00c2c6be8\") " pod="openstack/ironic-db-sync-cf2v5" Mar 12 21:26:02.115790 master-0 kubenswrapper[31456]: I0312 21:26:02.115766 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/64b63a16-1c32-45a8-92f8-8ce00c2c6be8-config-data-merged\") pod \"ironic-db-sync-cf2v5\" (UID: \"64b63a16-1c32-45a8-92f8-8ce00c2c6be8\") " pod="openstack/ironic-db-sync-cf2v5" Mar 12 21:26:02.218314 master-0 kubenswrapper[31456]: I0312 21:26:02.218126 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64b63a16-1c32-45a8-92f8-8ce00c2c6be8-scripts\") pod \"ironic-db-sync-cf2v5\" (UID: \"64b63a16-1c32-45a8-92f8-8ce00c2c6be8\") " pod="openstack/ironic-db-sync-cf2v5" Mar 12 21:26:02.218314 master-0 kubenswrapper[31456]: I0312 21:26:02.218196 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64b63a16-1c32-45a8-92f8-8ce00c2c6be8-config-data\") pod \"ironic-db-sync-cf2v5\" (UID: \"64b63a16-1c32-45a8-92f8-8ce00c2c6be8\") " pod="openstack/ironic-db-sync-cf2v5" Mar 12 21:26:02.218787 master-0 kubenswrapper[31456]: I0312 21:26:02.218733 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64b63a16-1c32-45a8-92f8-8ce00c2c6be8-combined-ca-bundle\") pod \"ironic-db-sync-cf2v5\" (UID: \"64b63a16-1c32-45a8-92f8-8ce00c2c6be8\") " pod="openstack/ironic-db-sync-cf2v5" Mar 
12 21:26:02.219702 master-0 kubenswrapper[31456]: I0312 21:26:02.219647 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf5mf\" (UniqueName: \"kubernetes.io/projected/64b63a16-1c32-45a8-92f8-8ce00c2c6be8-kube-api-access-cf5mf\") pod \"ironic-db-sync-cf2v5\" (UID: \"64b63a16-1c32-45a8-92f8-8ce00c2c6be8\") " pod="openstack/ironic-db-sync-cf2v5"
Mar 12 21:26:02.220282 master-0 kubenswrapper[31456]: I0312 21:26:02.220092 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/64b63a16-1c32-45a8-92f8-8ce00c2c6be8-etc-podinfo\") pod \"ironic-db-sync-cf2v5\" (UID: \"64b63a16-1c32-45a8-92f8-8ce00c2c6be8\") " pod="openstack/ironic-db-sync-cf2v5"
Mar 12 21:26:02.220347 master-0 kubenswrapper[31456]: I0312 21:26:02.220266 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/64b63a16-1c32-45a8-92f8-8ce00c2c6be8-config-data-merged\") pod \"ironic-db-sync-cf2v5\" (UID: \"64b63a16-1c32-45a8-92f8-8ce00c2c6be8\") " pod="openstack/ironic-db-sync-cf2v5"
Mar 12 21:26:02.220777 master-0 kubenswrapper[31456]: I0312 21:26:02.220742 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/64b63a16-1c32-45a8-92f8-8ce00c2c6be8-config-data-merged\") pod \"ironic-db-sync-cf2v5\" (UID: \"64b63a16-1c32-45a8-92f8-8ce00c2c6be8\") " pod="openstack/ironic-db-sync-cf2v5"
Mar 12 21:26:02.240567 master-0 kubenswrapper[31456]: I0312 21:26:02.240513 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64b63a16-1c32-45a8-92f8-8ce00c2c6be8-scripts\") pod \"ironic-db-sync-cf2v5\" (UID: \"64b63a16-1c32-45a8-92f8-8ce00c2c6be8\") " pod="openstack/ironic-db-sync-cf2v5"
Mar 12 21:26:02.243766 master-0 kubenswrapper[31456]: I0312 21:26:02.243721 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf5mf\" (UniqueName: \"kubernetes.io/projected/64b63a16-1c32-45a8-92f8-8ce00c2c6be8-kube-api-access-cf5mf\") pod \"ironic-db-sync-cf2v5\" (UID: \"64b63a16-1c32-45a8-92f8-8ce00c2c6be8\") " pod="openstack/ironic-db-sync-cf2v5"
Mar 12 21:26:02.246783 master-0 kubenswrapper[31456]: I0312 21:26:02.244904 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/64b63a16-1c32-45a8-92f8-8ce00c2c6be8-etc-podinfo\") pod \"ironic-db-sync-cf2v5\" (UID: \"64b63a16-1c32-45a8-92f8-8ce00c2c6be8\") " pod="openstack/ironic-db-sync-cf2v5"
Mar 12 21:26:02.255212 master-0 kubenswrapper[31456]: I0312 21:26:02.247190 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64b63a16-1c32-45a8-92f8-8ce00c2c6be8-config-data\") pod \"ironic-db-sync-cf2v5\" (UID: \"64b63a16-1c32-45a8-92f8-8ce00c2c6be8\") " pod="openstack/ironic-db-sync-cf2v5"
Mar 12 21:26:02.255212 master-0 kubenswrapper[31456]: I0312 21:26:02.254086 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64b63a16-1c32-45a8-92f8-8ce00c2c6be8-combined-ca-bundle\") pod \"ironic-db-sync-cf2v5\" (UID: \"64b63a16-1c32-45a8-92f8-8ce00c2c6be8\") " pod="openstack/ironic-db-sync-cf2v5"
Mar 12 21:26:02.272105 master-0 kubenswrapper[31456]: I0312 21:26:02.270605 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-30e4b-default-internal-api-0" event={"ID":"50627859-96f2-4a4c-9676-a086234b408c","Type":"ContainerStarted","Data":"be7e4ad00212c0ceea60ed00397554964a365c9ca0c1770d07c0207a85444a89"}
Mar 12 21:26:02.272105 master-0 kubenswrapper[31456]: I0312 21:26:02.270660 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-30e4b-default-internal-api-0" event={"ID":"50627859-96f2-4a4c-9676-a086234b408c","Type":"ContainerStarted","Data":"5bb12fca68de7859566d6e5179f43211d2394dc88f1b585b5af67a27269d7920"}
Mar 12 21:26:02.273850 master-0 kubenswrapper[31456]: I0312 21:26:02.273632 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-30e4b-default-external-api-0" event={"ID":"b79438da-5595-4782-bbcb-e442d32bc206","Type":"ContainerStarted","Data":"93df963aa4af1c7be913acc37010c050bb4f6576f151f9cacbfc2b2438117225"}
Mar 12 21:26:02.304471 master-0 kubenswrapper[31456]: I0312 21:26:02.304407 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-cf2v5"
Mar 12 21:26:02.731423 master-0 kubenswrapper[31456]: I0312 21:26:02.731354 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6p46b"
Mar 12 21:26:02.845215 master-0 kubenswrapper[31456]: I0312 21:26:02.843354 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxk2x\" (UniqueName: \"kubernetes.io/projected/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5-kube-api-access-gxk2x\") pod \"7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5\" (UID: \"7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5\") "
Mar 12 21:26:02.845215 master-0 kubenswrapper[31456]: I0312 21:26:02.843741 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5-fernet-keys\") pod \"7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5\" (UID: \"7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5\") "
Mar 12 21:26:02.845215 master-0 kubenswrapper[31456]: I0312 21:26:02.843986 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5-scripts\") pod \"7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5\" (UID: \"7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5\") "
Mar 12 21:26:02.845215 master-0 kubenswrapper[31456]: I0312 21:26:02.844038 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5-combined-ca-bundle\") pod \"7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5\" (UID: \"7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5\") "
Mar 12 21:26:02.845215 master-0 kubenswrapper[31456]: I0312 21:26:02.844162 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5-credential-keys\") pod \"7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5\" (UID: \"7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5\") "
Mar 12 21:26:02.845215 master-0 kubenswrapper[31456]: I0312 21:26:02.844204 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5-config-data\") pod \"7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5\" (UID: \"7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5\") "
Mar 12 21:26:02.851019 master-0 kubenswrapper[31456]: I0312 21:26:02.850940 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5" (UID: "7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:26:02.852758 master-0 kubenswrapper[31456]: I0312 21:26:02.852603 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5-scripts" (OuterVolumeSpecName: "scripts") pod "7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5" (UID: "7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:26:02.854787 master-0 kubenswrapper[31456]: I0312 21:26:02.854730 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5-kube-api-access-gxk2x" (OuterVolumeSpecName: "kube-api-access-gxk2x") pod "7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5" (UID: "7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5"). InnerVolumeSpecName "kube-api-access-gxk2x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 21:26:02.858416 master-0 kubenswrapper[31456]: I0312 21:26:02.858318 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5" (UID: "7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:26:02.873894 master-0 kubenswrapper[31456]: I0312 21:26:02.873202 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5" (UID: "7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:26:02.894964 master-0 kubenswrapper[31456]: I0312 21:26:02.894892 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5-config-data" (OuterVolumeSpecName: "config-data") pod "7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5" (UID: "7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:26:02.947679 master-0 kubenswrapper[31456]: I0312 21:26:02.947627 31456 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:02.948078 master-0 kubenswrapper[31456]: I0312 21:26:02.948054 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:02.948422 master-0 kubenswrapper[31456]: I0312 21:26:02.948409 31456 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5-credential-keys\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:02.948627 master-0 kubenswrapper[31456]: I0312 21:26:02.948555 31456 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5-config-data\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:02.948973 master-0 kubenswrapper[31456]: I0312 21:26:02.948959 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxk2x\" (UniqueName: \"kubernetes.io/projected/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5-kube-api-access-gxk2x\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:02.949056 master-0 kubenswrapper[31456]: I0312 21:26:02.949045 31456 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5-fernet-keys\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:02.960362 master-0 kubenswrapper[31456]: I0312 21:26:02.960302 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-cf2v5"]
Mar 12 21:26:02.965709 master-0 kubenswrapper[31456]: W0312 21:26:02.962559 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64b63a16_1c32_45a8_92f8_8ce00c2c6be8.slice/crio-66d13ea081c5deead09abb4d9389a8705082d23c82aa7fe9fd83539d181ca424 WatchSource:0}: Error finding container 66d13ea081c5deead09abb4d9389a8705082d23c82aa7fe9fd83539d181ca424: Status 404 returned error can't find the container with id 66d13ea081c5deead09abb4d9389a8705082d23c82aa7fe9fd83539d181ca424
Mar 12 21:26:03.125087 master-0 kubenswrapper[31456]: I0312 21:26:03.125048 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fb965499f-tgbww"
Mar 12 21:26:03.237071 master-0 kubenswrapper[31456]: I0312 21:26:03.235265 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d5484f4d7-grz9n"]
Mar 12 21:26:03.237071 master-0 kubenswrapper[31456]: I0312 21:26:03.235516 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d5484f4d7-grz9n" podUID="2b554cc7-1556-47ef-8167-8661aa141e10" containerName="dnsmasq-dns" containerID="cri-o://0020efdbee720d4f1b99b182496c37daf188c0f949c7cc537f557803d3b3a7e4" gracePeriod=10
Mar 12 21:26:03.301316 master-0 kubenswrapper[31456]: I0312 21:26:03.301258 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-30e4b-default-external-api-0" event={"ID":"b79438da-5595-4782-bbcb-e442d32bc206","Type":"ContainerStarted","Data":"1e72ea6cffe3cc757626386c1a5fe551ccae84da6ed166aeed3ce42895f92523"}
Mar 12 21:26:03.304409 master-0 kubenswrapper[31456]: I0312 21:26:03.304389 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6p46b" event={"ID":"7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5","Type":"ContainerDied","Data":"487f463d99d9e4b11c45f9fb3ea4f09a66e08ed05017b7b7f1cd8fecdfca52c9"}
Mar 12 21:26:03.304508 master-0 kubenswrapper[31456]: I0312 21:26:03.304495 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="487f463d99d9e4b11c45f9fb3ea4f09a66e08ed05017b7b7f1cd8fecdfca52c9"
Mar 12 21:26:03.304607 master-0 kubenswrapper[31456]: I0312 21:26:03.304447 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6p46b"
Mar 12 21:26:03.306134 master-0 kubenswrapper[31456]: I0312 21:26:03.306082 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-cf2v5" event={"ID":"64b63a16-1c32-45a8-92f8-8ce00c2c6be8","Type":"ContainerStarted","Data":"66d13ea081c5deead09abb4d9389a8705082d23c82aa7fe9fd83539d181ca424"}
Mar 12 21:26:03.339363 master-0 kubenswrapper[31456]: I0312 21:26:03.331835 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-30e4b-default-internal-api-0" event={"ID":"50627859-96f2-4a4c-9676-a086234b408c","Type":"ContainerStarted","Data":"772c1c628b36f1d2dd066e89d41ba48a9f33494f7ec976d6d90a28804d43bce2"}
Mar 12 21:26:03.339363 master-0 kubenswrapper[31456]: I0312 21:26:03.331994 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-30e4b-default-internal-api-0" podUID="50627859-96f2-4a4c-9676-a086234b408c" containerName="glance-log" containerID="cri-o://be7e4ad00212c0ceea60ed00397554964a365c9ca0c1770d07c0207a85444a89" gracePeriod=30
Mar 12 21:26:03.339363 master-0 kubenswrapper[31456]: I0312 21:26:03.332124 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-30e4b-default-internal-api-0" podUID="50627859-96f2-4a4c-9676-a086234b408c" containerName="glance-httpd" containerID="cri-o://772c1c628b36f1d2dd066e89d41ba48a9f33494f7ec976d6d90a28804d43bce2" gracePeriod=30
Mar 12 21:26:03.367408 master-0 kubenswrapper[31456]: I0312 21:26:03.367317 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-30e4b-default-internal-api-0" podStartSLOduration=7.367298317 podStartE2EDuration="7.367298317s" podCreationTimestamp="2026-03-12 21:25:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:26:03.3633263 +0000 UTC m=+1024.437931628" watchObservedRunningTime="2026-03-12 21:26:03.367298317 +0000 UTC m=+1024.441903645"
Mar 12 21:26:03.461261 master-0 kubenswrapper[31456]: I0312 21:26:03.461120 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-6p46b"]
Mar 12 21:26:03.470934 master-0 kubenswrapper[31456]: I0312 21:26:03.470863 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-6p46b"]
Mar 12 21:26:03.600893 master-0 kubenswrapper[31456]: I0312 21:26:03.600847 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-sdtkg"]
Mar 12 21:26:03.602650 master-0 kubenswrapper[31456]: E0312 21:26:03.601681 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5" containerName="keystone-bootstrap"
Mar 12 21:26:03.602650 master-0 kubenswrapper[31456]: I0312 21:26:03.601705 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5" containerName="keystone-bootstrap"
Mar 12 21:26:03.602650 master-0 kubenswrapper[31456]: I0312 21:26:03.602008 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5" containerName="keystone-bootstrap"
Mar 12 21:26:03.602825 master-0 kubenswrapper[31456]: I0312 21:26:03.602734 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-sdtkg"
Mar 12 21:26:03.607298 master-0 kubenswrapper[31456]: I0312 21:26:03.606900 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Mar 12 21:26:03.607298 master-0 kubenswrapper[31456]: I0312 21:26:03.607093 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Mar 12 21:26:03.607298 master-0 kubenswrapper[31456]: I0312 21:26:03.607202 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Mar 12 21:26:03.653944 master-0 kubenswrapper[31456]: I0312 21:26:03.646443 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-sdtkg"]
Mar 12 21:26:03.682213 master-0 kubenswrapper[31456]: I0312 21:26:03.682067 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3b65a8f-9787-4ff9-91bc-a35ef39781ce-scripts\") pod \"keystone-bootstrap-sdtkg\" (UID: \"d3b65a8f-9787-4ff9-91bc-a35ef39781ce\") " pod="openstack/keystone-bootstrap-sdtkg"
Mar 12 21:26:03.682213 master-0 kubenswrapper[31456]: I0312 21:26:03.682158 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdc5x\" (UniqueName: \"kubernetes.io/projected/d3b65a8f-9787-4ff9-91bc-a35ef39781ce-kube-api-access-qdc5x\") pod \"keystone-bootstrap-sdtkg\" (UID: \"d3b65a8f-9787-4ff9-91bc-a35ef39781ce\") " pod="openstack/keystone-bootstrap-sdtkg"
Mar 12 21:26:03.682213 master-0 kubenswrapper[31456]: I0312 21:26:03.682229 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d3b65a8f-9787-4ff9-91bc-a35ef39781ce-credential-keys\") pod \"keystone-bootstrap-sdtkg\" (UID: \"d3b65a8f-9787-4ff9-91bc-a35ef39781ce\") " pod="openstack/keystone-bootstrap-sdtkg"
Mar 12 21:26:03.682213 master-0 kubenswrapper[31456]: I0312 21:26:03.682251 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3b65a8f-9787-4ff9-91bc-a35ef39781ce-combined-ca-bundle\") pod \"keystone-bootstrap-sdtkg\" (UID: \"d3b65a8f-9787-4ff9-91bc-a35ef39781ce\") " pod="openstack/keystone-bootstrap-sdtkg"
Mar 12 21:26:03.682213 master-0 kubenswrapper[31456]: I0312 21:26:03.682356 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d3b65a8f-9787-4ff9-91bc-a35ef39781ce-fernet-keys\") pod \"keystone-bootstrap-sdtkg\" (UID: \"d3b65a8f-9787-4ff9-91bc-a35ef39781ce\") " pod="openstack/keystone-bootstrap-sdtkg"
Mar 12 21:26:03.682213 master-0 kubenswrapper[31456]: I0312 21:26:03.683064 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3b65a8f-9787-4ff9-91bc-a35ef39781ce-config-data\") pod \"keystone-bootstrap-sdtkg\" (UID: \"d3b65a8f-9787-4ff9-91bc-a35ef39781ce\") " pod="openstack/keystone-bootstrap-sdtkg"
Mar 12 21:26:03.786275 master-0 kubenswrapper[31456]: I0312 21:26:03.786209 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d3b65a8f-9787-4ff9-91bc-a35ef39781ce-fernet-keys\") pod \"keystone-bootstrap-sdtkg\" (UID: \"d3b65a8f-9787-4ff9-91bc-a35ef39781ce\") " pod="openstack/keystone-bootstrap-sdtkg"
Mar 12 21:26:03.786596 master-0 kubenswrapper[31456]: I0312 21:26:03.786281 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3b65a8f-9787-4ff9-91bc-a35ef39781ce-config-data\") pod \"keystone-bootstrap-sdtkg\" (UID: \"d3b65a8f-9787-4ff9-91bc-a35ef39781ce\") " pod="openstack/keystone-bootstrap-sdtkg"
Mar 12 21:26:03.786596 master-0 kubenswrapper[31456]: I0312 21:26:03.786351 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3b65a8f-9787-4ff9-91bc-a35ef39781ce-scripts\") pod \"keystone-bootstrap-sdtkg\" (UID: \"d3b65a8f-9787-4ff9-91bc-a35ef39781ce\") " pod="openstack/keystone-bootstrap-sdtkg"
Mar 12 21:26:03.786596 master-0 kubenswrapper[31456]: I0312 21:26:03.786383 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdc5x\" (UniqueName: \"kubernetes.io/projected/d3b65a8f-9787-4ff9-91bc-a35ef39781ce-kube-api-access-qdc5x\") pod \"keystone-bootstrap-sdtkg\" (UID: \"d3b65a8f-9787-4ff9-91bc-a35ef39781ce\") " pod="openstack/keystone-bootstrap-sdtkg"
Mar 12 21:26:03.786596 master-0 kubenswrapper[31456]: I0312 21:26:03.786420 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d3b65a8f-9787-4ff9-91bc-a35ef39781ce-credential-keys\") pod \"keystone-bootstrap-sdtkg\" (UID: \"d3b65a8f-9787-4ff9-91bc-a35ef39781ce\") " pod="openstack/keystone-bootstrap-sdtkg"
Mar 12 21:26:03.786596 master-0 kubenswrapper[31456]: I0312 21:26:03.786439 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3b65a8f-9787-4ff9-91bc-a35ef39781ce-combined-ca-bundle\") pod \"keystone-bootstrap-sdtkg\" (UID: \"d3b65a8f-9787-4ff9-91bc-a35ef39781ce\") " pod="openstack/keystone-bootstrap-sdtkg"
Mar 12 21:26:03.794633 master-0 kubenswrapper[31456]: I0312 21:26:03.790938 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3b65a8f-9787-4ff9-91bc-a35ef39781ce-combined-ca-bundle\") pod \"keystone-bootstrap-sdtkg\" (UID: \"d3b65a8f-9787-4ff9-91bc-a35ef39781ce\") " pod="openstack/keystone-bootstrap-sdtkg"
Mar 12 21:26:03.799031 master-0 kubenswrapper[31456]: I0312 21:26:03.797602 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d3b65a8f-9787-4ff9-91bc-a35ef39781ce-fernet-keys\") pod \"keystone-bootstrap-sdtkg\" (UID: \"d3b65a8f-9787-4ff9-91bc-a35ef39781ce\") " pod="openstack/keystone-bootstrap-sdtkg"
Mar 12 21:26:03.802436 master-0 kubenswrapper[31456]: I0312 21:26:03.802215 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3b65a8f-9787-4ff9-91bc-a35ef39781ce-scripts\") pod \"keystone-bootstrap-sdtkg\" (UID: \"d3b65a8f-9787-4ff9-91bc-a35ef39781ce\") " pod="openstack/keystone-bootstrap-sdtkg"
Mar 12 21:26:03.817442 master-0 kubenswrapper[31456]: I0312 21:26:03.817394 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3b65a8f-9787-4ff9-91bc-a35ef39781ce-config-data\") pod \"keystone-bootstrap-sdtkg\" (UID: \"d3b65a8f-9787-4ff9-91bc-a35ef39781ce\") " pod="openstack/keystone-bootstrap-sdtkg"
Mar 12 21:26:03.827097 master-0 kubenswrapper[31456]: I0312 21:26:03.827040 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d3b65a8f-9787-4ff9-91bc-a35ef39781ce-credential-keys\") pod \"keystone-bootstrap-sdtkg\" (UID: \"d3b65a8f-9787-4ff9-91bc-a35ef39781ce\") " pod="openstack/keystone-bootstrap-sdtkg"
Mar 12 21:26:03.863938 master-0 kubenswrapper[31456]: I0312 21:26:03.857281 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdc5x\" (UniqueName: \"kubernetes.io/projected/d3b65a8f-9787-4ff9-91bc-a35ef39781ce-kube-api-access-qdc5x\") pod \"keystone-bootstrap-sdtkg\" (UID: \"d3b65a8f-9787-4ff9-91bc-a35ef39781ce\") " pod="openstack/keystone-bootstrap-sdtkg"
Mar 12 21:26:03.942912 master-0 kubenswrapper[31456]: I0312 21:26:03.936643 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-sdtkg"
Mar 12 21:26:04.071341 master-0 kubenswrapper[31456]: I0312 21:26:04.064224 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d5484f4d7-grz9n"
Mar 12 21:26:04.197162 master-0 kubenswrapper[31456]: I0312 21:26:04.197104 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:26:04.214258 master-0 kubenswrapper[31456]: I0312 21:26:04.214195 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjvfm\" (UniqueName: \"kubernetes.io/projected/2b554cc7-1556-47ef-8167-8661aa141e10-kube-api-access-sjvfm\") pod \"2b554cc7-1556-47ef-8167-8661aa141e10\" (UID: \"2b554cc7-1556-47ef-8167-8661aa141e10\") "
Mar 12 21:26:04.214470 master-0 kubenswrapper[31456]: I0312 21:26:04.214398 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b554cc7-1556-47ef-8167-8661aa141e10-ovsdbserver-sb\") pod \"2b554cc7-1556-47ef-8167-8661aa141e10\" (UID: \"2b554cc7-1556-47ef-8167-8661aa141e10\") "
Mar 12 21:26:04.214470 master-0 kubenswrapper[31456]: I0312 21:26:04.214445 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b554cc7-1556-47ef-8167-8661aa141e10-ovsdbserver-nb\") pod \"2b554cc7-1556-47ef-8167-8661aa141e10\" (UID: \"2b554cc7-1556-47ef-8167-8661aa141e10\") "
Mar 12 21:26:04.214626 master-0 kubenswrapper[31456]: I0312 21:26:04.214592 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b554cc7-1556-47ef-8167-8661aa141e10-config\") pod \"2b554cc7-1556-47ef-8167-8661aa141e10\" (UID: \"2b554cc7-1556-47ef-8167-8661aa141e10\") "
Mar 12 21:26:04.214666 master-0 kubenswrapper[31456]: I0312 21:26:04.214638 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b554cc7-1556-47ef-8167-8661aa141e10-dns-swift-storage-0\") pod \"2b554cc7-1556-47ef-8167-8661aa141e10\" (UID: \"2b554cc7-1556-47ef-8167-8661aa141e10\") "
Mar 12 21:26:04.214700 master-0 kubenswrapper[31456]: I0312 21:26:04.214679 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b554cc7-1556-47ef-8167-8661aa141e10-dns-svc\") pod \"2b554cc7-1556-47ef-8167-8661aa141e10\" (UID: \"2b554cc7-1556-47ef-8167-8661aa141e10\") "
Mar 12 21:26:04.247087 master-0 kubenswrapper[31456]: I0312 21:26:04.247004 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b554cc7-1556-47ef-8167-8661aa141e10-kube-api-access-sjvfm" (OuterVolumeSpecName: "kube-api-access-sjvfm") pod "2b554cc7-1556-47ef-8167-8661aa141e10" (UID: "2b554cc7-1556-47ef-8167-8661aa141e10"). InnerVolumeSpecName "kube-api-access-sjvfm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 21:26:04.281990 master-0 kubenswrapper[31456]: I0312 21:26:04.281931 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b554cc7-1556-47ef-8167-8661aa141e10-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2b554cc7-1556-47ef-8167-8661aa141e10" (UID: "2b554cc7-1556-47ef-8167-8661aa141e10"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:26:04.299029 master-0 kubenswrapper[31456]: I0312 21:26:04.298283 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b554cc7-1556-47ef-8167-8661aa141e10-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2b554cc7-1556-47ef-8167-8661aa141e10" (UID: "2b554cc7-1556-47ef-8167-8661aa141e10"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:26:04.330051 master-0 kubenswrapper[31456]: I0312 21:26:04.329588 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b554cc7-1556-47ef-8167-8661aa141e10-config" (OuterVolumeSpecName: "config") pod "2b554cc7-1556-47ef-8167-8661aa141e10" (UID: "2b554cc7-1556-47ef-8167-8661aa141e10"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:26:04.331838 master-0 kubenswrapper[31456]: I0312 21:26:04.330488 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b554cc7-1556-47ef-8167-8661aa141e10-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2b554cc7-1556-47ef-8167-8661aa141e10" (UID: "2b554cc7-1556-47ef-8167-8661aa141e10"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:26:04.341215 master-0 kubenswrapper[31456]: I0312 21:26:04.340945 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50627859-96f2-4a4c-9676-a086234b408c-scripts\") pod \"50627859-96f2-4a4c-9676-a086234b408c\" (UID: \"50627859-96f2-4a4c-9676-a086234b408c\") "
Mar 12 21:26:04.341215 master-0 kubenswrapper[31456]: I0312 21:26:04.341009 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50627859-96f2-4a4c-9676-a086234b408c-combined-ca-bundle\") pod \"50627859-96f2-4a4c-9676-a086234b408c\" (UID: \"50627859-96f2-4a4c-9676-a086234b408c\") "
Mar 12 21:26:04.341215 master-0 kubenswrapper[31456]: I0312 21:26:04.341042 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf6md\" (UniqueName: \"kubernetes.io/projected/50627859-96f2-4a4c-9676-a086234b408c-kube-api-access-nf6md\") pod \"50627859-96f2-4a4c-9676-a086234b408c\" (UID: \"50627859-96f2-4a4c-9676-a086234b408c\") "
Mar 12 21:26:04.341215 master-0 kubenswrapper[31456]: I0312 21:26:04.341080 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/50627859-96f2-4a4c-9676-a086234b408c-httpd-run\") pod \"50627859-96f2-4a4c-9676-a086234b408c\" (UID: \"50627859-96f2-4a4c-9676-a086234b408c\") "
Mar 12 21:26:04.341428 master-0 kubenswrapper[31456]: I0312 21:26:04.341391 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50627859-96f2-4a4c-9676-a086234b408c-logs\") pod \"50627859-96f2-4a4c-9676-a086234b408c\" (UID: \"50627859-96f2-4a4c-9676-a086234b408c\") "
Mar 12 21:26:04.341561 master-0 kubenswrapper[31456]: I0312 21:26:04.341530 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4b842188-a8b2-4def-ad0e-7cbb4053b9e9\") pod \"50627859-96f2-4a4c-9676-a086234b408c\" (UID: \"50627859-96f2-4a4c-9676-a086234b408c\") "
Mar 12 21:26:04.341721 master-0 kubenswrapper[31456]: I0312 21:26:04.341694 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50627859-96f2-4a4c-9676-a086234b408c-config-data\") pod \"50627859-96f2-4a4c-9676-a086234b408c\" (UID: \"50627859-96f2-4a4c-9676-a086234b408c\") "
Mar 12 21:26:04.342393 master-0 kubenswrapper[31456]: I0312 21:26:04.342312 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50627859-96f2-4a4c-9676-a086234b408c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "50627859-96f2-4a4c-9676-a086234b408c" (UID: "50627859-96f2-4a4c-9676-a086234b408c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 21:26:04.344138 master-0 kubenswrapper[31456]: I0312 21:26:04.343771 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50627859-96f2-4a4c-9676-a086234b408c-logs" (OuterVolumeSpecName: "logs") pod "50627859-96f2-4a4c-9676-a086234b408c" (UID: "50627859-96f2-4a4c-9676-a086234b408c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 21:26:04.344450 master-0 kubenswrapper[31456]: I0312 21:26:04.344423 31456 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b554cc7-1556-47ef-8167-8661aa141e10-dns-svc\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:04.344450 master-0 kubenswrapper[31456]: I0312 21:26:04.344447 31456 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/50627859-96f2-4a4c-9676-a086234b408c-httpd-run\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:04.344528 master-0 kubenswrapper[31456]: I0312 21:26:04.344463 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjvfm\" (UniqueName: \"kubernetes.io/projected/2b554cc7-1556-47ef-8167-8661aa141e10-kube-api-access-sjvfm\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:04.344528 master-0 kubenswrapper[31456]: I0312 21:26:04.344478 31456 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b554cc7-1556-47ef-8167-8661aa141e10-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:04.344528 master-0 kubenswrapper[31456]: I0312 21:26:04.344487 31456 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50627859-96f2-4a4c-9676-a086234b408c-logs\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:04.344528 master-0 kubenswrapper[31456]: I0312 21:26:04.344498 31456 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b554cc7-1556-47ef-8167-8661aa141e10-config\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:04.344528 master-0 kubenswrapper[31456]: I0312 21:26:04.344509 31456 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b554cc7-1556-47ef-8167-8661aa141e10-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:04.347334 master-0 kubenswrapper[31456]: I0312 21:26:04.347288 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50627859-96f2-4a4c-9676-a086234b408c-scripts" (OuterVolumeSpecName: "scripts") pod "50627859-96f2-4a4c-9676-a086234b408c" (UID: "50627859-96f2-4a4c-9676-a086234b408c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:26:04.347457 master-0 kubenswrapper[31456]: I0312 21:26:04.347417 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b554cc7-1556-47ef-8167-8661aa141e10-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2b554cc7-1556-47ef-8167-8661aa141e10" (UID: "2b554cc7-1556-47ef-8167-8661aa141e10"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:26:04.351173 master-0 kubenswrapper[31456]: I0312 21:26:04.348068 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50627859-96f2-4a4c-9676-a086234b408c-kube-api-access-nf6md" (OuterVolumeSpecName: "kube-api-access-nf6md") pod "50627859-96f2-4a4c-9676-a086234b408c" (UID: "50627859-96f2-4a4c-9676-a086234b408c"). InnerVolumeSpecName "kube-api-access-nf6md". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 21:26:04.364087 master-0 kubenswrapper[31456]: I0312 21:26:04.363998 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-30e4b-default-external-api-0" event={"ID":"b79438da-5595-4782-bbcb-e442d32bc206","Type":"ContainerStarted","Data":"6ebeb542f8627df31578c732bf383249799a5a24bd4d081d8d4dcdd11ed4ba6b"}
Mar 12 21:26:04.364937 master-0 kubenswrapper[31456]: I0312 21:26:04.364278 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-30e4b-default-external-api-0" podUID="b79438da-5595-4782-bbcb-e442d32bc206" containerName="glance-log" containerID="cri-o://1e72ea6cffe3cc757626386c1a5fe551ccae84da6ed166aeed3ce42895f92523" gracePeriod=30
Mar 12 21:26:04.364937 master-0 kubenswrapper[31456]: I0312 21:26:04.364550 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-30e4b-default-external-api-0" podUID="b79438da-5595-4782-bbcb-e442d32bc206" containerName="glance-httpd" containerID="cri-o://6ebeb542f8627df31578c732bf383249799a5a24bd4d081d8d4dcdd11ed4ba6b" gracePeriod=30
Mar 12 21:26:04.370080 master-0 kubenswrapper[31456]: I0312 21:26:04.368269 31456 generic.go:334] "Generic (PLEG): container finished" podID="2b554cc7-1556-47ef-8167-8661aa141e10" containerID="0020efdbee720d4f1b99b182496c37daf188c0f949c7cc537f557803d3b3a7e4" exitCode=0
Mar 12 21:26:04.370080 master-0 kubenswrapper[31456]: I0312 21:26:04.368392 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5484f4d7-grz9n" event={"ID":"2b554cc7-1556-47ef-8167-8661aa141e10","Type":"ContainerDied","Data":"0020efdbee720d4f1b99b182496c37daf188c0f949c7cc537f557803d3b3a7e4"}
Mar 12 21:26:04.370080 master-0 kubenswrapper[31456]: I0312 21:26:04.368433 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5484f4d7-grz9n" event={"ID":"2b554cc7-1556-47ef-8167-8661aa141e10","Type":"ContainerDied","Data":"07b7342fa7c0b23946d5d10a249e15bef4729f4d26d8c4cf6aa02c99dd1515ab"}
Mar 12 21:26:04.370080 master-0 kubenswrapper[31456]: I0312 21:26:04.368458 31456 scope.go:117] "RemoveContainer" containerID="0020efdbee720d4f1b99b182496c37daf188c0f949c7cc537f557803d3b3a7e4"
Mar 12 21:26:04.370080 master-0 kubenswrapper[31456]: I0312 21:26:04.368754 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d5484f4d7-grz9n"
Mar 12 21:26:04.379345 master-0 kubenswrapper[31456]: I0312 21:26:04.379234 31456 generic.go:334] "Generic (PLEG): container finished" podID="50627859-96f2-4a4c-9676-a086234b408c" containerID="772c1c628b36f1d2dd066e89d41ba48a9f33494f7ec976d6d90a28804d43bce2" exitCode=0
Mar 12 21:26:04.379345 master-0 kubenswrapper[31456]: I0312 21:26:04.379277 31456 generic.go:334] "Generic (PLEG): container finished" podID="50627859-96f2-4a4c-9676-a086234b408c" containerID="be7e4ad00212c0ceea60ed00397554964a365c9ca0c1770d07c0207a85444a89" exitCode=143
Mar 12 21:26:04.379345 master-0 kubenswrapper[31456]: I0312 21:26:04.379301 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-30e4b-default-internal-api-0" event={"ID":"50627859-96f2-4a4c-9676-a086234b408c","Type":"ContainerDied","Data":"772c1c628b36f1d2dd066e89d41ba48a9f33494f7ec976d6d90a28804d43bce2"}
Mar 12 21:26:04.379345 master-0 kubenswrapper[31456]: I0312 21:26:04.379335 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-30e4b-default-internal-api-0" event={"ID":"50627859-96f2-4a4c-9676-a086234b408c","Type":"ContainerDied","Data":"be7e4ad00212c0ceea60ed00397554964a365c9ca0c1770d07c0207a85444a89"}
Mar 12 21:26:04.379345 master-0 kubenswrapper[31456]: I0312 21:26:04.379347 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-30e4b-default-internal-api-0"
event={"ID":"50627859-96f2-4a4c-9676-a086234b408c","Type":"ContainerDied","Data":"5bb12fca68de7859566d6e5179f43211d2394dc88f1b585b5af67a27269d7920"} Mar 12 21:26:04.379590 master-0 kubenswrapper[31456]: I0312 21:26:04.379433 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:26:04.393229 master-0 kubenswrapper[31456]: I0312 21:26:04.390432 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50627859-96f2-4a4c-9676-a086234b408c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50627859-96f2-4a4c-9676-a086234b408c" (UID: "50627859-96f2-4a4c-9676-a086234b408c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:26:04.402563 master-0 kubenswrapper[31456]: I0312 21:26:04.400886 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^4b842188-a8b2-4def-ad0e-7cbb4053b9e9" (OuterVolumeSpecName: "glance") pod "50627859-96f2-4a4c-9676-a086234b408c" (UID: "50627859-96f2-4a4c-9676-a086234b408c"). InnerVolumeSpecName "pvc-46263d7f-fe44-41c1-8ff4-0f4db04f556e". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 21:26:04.453931 master-0 kubenswrapper[31456]: I0312 21:26:04.449038 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50627859-96f2-4a4c-9676-a086234b408c-config-data" (OuterVolumeSpecName: "config-data") pod "50627859-96f2-4a4c-9676-a086234b408c" (UID: "50627859-96f2-4a4c-9676-a086234b408c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:26:04.453931 master-0 kubenswrapper[31456]: I0312 21:26:04.450669 31456 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b554cc7-1556-47ef-8167-8661aa141e10-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:04.453931 master-0 kubenswrapper[31456]: I0312 21:26:04.450724 31456 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-46263d7f-fe44-41c1-8ff4-0f4db04f556e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4b842188-a8b2-4def-ad0e-7cbb4053b9e9\") on node \"master-0\" " Mar 12 21:26:04.453931 master-0 kubenswrapper[31456]: I0312 21:26:04.450740 31456 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50627859-96f2-4a4c-9676-a086234b408c-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:04.453931 master-0 kubenswrapper[31456]: I0312 21:26:04.450749 31456 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50627859-96f2-4a4c-9676-a086234b408c-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:04.453931 master-0 kubenswrapper[31456]: I0312 21:26:04.450759 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50627859-96f2-4a4c-9676-a086234b408c-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:04.453931 master-0 kubenswrapper[31456]: I0312 21:26:04.450777 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf6md\" (UniqueName: \"kubernetes.io/projected/50627859-96f2-4a4c-9676-a086234b408c-kube-api-access-nf6md\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:04.456627 master-0 kubenswrapper[31456]: I0312 21:26:04.455505 31456 scope.go:117] "RemoveContainer" containerID="9cf72e7f4313cd47b75fdd5b942312a9f39047f8fa7813585b8f6d6b616e2598" Mar 12 
21:26:04.460514 master-0 kubenswrapper[31456]: I0312 21:26:04.460434 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-30e4b-default-external-api-0" podStartSLOduration=6.460404292 podStartE2EDuration="6.460404292s" podCreationTimestamp="2026-03-12 21:25:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:26:04.38962956 +0000 UTC m=+1025.464234888" watchObservedRunningTime="2026-03-12 21:26:04.460404292 +0000 UTC m=+1025.535009620" Mar 12 21:26:04.477553 master-0 kubenswrapper[31456]: I0312 21:26:04.477490 31456 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 12 21:26:04.477980 master-0 kubenswrapper[31456]: I0312 21:26:04.477954 31456 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-46263d7f-fe44-41c1-8ff4-0f4db04f556e" (UniqueName: "kubernetes.io/csi/topolvm.io^4b842188-a8b2-4def-ad0e-7cbb4053b9e9") on node "master-0" Mar 12 21:26:04.505473 master-0 kubenswrapper[31456]: I0312 21:26:04.505427 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d5484f4d7-grz9n"] Mar 12 21:26:04.515707 master-0 kubenswrapper[31456]: I0312 21:26:04.515651 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d5484f4d7-grz9n"] Mar 12 21:26:04.525179 master-0 kubenswrapper[31456]: I0312 21:26:04.525133 31456 scope.go:117] "RemoveContainer" containerID="0020efdbee720d4f1b99b182496c37daf188c0f949c7cc537f557803d3b3a7e4" Mar 12 21:26:04.525925 master-0 kubenswrapper[31456]: E0312 21:26:04.525897 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0020efdbee720d4f1b99b182496c37daf188c0f949c7cc537f557803d3b3a7e4\": container with ID starting with 
0020efdbee720d4f1b99b182496c37daf188c0f949c7cc537f557803d3b3a7e4 not found: ID does not exist" containerID="0020efdbee720d4f1b99b182496c37daf188c0f949c7cc537f557803d3b3a7e4" Mar 12 21:26:04.526189 master-0 kubenswrapper[31456]: I0312 21:26:04.526161 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0020efdbee720d4f1b99b182496c37daf188c0f949c7cc537f557803d3b3a7e4"} err="failed to get container status \"0020efdbee720d4f1b99b182496c37daf188c0f949c7cc537f557803d3b3a7e4\": rpc error: code = NotFound desc = could not find container \"0020efdbee720d4f1b99b182496c37daf188c0f949c7cc537f557803d3b3a7e4\": container with ID starting with 0020efdbee720d4f1b99b182496c37daf188c0f949c7cc537f557803d3b3a7e4 not found: ID does not exist" Mar 12 21:26:04.526269 master-0 kubenswrapper[31456]: I0312 21:26:04.526258 31456 scope.go:117] "RemoveContainer" containerID="9cf72e7f4313cd47b75fdd5b942312a9f39047f8fa7813585b8f6d6b616e2598" Mar 12 21:26:04.527651 master-0 kubenswrapper[31456]: E0312 21:26:04.527627 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cf72e7f4313cd47b75fdd5b942312a9f39047f8fa7813585b8f6d6b616e2598\": container with ID starting with 9cf72e7f4313cd47b75fdd5b942312a9f39047f8fa7813585b8f6d6b616e2598 not found: ID does not exist" containerID="9cf72e7f4313cd47b75fdd5b942312a9f39047f8fa7813585b8f6d6b616e2598" Mar 12 21:26:04.527718 master-0 kubenswrapper[31456]: I0312 21:26:04.527659 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cf72e7f4313cd47b75fdd5b942312a9f39047f8fa7813585b8f6d6b616e2598"} err="failed to get container status \"9cf72e7f4313cd47b75fdd5b942312a9f39047f8fa7813585b8f6d6b616e2598\": rpc error: code = NotFound desc = could not find container \"9cf72e7f4313cd47b75fdd5b942312a9f39047f8fa7813585b8f6d6b616e2598\": container with ID starting with 
9cf72e7f4313cd47b75fdd5b942312a9f39047f8fa7813585b8f6d6b616e2598 not found: ID does not exist" Mar 12 21:26:04.527718 master-0 kubenswrapper[31456]: I0312 21:26:04.527680 31456 scope.go:117] "RemoveContainer" containerID="772c1c628b36f1d2dd066e89d41ba48a9f33494f7ec976d6d90a28804d43bce2" Mar 12 21:26:04.553264 master-0 kubenswrapper[31456]: I0312 21:26:04.553208 31456 reconciler_common.go:293] "Volume detached for volume \"pvc-46263d7f-fe44-41c1-8ff4-0f4db04f556e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4b842188-a8b2-4def-ad0e-7cbb4053b9e9\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:04.592896 master-0 kubenswrapper[31456]: I0312 21:26:04.592865 31456 scope.go:117] "RemoveContainer" containerID="be7e4ad00212c0ceea60ed00397554964a365c9ca0c1770d07c0207a85444a89" Mar 12 21:26:04.608040 master-0 kubenswrapper[31456]: I0312 21:26:04.599901 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-sdtkg"] Mar 12 21:26:04.630513 master-0 kubenswrapper[31456]: I0312 21:26:04.630466 31456 scope.go:117] "RemoveContainer" containerID="772c1c628b36f1d2dd066e89d41ba48a9f33494f7ec976d6d90a28804d43bce2" Mar 12 21:26:04.631185 master-0 kubenswrapper[31456]: E0312 21:26:04.631128 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"772c1c628b36f1d2dd066e89d41ba48a9f33494f7ec976d6d90a28804d43bce2\": container with ID starting with 772c1c628b36f1d2dd066e89d41ba48a9f33494f7ec976d6d90a28804d43bce2 not found: ID does not exist" containerID="772c1c628b36f1d2dd066e89d41ba48a9f33494f7ec976d6d90a28804d43bce2" Mar 12 21:26:04.631250 master-0 kubenswrapper[31456]: I0312 21:26:04.631196 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"772c1c628b36f1d2dd066e89d41ba48a9f33494f7ec976d6d90a28804d43bce2"} err="failed to get container status \"772c1c628b36f1d2dd066e89d41ba48a9f33494f7ec976d6d90a28804d43bce2\": rpc error: code = 
NotFound desc = could not find container \"772c1c628b36f1d2dd066e89d41ba48a9f33494f7ec976d6d90a28804d43bce2\": container with ID starting with 772c1c628b36f1d2dd066e89d41ba48a9f33494f7ec976d6d90a28804d43bce2 not found: ID does not exist" Mar 12 21:26:04.631250 master-0 kubenswrapper[31456]: I0312 21:26:04.631228 31456 scope.go:117] "RemoveContainer" containerID="be7e4ad00212c0ceea60ed00397554964a365c9ca0c1770d07c0207a85444a89" Mar 12 21:26:04.631951 master-0 kubenswrapper[31456]: E0312 21:26:04.631875 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be7e4ad00212c0ceea60ed00397554964a365c9ca0c1770d07c0207a85444a89\": container with ID starting with be7e4ad00212c0ceea60ed00397554964a365c9ca0c1770d07c0207a85444a89 not found: ID does not exist" containerID="be7e4ad00212c0ceea60ed00397554964a365c9ca0c1770d07c0207a85444a89" Mar 12 21:26:04.632031 master-0 kubenswrapper[31456]: I0312 21:26:04.632004 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be7e4ad00212c0ceea60ed00397554964a365c9ca0c1770d07c0207a85444a89"} err="failed to get container status \"be7e4ad00212c0ceea60ed00397554964a365c9ca0c1770d07c0207a85444a89\": rpc error: code = NotFound desc = could not find container \"be7e4ad00212c0ceea60ed00397554964a365c9ca0c1770d07c0207a85444a89\": container with ID starting with be7e4ad00212c0ceea60ed00397554964a365c9ca0c1770d07c0207a85444a89 not found: ID does not exist" Mar 12 21:26:04.632075 master-0 kubenswrapper[31456]: I0312 21:26:04.632037 31456 scope.go:117] "RemoveContainer" containerID="772c1c628b36f1d2dd066e89d41ba48a9f33494f7ec976d6d90a28804d43bce2" Mar 12 21:26:04.632409 master-0 kubenswrapper[31456]: I0312 21:26:04.632366 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"772c1c628b36f1d2dd066e89d41ba48a9f33494f7ec976d6d90a28804d43bce2"} err="failed to get container status 
\"772c1c628b36f1d2dd066e89d41ba48a9f33494f7ec976d6d90a28804d43bce2\": rpc error: code = NotFound desc = could not find container \"772c1c628b36f1d2dd066e89d41ba48a9f33494f7ec976d6d90a28804d43bce2\": container with ID starting with 772c1c628b36f1d2dd066e89d41ba48a9f33494f7ec976d6d90a28804d43bce2 not found: ID does not exist" Mar 12 21:26:04.632409 master-0 kubenswrapper[31456]: I0312 21:26:04.632401 31456 scope.go:117] "RemoveContainer" containerID="be7e4ad00212c0ceea60ed00397554964a365c9ca0c1770d07c0207a85444a89" Mar 12 21:26:04.633035 master-0 kubenswrapper[31456]: I0312 21:26:04.632986 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be7e4ad00212c0ceea60ed00397554964a365c9ca0c1770d07c0207a85444a89"} err="failed to get container status \"be7e4ad00212c0ceea60ed00397554964a365c9ca0c1770d07c0207a85444a89\": rpc error: code = NotFound desc = could not find container \"be7e4ad00212c0ceea60ed00397554964a365c9ca0c1770d07c0207a85444a89\": container with ID starting with be7e4ad00212c0ceea60ed00397554964a365c9ca0c1770d07c0207a85444a89 not found: ID does not exist" Mar 12 21:26:04.634909 master-0 kubenswrapper[31456]: W0312 21:26:04.634798 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3b65a8f_9787_4ff9_91bc_a35ef39781ce.slice/crio-93a0079b6445f2910c04e9157c1678067360d48eb54952b98096e6b51263d380 WatchSource:0}: Error finding container 93a0079b6445f2910c04e9157c1678067360d48eb54952b98096e6b51263d380: Status 404 returned error can't find the container with id 93a0079b6445f2910c04e9157c1678067360d48eb54952b98096e6b51263d380 Mar 12 21:26:04.776542 master-0 kubenswrapper[31456]: I0312 21:26:04.776169 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-30e4b-default-internal-api-0"] Mar 12 21:26:04.795958 master-0 kubenswrapper[31456]: I0312 21:26:04.784450 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/glance-30e4b-default-internal-api-0"] Mar 12 21:26:04.795958 master-0 kubenswrapper[31456]: I0312 21:26:04.794057 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-30e4b-default-internal-api-0"] Mar 12 21:26:04.795958 master-0 kubenswrapper[31456]: E0312 21:26:04.794620 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b554cc7-1556-47ef-8167-8661aa141e10" containerName="dnsmasq-dns" Mar 12 21:26:04.795958 master-0 kubenswrapper[31456]: I0312 21:26:04.794635 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b554cc7-1556-47ef-8167-8661aa141e10" containerName="dnsmasq-dns" Mar 12 21:26:04.795958 master-0 kubenswrapper[31456]: E0312 21:26:04.794657 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50627859-96f2-4a4c-9676-a086234b408c" containerName="glance-httpd" Mar 12 21:26:04.795958 master-0 kubenswrapper[31456]: I0312 21:26:04.794663 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="50627859-96f2-4a4c-9676-a086234b408c" containerName="glance-httpd" Mar 12 21:26:04.795958 master-0 kubenswrapper[31456]: E0312 21:26:04.794688 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50627859-96f2-4a4c-9676-a086234b408c" containerName="glance-log" Mar 12 21:26:04.795958 master-0 kubenswrapper[31456]: I0312 21:26:04.794694 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="50627859-96f2-4a4c-9676-a086234b408c" containerName="glance-log" Mar 12 21:26:04.795958 master-0 kubenswrapper[31456]: E0312 21:26:04.794714 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b554cc7-1556-47ef-8167-8661aa141e10" containerName="init" Mar 12 21:26:04.795958 master-0 kubenswrapper[31456]: I0312 21:26:04.794720 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b554cc7-1556-47ef-8167-8661aa141e10" containerName="init" Mar 12 21:26:04.795958 master-0 kubenswrapper[31456]: I0312 21:26:04.794985 31456 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="2b554cc7-1556-47ef-8167-8661aa141e10" containerName="dnsmasq-dns" Mar 12 21:26:04.795958 master-0 kubenswrapper[31456]: I0312 21:26:04.795026 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="50627859-96f2-4a4c-9676-a086234b408c" containerName="glance-log" Mar 12 21:26:04.795958 master-0 kubenswrapper[31456]: I0312 21:26:04.795037 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="50627859-96f2-4a4c-9676-a086234b408c" containerName="glance-httpd" Mar 12 21:26:04.796513 master-0 kubenswrapper[31456]: I0312 21:26:04.796293 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:26:04.804367 master-0 kubenswrapper[31456]: I0312 21:26:04.798621 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-30e4b-default-internal-config-data" Mar 12 21:26:04.804367 master-0 kubenswrapper[31456]: I0312 21:26:04.800398 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 12 21:26:04.816849 master-0 kubenswrapper[31456]: I0312 21:26:04.815265 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-30e4b-default-internal-api-0"] Mar 12 21:26:04.966717 master-0 kubenswrapper[31456]: I0312 21:26:04.966667 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7a5e241-7146-489b-b32b-01218601b895-logs\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"a7a5e241-7146-489b-b32b-01218601b895\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:26:04.966872 master-0 kubenswrapper[31456]: I0312 21:26:04.966736 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-46263d7f-fe44-41c1-8ff4-0f4db04f556e\" (UniqueName: 
\"kubernetes.io/csi/topolvm.io^4b842188-a8b2-4def-ad0e-7cbb4053b9e9\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"a7a5e241-7146-489b-b32b-01218601b895\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:26:04.966872 master-0 kubenswrapper[31456]: I0312 21:26:04.966779 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7a5e241-7146-489b-b32b-01218601b895-config-data\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"a7a5e241-7146-489b-b32b-01218601b895\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:26:04.966872 master-0 kubenswrapper[31456]: I0312 21:26:04.966816 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwj5m\" (UniqueName: \"kubernetes.io/projected/a7a5e241-7146-489b-b32b-01218601b895-kube-api-access-fwj5m\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"a7a5e241-7146-489b-b32b-01218601b895\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:26:04.966872 master-0 kubenswrapper[31456]: I0312 21:26:04.966871 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7a5e241-7146-489b-b32b-01218601b895-httpd-run\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"a7a5e241-7146-489b-b32b-01218601b895\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:26:04.967027 master-0 kubenswrapper[31456]: I0312 21:26:04.966944 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7a5e241-7146-489b-b32b-01218601b895-scripts\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"a7a5e241-7146-489b-b32b-01218601b895\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:26:04.967027 master-0 kubenswrapper[31456]: I0312 
21:26:04.966975 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7a5e241-7146-489b-b32b-01218601b895-combined-ca-bundle\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"a7a5e241-7146-489b-b32b-01218601b895\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:26:04.967096 master-0 kubenswrapper[31456]: I0312 21:26:04.967028 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7a5e241-7146-489b-b32b-01218601b895-internal-tls-certs\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"a7a5e241-7146-489b-b32b-01218601b895\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:26:05.071857 master-0 kubenswrapper[31456]: I0312 21:26:05.068717 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7a5e241-7146-489b-b32b-01218601b895-combined-ca-bundle\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"a7a5e241-7146-489b-b32b-01218601b895\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:26:05.071857 master-0 kubenswrapper[31456]: I0312 21:26:05.068867 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7a5e241-7146-489b-b32b-01218601b895-internal-tls-certs\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"a7a5e241-7146-489b-b32b-01218601b895\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:26:05.071857 master-0 kubenswrapper[31456]: I0312 21:26:05.068968 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7a5e241-7146-489b-b32b-01218601b895-logs\") pod \"glance-30e4b-default-internal-api-0\" (UID: 
\"a7a5e241-7146-489b-b32b-01218601b895\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:26:05.071857 master-0 kubenswrapper[31456]: I0312 21:26:05.069011 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-46263d7f-fe44-41c1-8ff4-0f4db04f556e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4b842188-a8b2-4def-ad0e-7cbb4053b9e9\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"a7a5e241-7146-489b-b32b-01218601b895\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:26:05.071857 master-0 kubenswrapper[31456]: I0312 21:26:05.069056 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7a5e241-7146-489b-b32b-01218601b895-config-data\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"a7a5e241-7146-489b-b32b-01218601b895\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:26:05.071857 master-0 kubenswrapper[31456]: I0312 21:26:05.069096 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwj5m\" (UniqueName: \"kubernetes.io/projected/a7a5e241-7146-489b-b32b-01218601b895-kube-api-access-fwj5m\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"a7a5e241-7146-489b-b32b-01218601b895\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:26:05.071857 master-0 kubenswrapper[31456]: I0312 21:26:05.069207 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7a5e241-7146-489b-b32b-01218601b895-httpd-run\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"a7a5e241-7146-489b-b32b-01218601b895\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:26:05.071857 master-0 kubenswrapper[31456]: I0312 21:26:05.069360 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a7a5e241-7146-489b-b32b-01218601b895-scripts\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"a7a5e241-7146-489b-b32b-01218601b895\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:26:05.081833 master-0 kubenswrapper[31456]: I0312 21:26:05.080829 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7a5e241-7146-489b-b32b-01218601b895-internal-tls-certs\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"a7a5e241-7146-489b-b32b-01218601b895\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:26:05.081833 master-0 kubenswrapper[31456]: I0312 21:26:05.081012 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7a5e241-7146-489b-b32b-01218601b895-httpd-run\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"a7a5e241-7146-489b-b32b-01218601b895\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:26:05.081833 master-0 kubenswrapper[31456]: I0312 21:26:05.081207 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7a5e241-7146-489b-b32b-01218601b895-logs\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"a7a5e241-7146-489b-b32b-01218601b895\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:26:05.103871 master-0 kubenswrapper[31456]: I0312 21:26:05.098927 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7a5e241-7146-489b-b32b-01218601b895-config-data\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"a7a5e241-7146-489b-b32b-01218601b895\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:26:05.103871 master-0 kubenswrapper[31456]: I0312 21:26:05.099671 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a7a5e241-7146-489b-b32b-01218601b895-scripts\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"a7a5e241-7146-489b-b32b-01218601b895\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:26:05.103871 master-0 kubenswrapper[31456]: I0312 21:26:05.101428 31456 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 21:26:05.103871 master-0 kubenswrapper[31456]: I0312 21:26:05.101473 31456 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-46263d7f-fe44-41c1-8ff4-0f4db04f556e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4b842188-a8b2-4def-ad0e-7cbb4053b9e9\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"a7a5e241-7146-489b-b32b-01218601b895\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/3b47ef71cabc18af87317356c30c781b24b16858528acb95d991bfdc6fcfef3f/globalmount\"" pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:26:05.113482 master-0 kubenswrapper[31456]: I0312 21:26:05.113441 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwj5m\" (UniqueName: \"kubernetes.io/projected/a7a5e241-7146-489b-b32b-01218601b895-kube-api-access-fwj5m\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"a7a5e241-7146-489b-b32b-01218601b895\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:26:05.114374 master-0 kubenswrapper[31456]: I0312 21:26:05.114346 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7a5e241-7146-489b-b32b-01218601b895-combined-ca-bundle\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"a7a5e241-7146-489b-b32b-01218601b895\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:26:05.187252 master-0 kubenswrapper[31456]: I0312 21:26:05.187198 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2b554cc7-1556-47ef-8167-8661aa141e10" path="/var/lib/kubelet/pods/2b554cc7-1556-47ef-8167-8661aa141e10/volumes" Mar 12 21:26:05.188305 master-0 kubenswrapper[31456]: I0312 21:26:05.188268 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50627859-96f2-4a4c-9676-a086234b408c" path="/var/lib/kubelet/pods/50627859-96f2-4a4c-9676-a086234b408c/volumes" Mar 12 21:26:05.189676 master-0 kubenswrapper[31456]: I0312 21:26:05.189657 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5" path="/var/lib/kubelet/pods/7a9673ce-d1f7-4a48-99a3-79e0bb71d2e5/volumes" Mar 12 21:26:05.278828 master-0 kubenswrapper[31456]: I0312 21:26:05.276907 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:26:05.378154 master-0 kubenswrapper[31456]: I0312 21:26:05.377112 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c4d34f1a-cd35-4884-8afc-4ae2cb2c6555\") pod \"b79438da-5595-4782-bbcb-e442d32bc206\" (UID: \"b79438da-5595-4782-bbcb-e442d32bc206\") " Mar 12 21:26:05.378154 master-0 kubenswrapper[31456]: I0312 21:26:05.377201 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b79438da-5595-4782-bbcb-e442d32bc206-logs\") pod \"b79438da-5595-4782-bbcb-e442d32bc206\" (UID: \"b79438da-5595-4782-bbcb-e442d32bc206\") " Mar 12 21:26:05.378154 master-0 kubenswrapper[31456]: I0312 21:26:05.377268 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b79438da-5595-4782-bbcb-e442d32bc206-httpd-run\") pod \"b79438da-5595-4782-bbcb-e442d32bc206\" (UID: \"b79438da-5595-4782-bbcb-e442d32bc206\") " Mar 12 21:26:05.378154 master-0 kubenswrapper[31456]: I0312 21:26:05.377362 31456 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tt445\" (UniqueName: \"kubernetes.io/projected/b79438da-5595-4782-bbcb-e442d32bc206-kube-api-access-tt445\") pod \"b79438da-5595-4782-bbcb-e442d32bc206\" (UID: \"b79438da-5595-4782-bbcb-e442d32bc206\") " Mar 12 21:26:05.378154 master-0 kubenswrapper[31456]: I0312 21:26:05.377527 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b79438da-5595-4782-bbcb-e442d32bc206-combined-ca-bundle\") pod \"b79438da-5595-4782-bbcb-e442d32bc206\" (UID: \"b79438da-5595-4782-bbcb-e442d32bc206\") " Mar 12 21:26:05.378154 master-0 kubenswrapper[31456]: I0312 21:26:05.377587 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b79438da-5595-4782-bbcb-e442d32bc206-config-data\") pod \"b79438da-5595-4782-bbcb-e442d32bc206\" (UID: \"b79438da-5595-4782-bbcb-e442d32bc206\") " Mar 12 21:26:05.378154 master-0 kubenswrapper[31456]: I0312 21:26:05.377652 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b79438da-5595-4782-bbcb-e442d32bc206-scripts\") pod \"b79438da-5595-4782-bbcb-e442d32bc206\" (UID: \"b79438da-5595-4782-bbcb-e442d32bc206\") " Mar 12 21:26:05.379021 master-0 kubenswrapper[31456]: I0312 21:26:05.378967 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b79438da-5595-4782-bbcb-e442d32bc206-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b79438da-5595-4782-bbcb-e442d32bc206" (UID: "b79438da-5595-4782-bbcb-e442d32bc206"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 21:26:05.379232 master-0 kubenswrapper[31456]: I0312 21:26:05.379181 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b79438da-5595-4782-bbcb-e442d32bc206-logs" (OuterVolumeSpecName: "logs") pod "b79438da-5595-4782-bbcb-e442d32bc206" (UID: "b79438da-5595-4782-bbcb-e442d32bc206"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 21:26:05.382874 master-0 kubenswrapper[31456]: I0312 21:26:05.382581 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b79438da-5595-4782-bbcb-e442d32bc206-kube-api-access-tt445" (OuterVolumeSpecName: "kube-api-access-tt445") pod "b79438da-5595-4782-bbcb-e442d32bc206" (UID: "b79438da-5595-4782-bbcb-e442d32bc206"). InnerVolumeSpecName "kube-api-access-tt445". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:26:05.383100 master-0 kubenswrapper[31456]: I0312 21:26:05.382902 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b79438da-5595-4782-bbcb-e442d32bc206-scripts" (OuterVolumeSpecName: "scripts") pod "b79438da-5595-4782-bbcb-e442d32bc206" (UID: "b79438da-5595-4782-bbcb-e442d32bc206"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:26:05.408360 master-0 kubenswrapper[31456]: I0312 21:26:05.407120 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-sdtkg" event={"ID":"d3b65a8f-9787-4ff9-91bc-a35ef39781ce","Type":"ContainerStarted","Data":"242fe52ac35236a90eedf0979b22b6148dd8cb3d2bc2da7d1e1ab1bcb1673c31"} Mar 12 21:26:05.408360 master-0 kubenswrapper[31456]: I0312 21:26:05.407187 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-sdtkg" event={"ID":"d3b65a8f-9787-4ff9-91bc-a35ef39781ce","Type":"ContainerStarted","Data":"93a0079b6445f2910c04e9157c1678067360d48eb54952b98096e6b51263d380"} Mar 12 21:26:05.408360 master-0 kubenswrapper[31456]: I0312 21:26:05.408300 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b79438da-5595-4782-bbcb-e442d32bc206-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b79438da-5595-4782-bbcb-e442d32bc206" (UID: "b79438da-5595-4782-bbcb-e442d32bc206"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:26:05.413535 master-0 kubenswrapper[31456]: I0312 21:26:05.412753 31456 generic.go:334] "Generic (PLEG): container finished" podID="b79438da-5595-4782-bbcb-e442d32bc206" containerID="6ebeb542f8627df31578c732bf383249799a5a24bd4d081d8d4dcdd11ed4ba6b" exitCode=0 Mar 12 21:26:05.413535 master-0 kubenswrapper[31456]: I0312 21:26:05.412791 31456 generic.go:334] "Generic (PLEG): container finished" podID="b79438da-5595-4782-bbcb-e442d32bc206" containerID="1e72ea6cffe3cc757626386c1a5fe551ccae84da6ed166aeed3ce42895f92523" exitCode=143 Mar 12 21:26:05.413535 master-0 kubenswrapper[31456]: I0312 21:26:05.412836 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-30e4b-default-external-api-0" event={"ID":"b79438da-5595-4782-bbcb-e442d32bc206","Type":"ContainerDied","Data":"6ebeb542f8627df31578c732bf383249799a5a24bd4d081d8d4dcdd11ed4ba6b"} Mar 12 21:26:05.413535 master-0 kubenswrapper[31456]: I0312 21:26:05.412868 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-30e4b-default-external-api-0" event={"ID":"b79438da-5595-4782-bbcb-e442d32bc206","Type":"ContainerDied","Data":"1e72ea6cffe3cc757626386c1a5fe551ccae84da6ed166aeed3ce42895f92523"} Mar 12 21:26:05.413535 master-0 kubenswrapper[31456]: I0312 21:26:05.412883 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-30e4b-default-external-api-0" event={"ID":"b79438da-5595-4782-bbcb-e442d32bc206","Type":"ContainerDied","Data":"93df963aa4af1c7be913acc37010c050bb4f6576f151f9cacbfc2b2438117225"} Mar 12 21:26:05.413535 master-0 kubenswrapper[31456]: I0312 21:26:05.412903 31456 scope.go:117] "RemoveContainer" containerID="6ebeb542f8627df31578c732bf383249799a5a24bd4d081d8d4dcdd11ed4ba6b" Mar 12 21:26:05.413535 master-0 kubenswrapper[31456]: I0312 21:26:05.413054 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:26:05.433057 master-0 kubenswrapper[31456]: I0312 21:26:05.432943 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-sdtkg" podStartSLOduration=2.432924021 podStartE2EDuration="2.432924021s" podCreationTimestamp="2026-03-12 21:26:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:26:05.424679181 +0000 UTC m=+1026.499284509" watchObservedRunningTime="2026-03-12 21:26:05.432924021 +0000 UTC m=+1026.507529349" Mar 12 21:26:05.451905 master-0 kubenswrapper[31456]: I0312 21:26:05.449011 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b79438da-5595-4782-bbcb-e442d32bc206-config-data" (OuterVolumeSpecName: "config-data") pod "b79438da-5595-4782-bbcb-e442d32bc206" (UID: "b79438da-5595-4782-bbcb-e442d32bc206"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:26:05.466029 master-0 kubenswrapper[31456]: I0312 21:26:05.465796 31456 scope.go:117] "RemoveContainer" containerID="1e72ea6cffe3cc757626386c1a5fe551ccae84da6ed166aeed3ce42895f92523" Mar 12 21:26:05.480269 master-0 kubenswrapper[31456]: I0312 21:26:05.480221 31456 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b79438da-5595-4782-bbcb-e442d32bc206-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:05.480269 master-0 kubenswrapper[31456]: I0312 21:26:05.480260 31456 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b79438da-5595-4782-bbcb-e442d32bc206-logs\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:05.480269 master-0 kubenswrapper[31456]: I0312 21:26:05.480269 31456 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b79438da-5595-4782-bbcb-e442d32bc206-httpd-run\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:05.480414 master-0 kubenswrapper[31456]: I0312 21:26:05.480278 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tt445\" (UniqueName: \"kubernetes.io/projected/b79438da-5595-4782-bbcb-e442d32bc206-kube-api-access-tt445\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:05.480414 master-0 kubenswrapper[31456]: I0312 21:26:05.480289 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b79438da-5595-4782-bbcb-e442d32bc206-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:05.480414 master-0 kubenswrapper[31456]: I0312 21:26:05.480298 31456 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b79438da-5595-4782-bbcb-e442d32bc206-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:05.490978 master-0 kubenswrapper[31456]: I0312 
21:26:05.490941 31456 scope.go:117] "RemoveContainer" containerID="6ebeb542f8627df31578c732bf383249799a5a24bd4d081d8d4dcdd11ed4ba6b" Mar 12 21:26:05.492479 master-0 kubenswrapper[31456]: E0312 21:26:05.492230 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ebeb542f8627df31578c732bf383249799a5a24bd4d081d8d4dcdd11ed4ba6b\": container with ID starting with 6ebeb542f8627df31578c732bf383249799a5a24bd4d081d8d4dcdd11ed4ba6b not found: ID does not exist" containerID="6ebeb542f8627df31578c732bf383249799a5a24bd4d081d8d4dcdd11ed4ba6b" Mar 12 21:26:05.492479 master-0 kubenswrapper[31456]: I0312 21:26:05.492298 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ebeb542f8627df31578c732bf383249799a5a24bd4d081d8d4dcdd11ed4ba6b"} err="failed to get container status \"6ebeb542f8627df31578c732bf383249799a5a24bd4d081d8d4dcdd11ed4ba6b\": rpc error: code = NotFound desc = could not find container \"6ebeb542f8627df31578c732bf383249799a5a24bd4d081d8d4dcdd11ed4ba6b\": container with ID starting with 6ebeb542f8627df31578c732bf383249799a5a24bd4d081d8d4dcdd11ed4ba6b not found: ID does not exist" Mar 12 21:26:05.492479 master-0 kubenswrapper[31456]: I0312 21:26:05.492337 31456 scope.go:117] "RemoveContainer" containerID="1e72ea6cffe3cc757626386c1a5fe551ccae84da6ed166aeed3ce42895f92523" Mar 12 21:26:05.492766 master-0 kubenswrapper[31456]: E0312 21:26:05.492726 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e72ea6cffe3cc757626386c1a5fe551ccae84da6ed166aeed3ce42895f92523\": container with ID starting with 1e72ea6cffe3cc757626386c1a5fe551ccae84da6ed166aeed3ce42895f92523 not found: ID does not exist" containerID="1e72ea6cffe3cc757626386c1a5fe551ccae84da6ed166aeed3ce42895f92523" Mar 12 21:26:05.492837 master-0 kubenswrapper[31456]: I0312 21:26:05.492762 31456 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e72ea6cffe3cc757626386c1a5fe551ccae84da6ed166aeed3ce42895f92523"} err="failed to get container status \"1e72ea6cffe3cc757626386c1a5fe551ccae84da6ed166aeed3ce42895f92523\": rpc error: code = NotFound desc = could not find container \"1e72ea6cffe3cc757626386c1a5fe551ccae84da6ed166aeed3ce42895f92523\": container with ID starting with 1e72ea6cffe3cc757626386c1a5fe551ccae84da6ed166aeed3ce42895f92523 not found: ID does not exist" Mar 12 21:26:05.492837 master-0 kubenswrapper[31456]: I0312 21:26:05.492784 31456 scope.go:117] "RemoveContainer" containerID="6ebeb542f8627df31578c732bf383249799a5a24bd4d081d8d4dcdd11ed4ba6b" Mar 12 21:26:05.493056 master-0 kubenswrapper[31456]: I0312 21:26:05.493028 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ebeb542f8627df31578c732bf383249799a5a24bd4d081d8d4dcdd11ed4ba6b"} err="failed to get container status \"6ebeb542f8627df31578c732bf383249799a5a24bd4d081d8d4dcdd11ed4ba6b\": rpc error: code = NotFound desc = could not find container \"6ebeb542f8627df31578c732bf383249799a5a24bd4d081d8d4dcdd11ed4ba6b\": container with ID starting with 6ebeb542f8627df31578c732bf383249799a5a24bd4d081d8d4dcdd11ed4ba6b not found: ID does not exist" Mar 12 21:26:05.493113 master-0 kubenswrapper[31456]: I0312 21:26:05.493057 31456 scope.go:117] "RemoveContainer" containerID="1e72ea6cffe3cc757626386c1a5fe551ccae84da6ed166aeed3ce42895f92523" Mar 12 21:26:05.493279 master-0 kubenswrapper[31456]: I0312 21:26:05.493252 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e72ea6cffe3cc757626386c1a5fe551ccae84da6ed166aeed3ce42895f92523"} err="failed to get container status \"1e72ea6cffe3cc757626386c1a5fe551ccae84da6ed166aeed3ce42895f92523\": rpc error: code = NotFound desc = could not find container \"1e72ea6cffe3cc757626386c1a5fe551ccae84da6ed166aeed3ce42895f92523\": 
container with ID starting with 1e72ea6cffe3cc757626386c1a5fe551ccae84da6ed166aeed3ce42895f92523 not found: ID does not exist" Mar 12 21:26:06.425784 master-0 kubenswrapper[31456]: I0312 21:26:06.425721 31456 generic.go:334] "Generic (PLEG): container finished" podID="b466beef-2d58-41e2-b8cf-8090ab10be4e" containerID="d3d4b81cfbe9c52aa0675e310f7bde029057b19974df5962fd9d18510851c37f" exitCode=0 Mar 12 21:26:06.426319 master-0 kubenswrapper[31456]: I0312 21:26:06.425868 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-stkxt" event={"ID":"b466beef-2d58-41e2-b8cf-8090ab10be4e","Type":"ContainerDied","Data":"d3d4b81cfbe9c52aa0675e310f7bde029057b19974df5962fd9d18510851c37f"} Mar 12 21:26:06.485712 master-0 kubenswrapper[31456]: I0312 21:26:06.485653 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-46263d7f-fe44-41c1-8ff4-0f4db04f556e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4b842188-a8b2-4def-ad0e-7cbb4053b9e9\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"a7a5e241-7146-489b-b32b-01218601b895\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:26:06.490747 master-0 kubenswrapper[31456]: I0312 21:26:06.490713 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^c4d34f1a-cd35-4884-8afc-4ae2cb2c6555" (OuterVolumeSpecName: "glance") pod "b79438da-5595-4782-bbcb-e442d32bc206" (UID: "b79438da-5595-4782-bbcb-e442d32bc206"). InnerVolumeSpecName "pvc-771d56ec-6f7c-4891-8052-556577fed26a". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 21:26:06.504351 master-0 kubenswrapper[31456]: I0312 21:26:06.504211 31456 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-771d56ec-6f7c-4891-8052-556577fed26a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c4d34f1a-cd35-4884-8afc-4ae2cb2c6555\") on node \"master-0\" " Mar 12 21:26:06.531574 master-0 kubenswrapper[31456]: I0312 21:26:06.531439 31456 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 12 21:26:06.531769 master-0 kubenswrapper[31456]: I0312 21:26:06.531598 31456 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-771d56ec-6f7c-4891-8052-556577fed26a" (UniqueName: "kubernetes.io/csi/topolvm.io^c4d34f1a-cd35-4884-8afc-4ae2cb2c6555") on node "master-0" Mar 12 21:26:06.606710 master-0 kubenswrapper[31456]: I0312 21:26:06.606641 31456 reconciler_common.go:293] "Volume detached for volume \"pvc-771d56ec-6f7c-4891-8052-556577fed26a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c4d34f1a-cd35-4884-8afc-4ae2cb2c6555\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:06.680361 master-0 kubenswrapper[31456]: I0312 21:26:06.679894 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-30e4b-default-external-api-0"] Mar 12 21:26:06.728030 master-0 kubenswrapper[31456]: I0312 21:26:06.721905 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-30e4b-default-external-api-0"] Mar 12 21:26:06.736964 master-0 kubenswrapper[31456]: I0312 21:26:06.736883 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-30e4b-default-external-api-0"] Mar 12 21:26:06.737625 master-0 kubenswrapper[31456]: E0312 21:26:06.737609 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b79438da-5595-4782-bbcb-e442d32bc206" containerName="glance-httpd" Mar 12 21:26:06.737693 master-0 kubenswrapper[31456]: I0312 
21:26:06.737683 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="b79438da-5595-4782-bbcb-e442d32bc206" containerName="glance-httpd" Mar 12 21:26:06.737864 master-0 kubenswrapper[31456]: E0312 21:26:06.737825 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b79438da-5595-4782-bbcb-e442d32bc206" containerName="glance-log" Mar 12 21:26:06.737955 master-0 kubenswrapper[31456]: I0312 21:26:06.737944 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="b79438da-5595-4782-bbcb-e442d32bc206" containerName="glance-log" Mar 12 21:26:06.738254 master-0 kubenswrapper[31456]: I0312 21:26:06.738239 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="b79438da-5595-4782-bbcb-e442d32bc206" containerName="glance-httpd" Mar 12 21:26:06.738373 master-0 kubenswrapper[31456]: I0312 21:26:06.738361 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="b79438da-5595-4782-bbcb-e442d32bc206" containerName="glance-log" Mar 12 21:26:06.742900 master-0 kubenswrapper[31456]: I0312 21:26:06.742243 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:26:06.747815 master-0 kubenswrapper[31456]: I0312 21:26:06.747753 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-30e4b-default-external-config-data" Mar 12 21:26:06.750160 master-0 kubenswrapper[31456]: I0312 21:26:06.750120 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-30e4b-default-external-api-0"] Mar 12 21:26:06.757100 master-0 kubenswrapper[31456]: I0312 21:26:06.757045 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:26:06.759584 master-0 kubenswrapper[31456]: I0312 21:26:06.747956 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 12 21:26:06.812549 master-0 kubenswrapper[31456]: I0312 21:26:06.812463 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35a5b367-8419-4864-9317-7b78c50cad2d-public-tls-certs\") pod \"glance-30e4b-default-external-api-0\" (UID: \"35a5b367-8419-4864-9317-7b78c50cad2d\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:26:06.812756 master-0 kubenswrapper[31456]: I0312 21:26:06.812552 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82rzv\" (UniqueName: \"kubernetes.io/projected/35a5b367-8419-4864-9317-7b78c50cad2d-kube-api-access-82rzv\") pod \"glance-30e4b-default-external-api-0\" (UID: \"35a5b367-8419-4864-9317-7b78c50cad2d\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:26:06.812756 master-0 kubenswrapper[31456]: I0312 21:26:06.812588 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35a5b367-8419-4864-9317-7b78c50cad2d-combined-ca-bundle\") pod \"glance-30e4b-default-external-api-0\" (UID: \"35a5b367-8419-4864-9317-7b78c50cad2d\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:26:06.812878 master-0 kubenswrapper[31456]: I0312 21:26:06.812777 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35a5b367-8419-4864-9317-7b78c50cad2d-logs\") pod \"glance-30e4b-default-external-api-0\" (UID: \"35a5b367-8419-4864-9317-7b78c50cad2d\") " 
pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:26:06.812949 master-0 kubenswrapper[31456]: I0312 21:26:06.812911 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35a5b367-8419-4864-9317-7b78c50cad2d-httpd-run\") pod \"glance-30e4b-default-external-api-0\" (UID: \"35a5b367-8419-4864-9317-7b78c50cad2d\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:26:06.823573 master-0 kubenswrapper[31456]: I0312 21:26:06.823501 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35a5b367-8419-4864-9317-7b78c50cad2d-scripts\") pod \"glance-30e4b-default-external-api-0\" (UID: \"35a5b367-8419-4864-9317-7b78c50cad2d\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:26:06.823740 master-0 kubenswrapper[31456]: I0312 21:26:06.823612 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35a5b367-8419-4864-9317-7b78c50cad2d-config-data\") pod \"glance-30e4b-default-external-api-0\" (UID: \"35a5b367-8419-4864-9317-7b78c50cad2d\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:26:06.823740 master-0 kubenswrapper[31456]: I0312 21:26:06.823662 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-771d56ec-6f7c-4891-8052-556577fed26a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c4d34f1a-cd35-4884-8afc-4ae2cb2c6555\") pod \"glance-30e4b-default-external-api-0\" (UID: \"35a5b367-8419-4864-9317-7b78c50cad2d\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:26:06.926695 master-0 kubenswrapper[31456]: I0312 21:26:06.926629 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/35a5b367-8419-4864-9317-7b78c50cad2d-public-tls-certs\") pod \"glance-30e4b-default-external-api-0\" (UID: \"35a5b367-8419-4864-9317-7b78c50cad2d\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:26:06.926695 master-0 kubenswrapper[31456]: I0312 21:26:06.926704 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82rzv\" (UniqueName: \"kubernetes.io/projected/35a5b367-8419-4864-9317-7b78c50cad2d-kube-api-access-82rzv\") pod \"glance-30e4b-default-external-api-0\" (UID: \"35a5b367-8419-4864-9317-7b78c50cad2d\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:26:06.926974 master-0 kubenswrapper[31456]: I0312 21:26:06.926726 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35a5b367-8419-4864-9317-7b78c50cad2d-combined-ca-bundle\") pod \"glance-30e4b-default-external-api-0\" (UID: \"35a5b367-8419-4864-9317-7b78c50cad2d\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:26:06.927962 master-0 kubenswrapper[31456]: I0312 21:26:06.927038 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35a5b367-8419-4864-9317-7b78c50cad2d-logs\") pod \"glance-30e4b-default-external-api-0\" (UID: \"35a5b367-8419-4864-9317-7b78c50cad2d\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:26:06.927962 master-0 kubenswrapper[31456]: I0312 21:26:06.927466 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35a5b367-8419-4864-9317-7b78c50cad2d-httpd-run\") pod \"glance-30e4b-default-external-api-0\" (UID: \"35a5b367-8419-4864-9317-7b78c50cad2d\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:26:06.927962 master-0 kubenswrapper[31456]: I0312 21:26:06.927774 31456 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35a5b367-8419-4864-9317-7b78c50cad2d-logs\") pod \"glance-30e4b-default-external-api-0\" (UID: \"35a5b367-8419-4864-9317-7b78c50cad2d\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:26:06.928297 master-0 kubenswrapper[31456]: I0312 21:26:06.928149 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35a5b367-8419-4864-9317-7b78c50cad2d-scripts\") pod \"glance-30e4b-default-external-api-0\" (UID: \"35a5b367-8419-4864-9317-7b78c50cad2d\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:26:06.928297 master-0 kubenswrapper[31456]: I0312 21:26:06.928208 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35a5b367-8419-4864-9317-7b78c50cad2d-config-data\") pod \"glance-30e4b-default-external-api-0\" (UID: \"35a5b367-8419-4864-9317-7b78c50cad2d\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:26:06.928297 master-0 kubenswrapper[31456]: I0312 21:26:06.928252 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-771d56ec-6f7c-4891-8052-556577fed26a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c4d34f1a-cd35-4884-8afc-4ae2cb2c6555\") pod \"glance-30e4b-default-external-api-0\" (UID: \"35a5b367-8419-4864-9317-7b78c50cad2d\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:26:06.928651 master-0 kubenswrapper[31456]: I0312 21:26:06.928519 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35a5b367-8419-4864-9317-7b78c50cad2d-httpd-run\") pod \"glance-30e4b-default-external-api-0\" (UID: \"35a5b367-8419-4864-9317-7b78c50cad2d\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:26:06.935790 master-0 kubenswrapper[31456]: I0312 21:26:06.935745 
31456 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 21:26:06.935790 master-0 kubenswrapper[31456]: I0312 21:26:06.935782 31456 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-771d56ec-6f7c-4891-8052-556577fed26a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c4d34f1a-cd35-4884-8afc-4ae2cb2c6555\") pod \"glance-30e4b-default-external-api-0\" (UID: \"35a5b367-8419-4864-9317-7b78c50cad2d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/43685901e29eb1cf6142e4c7db2bf2a74bc59e8789b390024af9a8010a27963c/globalmount\"" pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:26:06.942157 master-0 kubenswrapper[31456]: I0312 21:26:06.941765 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35a5b367-8419-4864-9317-7b78c50cad2d-scripts\") pod \"glance-30e4b-default-external-api-0\" (UID: \"35a5b367-8419-4864-9317-7b78c50cad2d\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:26:06.942301 master-0 kubenswrapper[31456]: I0312 21:26:06.942164 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35a5b367-8419-4864-9317-7b78c50cad2d-combined-ca-bundle\") pod \"glance-30e4b-default-external-api-0\" (UID: \"35a5b367-8419-4864-9317-7b78c50cad2d\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:26:06.942301 master-0 kubenswrapper[31456]: I0312 21:26:06.942164 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35a5b367-8419-4864-9317-7b78c50cad2d-config-data\") pod \"glance-30e4b-default-external-api-0\" (UID: \"35a5b367-8419-4864-9317-7b78c50cad2d\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:26:06.942780 master-0 kubenswrapper[31456]: I0312 21:26:06.942659 
31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35a5b367-8419-4864-9317-7b78c50cad2d-public-tls-certs\") pod \"glance-30e4b-default-external-api-0\" (UID: \"35a5b367-8419-4864-9317-7b78c50cad2d\") " pod="openstack/glance-30e4b-default-external-api-0"
Mar 12 21:26:06.953041 master-0 kubenswrapper[31456]: I0312 21:26:06.952981 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82rzv\" (UniqueName: \"kubernetes.io/projected/35a5b367-8419-4864-9317-7b78c50cad2d-kube-api-access-82rzv\") pod \"glance-30e4b-default-external-api-0\" (UID: \"35a5b367-8419-4864-9317-7b78c50cad2d\") " pod="openstack/glance-30e4b-default-external-api-0"
Mar 12 21:26:07.186041 master-0 kubenswrapper[31456]: I0312 21:26:07.185962 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b79438da-5595-4782-bbcb-e442d32bc206" path="/var/lib/kubelet/pods/b79438da-5595-4782-bbcb-e442d32bc206/volumes"
Mar 12 21:26:08.343434 master-0 kubenswrapper[31456]: I0312 21:26:08.343377 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-771d56ec-6f7c-4891-8052-556577fed26a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c4d34f1a-cd35-4884-8afc-4ae2cb2c6555\") pod \"glance-30e4b-default-external-api-0\" (UID: \"35a5b367-8419-4864-9317-7b78c50cad2d\") " pod="openstack/glance-30e4b-default-external-api-0"
Mar 12 21:26:08.609516 master-0 kubenswrapper[31456]: I0312 21:26:08.609386 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-30e4b-default-external-api-0"
Mar 12 21:26:10.894977 master-0 kubenswrapper[31456]: I0312 21:26:10.894899 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-stkxt"
Mar 12 21:26:10.981420 master-0 kubenswrapper[31456]: I0312 21:26:10.981046 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b466beef-2d58-41e2-b8cf-8090ab10be4e-scripts\") pod \"b466beef-2d58-41e2-b8cf-8090ab10be4e\" (UID: \"b466beef-2d58-41e2-b8cf-8090ab10be4e\") "
Mar 12 21:26:10.981420 master-0 kubenswrapper[31456]: I0312 21:26:10.981278 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vk4zl\" (UniqueName: \"kubernetes.io/projected/b466beef-2d58-41e2-b8cf-8090ab10be4e-kube-api-access-vk4zl\") pod \"b466beef-2d58-41e2-b8cf-8090ab10be4e\" (UID: \"b466beef-2d58-41e2-b8cf-8090ab10be4e\") "
Mar 12 21:26:10.981420 master-0 kubenswrapper[31456]: I0312 21:26:10.981374 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b466beef-2d58-41e2-b8cf-8090ab10be4e-combined-ca-bundle\") pod \"b466beef-2d58-41e2-b8cf-8090ab10be4e\" (UID: \"b466beef-2d58-41e2-b8cf-8090ab10be4e\") "
Mar 12 21:26:10.981706 master-0 kubenswrapper[31456]: I0312 21:26:10.981469 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b466beef-2d58-41e2-b8cf-8090ab10be4e-config-data\") pod \"b466beef-2d58-41e2-b8cf-8090ab10be4e\" (UID: \"b466beef-2d58-41e2-b8cf-8090ab10be4e\") "
Mar 12 21:26:10.981706 master-0 kubenswrapper[31456]: I0312 21:26:10.981560 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b466beef-2d58-41e2-b8cf-8090ab10be4e-logs\") pod \"b466beef-2d58-41e2-b8cf-8090ab10be4e\" (UID: \"b466beef-2d58-41e2-b8cf-8090ab10be4e\") "
Mar 12 21:26:10.982552 master-0 kubenswrapper[31456]: I0312 21:26:10.982516 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b466beef-2d58-41e2-b8cf-8090ab10be4e-logs" (OuterVolumeSpecName: "logs") pod "b466beef-2d58-41e2-b8cf-8090ab10be4e" (UID: "b466beef-2d58-41e2-b8cf-8090ab10be4e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 21:26:10.988013 master-0 kubenswrapper[31456]: I0312 21:26:10.987940 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b466beef-2d58-41e2-b8cf-8090ab10be4e-kube-api-access-vk4zl" (OuterVolumeSpecName: "kube-api-access-vk4zl") pod "b466beef-2d58-41e2-b8cf-8090ab10be4e" (UID: "b466beef-2d58-41e2-b8cf-8090ab10be4e"). InnerVolumeSpecName "kube-api-access-vk4zl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 21:26:11.001422 master-0 kubenswrapper[31456]: I0312 21:26:11.001363 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b466beef-2d58-41e2-b8cf-8090ab10be4e-scripts" (OuterVolumeSpecName: "scripts") pod "b466beef-2d58-41e2-b8cf-8090ab10be4e" (UID: "b466beef-2d58-41e2-b8cf-8090ab10be4e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:26:11.018569 master-0 kubenswrapper[31456]: I0312 21:26:11.018431 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b466beef-2d58-41e2-b8cf-8090ab10be4e-config-data" (OuterVolumeSpecName: "config-data") pod "b466beef-2d58-41e2-b8cf-8090ab10be4e" (UID: "b466beef-2d58-41e2-b8cf-8090ab10be4e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:26:11.022111 master-0 kubenswrapper[31456]: I0312 21:26:11.021954 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b466beef-2d58-41e2-b8cf-8090ab10be4e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b466beef-2d58-41e2-b8cf-8090ab10be4e" (UID: "b466beef-2d58-41e2-b8cf-8090ab10be4e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:26:11.085102 master-0 kubenswrapper[31456]: I0312 21:26:11.085036 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vk4zl\" (UniqueName: \"kubernetes.io/projected/b466beef-2d58-41e2-b8cf-8090ab10be4e-kube-api-access-vk4zl\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:11.085102 master-0 kubenswrapper[31456]: I0312 21:26:11.085095 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b466beef-2d58-41e2-b8cf-8090ab10be4e-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:11.085102 master-0 kubenswrapper[31456]: I0312 21:26:11.085109 31456 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b466beef-2d58-41e2-b8cf-8090ab10be4e-config-data\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:11.085445 master-0 kubenswrapper[31456]: I0312 21:26:11.085120 31456 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b466beef-2d58-41e2-b8cf-8090ab10be4e-logs\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:11.085445 master-0 kubenswrapper[31456]: I0312 21:26:11.085133 31456 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b466beef-2d58-41e2-b8cf-8090ab10be4e-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:11.509326 master-0 kubenswrapper[31456]: I0312 21:26:11.509251 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-stkxt" event={"ID":"b466beef-2d58-41e2-b8cf-8090ab10be4e","Type":"ContainerDied","Data":"943e54af7e968a6a8c4b70ab1d85c58a0e2a4bfdfc656ed65ee670bbfbb7d7dc"}
Mar 12 21:26:11.509326 master-0 kubenswrapper[31456]: I0312 21:26:11.509308 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="943e54af7e968a6a8c4b70ab1d85c58a0e2a4bfdfc656ed65ee670bbfbb7d7dc"
Mar 12 21:26:11.509326 master-0 kubenswrapper[31456]: I0312 21:26:11.509316 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-stkxt"
Mar 12 21:26:11.512330 master-0 kubenswrapper[31456]: I0312 21:26:11.512273 31456 generic.go:334] "Generic (PLEG): container finished" podID="d3b65a8f-9787-4ff9-91bc-a35ef39781ce" containerID="242fe52ac35236a90eedf0979b22b6148dd8cb3d2bc2da7d1e1ab1bcb1673c31" exitCode=0
Mar 12 21:26:11.512419 master-0 kubenswrapper[31456]: I0312 21:26:11.512382 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-sdtkg" event={"ID":"d3b65a8f-9787-4ff9-91bc-a35ef39781ce","Type":"ContainerDied","Data":"242fe52ac35236a90eedf0979b22b6148dd8cb3d2bc2da7d1e1ab1bcb1673c31"}
Mar 12 21:26:12.122954 master-0 kubenswrapper[31456]: I0312 21:26:12.122798 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-c76b45676-rfhd9"]
Mar 12 21:26:12.124154 master-0 kubenswrapper[31456]: E0312 21:26:12.124128 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b466beef-2d58-41e2-b8cf-8090ab10be4e" containerName="placement-db-sync"
Mar 12 21:26:12.124260 master-0 kubenswrapper[31456]: I0312 21:26:12.124245 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="b466beef-2d58-41e2-b8cf-8090ab10be4e" containerName="placement-db-sync"
Mar 12 21:26:12.125113 master-0 kubenswrapper[31456]: I0312 21:26:12.124620 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="b466beef-2d58-41e2-b8cf-8090ab10be4e" containerName="placement-db-sync"
Mar 12 21:26:12.126821 master-0 kubenswrapper[31456]: I0312 21:26:12.126781 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c76b45676-rfhd9"
Mar 12 21:26:12.132843 master-0 kubenswrapper[31456]: I0312 21:26:12.132763 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Mar 12 21:26:12.133070 master-0 kubenswrapper[31456]: I0312 21:26:12.133020 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Mar 12 21:26:12.135780 master-0 kubenswrapper[31456]: I0312 21:26:12.133366 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Mar 12 21:26:12.135780 master-0 kubenswrapper[31456]: I0312 21:26:12.133577 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Mar 12 21:26:12.170528 master-0 kubenswrapper[31456]: I0312 21:26:12.170422 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c76b45676-rfhd9"]
Mar 12 21:26:12.223830 master-0 kubenswrapper[31456]: I0312 21:26:12.219550 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/205534d7-c857-4999-8352-af039951ce48-logs\") pod \"placement-c76b45676-rfhd9\" (UID: \"205534d7-c857-4999-8352-af039951ce48\") " pod="openstack/placement-c76b45676-rfhd9"
Mar 12 21:26:12.223830 master-0 kubenswrapper[31456]: I0312 21:26:12.219652 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/205534d7-c857-4999-8352-af039951ce48-scripts\") pod \"placement-c76b45676-rfhd9\" (UID: \"205534d7-c857-4999-8352-af039951ce48\") " pod="openstack/placement-c76b45676-rfhd9"
Mar 12 21:26:12.223830 master-0 kubenswrapper[31456]: I0312 21:26:12.219680 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/205534d7-c857-4999-8352-af039951ce48-config-data\") pod \"placement-c76b45676-rfhd9\" (UID: \"205534d7-c857-4999-8352-af039951ce48\") " pod="openstack/placement-c76b45676-rfhd9"
Mar 12 21:26:12.223830 master-0 kubenswrapper[31456]: I0312 21:26:12.219703 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/205534d7-c857-4999-8352-af039951ce48-internal-tls-certs\") pod \"placement-c76b45676-rfhd9\" (UID: \"205534d7-c857-4999-8352-af039951ce48\") " pod="openstack/placement-c76b45676-rfhd9"
Mar 12 21:26:12.223830 master-0 kubenswrapper[31456]: I0312 21:26:12.219765 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6fxh\" (UniqueName: \"kubernetes.io/projected/205534d7-c857-4999-8352-af039951ce48-kube-api-access-d6fxh\") pod \"placement-c76b45676-rfhd9\" (UID: \"205534d7-c857-4999-8352-af039951ce48\") " pod="openstack/placement-c76b45676-rfhd9"
Mar 12 21:26:12.223830 master-0 kubenswrapper[31456]: I0312 21:26:12.219918 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/205534d7-c857-4999-8352-af039951ce48-combined-ca-bundle\") pod \"placement-c76b45676-rfhd9\" (UID: \"205534d7-c857-4999-8352-af039951ce48\") " pod="openstack/placement-c76b45676-rfhd9"
Mar 12 21:26:12.223830 master-0 kubenswrapper[31456]: I0312 21:26:12.220011 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/205534d7-c857-4999-8352-af039951ce48-public-tls-certs\") pod \"placement-c76b45676-rfhd9\" (UID: \"205534d7-c857-4999-8352-af039951ce48\") " pod="openstack/placement-c76b45676-rfhd9"
Mar 12 21:26:12.324835 master-0 kubenswrapper[31456]: I0312 21:26:12.324027 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/205534d7-c857-4999-8352-af039951ce48-logs\") pod \"placement-c76b45676-rfhd9\" (UID: \"205534d7-c857-4999-8352-af039951ce48\") " pod="openstack/placement-c76b45676-rfhd9"
Mar 12 21:26:12.324835 master-0 kubenswrapper[31456]: I0312 21:26:12.324085 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/205534d7-c857-4999-8352-af039951ce48-scripts\") pod \"placement-c76b45676-rfhd9\" (UID: \"205534d7-c857-4999-8352-af039951ce48\") " pod="openstack/placement-c76b45676-rfhd9"
Mar 12 21:26:12.324835 master-0 kubenswrapper[31456]: I0312 21:26:12.324112 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/205534d7-c857-4999-8352-af039951ce48-config-data\") pod \"placement-c76b45676-rfhd9\" (UID: \"205534d7-c857-4999-8352-af039951ce48\") " pod="openstack/placement-c76b45676-rfhd9"
Mar 12 21:26:12.324835 master-0 kubenswrapper[31456]: I0312 21:26:12.324133 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/205534d7-c857-4999-8352-af039951ce48-internal-tls-certs\") pod \"placement-c76b45676-rfhd9\" (UID: \"205534d7-c857-4999-8352-af039951ce48\") " pod="openstack/placement-c76b45676-rfhd9"
Mar 12 21:26:12.324835 master-0 kubenswrapper[31456]: I0312 21:26:12.324185 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6fxh\" (UniqueName: \"kubernetes.io/projected/205534d7-c857-4999-8352-af039951ce48-kube-api-access-d6fxh\") pod \"placement-c76b45676-rfhd9\" (UID: \"205534d7-c857-4999-8352-af039951ce48\") " pod="openstack/placement-c76b45676-rfhd9"
Mar 12 21:26:12.324835 master-0 kubenswrapper[31456]: I0312 21:26:12.324260 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/205534d7-c857-4999-8352-af039951ce48-combined-ca-bundle\") pod \"placement-c76b45676-rfhd9\" (UID: \"205534d7-c857-4999-8352-af039951ce48\") " pod="openstack/placement-c76b45676-rfhd9"
Mar 12 21:26:12.324835 master-0 kubenswrapper[31456]: I0312 21:26:12.324297 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/205534d7-c857-4999-8352-af039951ce48-public-tls-certs\") pod \"placement-c76b45676-rfhd9\" (UID: \"205534d7-c857-4999-8352-af039951ce48\") " pod="openstack/placement-c76b45676-rfhd9"
Mar 12 21:26:12.325231 master-0 kubenswrapper[31456]: I0312 21:26:12.325113 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/205534d7-c857-4999-8352-af039951ce48-logs\") pod \"placement-c76b45676-rfhd9\" (UID: \"205534d7-c857-4999-8352-af039951ce48\") " pod="openstack/placement-c76b45676-rfhd9"
Mar 12 21:26:12.333823 master-0 kubenswrapper[31456]: I0312 21:26:12.332615 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/205534d7-c857-4999-8352-af039951ce48-config-data\") pod \"placement-c76b45676-rfhd9\" (UID: \"205534d7-c857-4999-8352-af039951ce48\") " pod="openstack/placement-c76b45676-rfhd9"
Mar 12 21:26:12.334271 master-0 kubenswrapper[31456]: I0312 21:26:12.334227 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/205534d7-c857-4999-8352-af039951ce48-scripts\") pod \"placement-c76b45676-rfhd9\" (UID: \"205534d7-c857-4999-8352-af039951ce48\") " pod="openstack/placement-c76b45676-rfhd9"
Mar 12 21:26:12.341353 master-0 kubenswrapper[31456]: I0312 21:26:12.341323 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/205534d7-c857-4999-8352-af039951ce48-public-tls-certs\") pod \"placement-c76b45676-rfhd9\" (UID: \"205534d7-c857-4999-8352-af039951ce48\") " pod="openstack/placement-c76b45676-rfhd9"
Mar 12 21:26:12.341533 master-0 kubenswrapper[31456]: I0312 21:26:12.341494 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/205534d7-c857-4999-8352-af039951ce48-internal-tls-certs\") pod \"placement-c76b45676-rfhd9\" (UID: \"205534d7-c857-4999-8352-af039951ce48\") " pod="openstack/placement-c76b45676-rfhd9"
Mar 12 21:26:12.348527 master-0 kubenswrapper[31456]: I0312 21:26:12.348484 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/205534d7-c857-4999-8352-af039951ce48-combined-ca-bundle\") pod \"placement-c76b45676-rfhd9\" (UID: \"205534d7-c857-4999-8352-af039951ce48\") " pod="openstack/placement-c76b45676-rfhd9"
Mar 12 21:26:12.368939 master-0 kubenswrapper[31456]: I0312 21:26:12.367459 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6fxh\" (UniqueName: \"kubernetes.io/projected/205534d7-c857-4999-8352-af039951ce48-kube-api-access-d6fxh\") pod \"placement-c76b45676-rfhd9\" (UID: \"205534d7-c857-4999-8352-af039951ce48\") " pod="openstack/placement-c76b45676-rfhd9"
Mar 12 21:26:12.442884 master-0 kubenswrapper[31456]: I0312 21:26:12.442747 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c76b45676-rfhd9"
Mar 12 21:26:19.938021 master-0 kubenswrapper[31456]: I0312 21:26:19.937523 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-sdtkg"
Mar 12 21:26:20.144000 master-0 kubenswrapper[31456]: I0312 21:26:20.143820 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3b65a8f-9787-4ff9-91bc-a35ef39781ce-combined-ca-bundle\") pod \"d3b65a8f-9787-4ff9-91bc-a35ef39781ce\" (UID: \"d3b65a8f-9787-4ff9-91bc-a35ef39781ce\") "
Mar 12 21:26:20.144000 master-0 kubenswrapper[31456]: I0312 21:26:20.143913 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3b65a8f-9787-4ff9-91bc-a35ef39781ce-scripts\") pod \"d3b65a8f-9787-4ff9-91bc-a35ef39781ce\" (UID: \"d3b65a8f-9787-4ff9-91bc-a35ef39781ce\") "
Mar 12 21:26:20.144289 master-0 kubenswrapper[31456]: I0312 21:26:20.144041 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdc5x\" (UniqueName: \"kubernetes.io/projected/d3b65a8f-9787-4ff9-91bc-a35ef39781ce-kube-api-access-qdc5x\") pod \"d3b65a8f-9787-4ff9-91bc-a35ef39781ce\" (UID: \"d3b65a8f-9787-4ff9-91bc-a35ef39781ce\") "
Mar 12 21:26:20.144289 master-0 kubenswrapper[31456]: I0312 21:26:20.144144 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d3b65a8f-9787-4ff9-91bc-a35ef39781ce-credential-keys\") pod \"d3b65a8f-9787-4ff9-91bc-a35ef39781ce\" (UID: \"d3b65a8f-9787-4ff9-91bc-a35ef39781ce\") "
Mar 12 21:26:20.144289 master-0 kubenswrapper[31456]: I0312 21:26:20.144178 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d3b65a8f-9787-4ff9-91bc-a35ef39781ce-fernet-keys\") pod \"d3b65a8f-9787-4ff9-91bc-a35ef39781ce\" (UID: \"d3b65a8f-9787-4ff9-91bc-a35ef39781ce\") "
Mar 12 21:26:20.144289 master-0 kubenswrapper[31456]: I0312 21:26:20.144245 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3b65a8f-9787-4ff9-91bc-a35ef39781ce-config-data\") pod \"d3b65a8f-9787-4ff9-91bc-a35ef39781ce\" (UID: \"d3b65a8f-9787-4ff9-91bc-a35ef39781ce\") "
Mar 12 21:26:20.147914 master-0 kubenswrapper[31456]: I0312 21:26:20.147855 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3b65a8f-9787-4ff9-91bc-a35ef39781ce-scripts" (OuterVolumeSpecName: "scripts") pod "d3b65a8f-9787-4ff9-91bc-a35ef39781ce" (UID: "d3b65a8f-9787-4ff9-91bc-a35ef39781ce"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:26:20.149279 master-0 kubenswrapper[31456]: I0312 21:26:20.149239 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3b65a8f-9787-4ff9-91bc-a35ef39781ce-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d3b65a8f-9787-4ff9-91bc-a35ef39781ce" (UID: "d3b65a8f-9787-4ff9-91bc-a35ef39781ce"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:26:20.149347 master-0 kubenswrapper[31456]: I0312 21:26:20.149313 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3b65a8f-9787-4ff9-91bc-a35ef39781ce-kube-api-access-qdc5x" (OuterVolumeSpecName: "kube-api-access-qdc5x") pod "d3b65a8f-9787-4ff9-91bc-a35ef39781ce" (UID: "d3b65a8f-9787-4ff9-91bc-a35ef39781ce"). InnerVolumeSpecName "kube-api-access-qdc5x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 21:26:20.151087 master-0 kubenswrapper[31456]: I0312 21:26:20.150999 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3b65a8f-9787-4ff9-91bc-a35ef39781ce-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d3b65a8f-9787-4ff9-91bc-a35ef39781ce" (UID: "d3b65a8f-9787-4ff9-91bc-a35ef39781ce"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:26:20.178673 master-0 kubenswrapper[31456]: I0312 21:26:20.178604 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3b65a8f-9787-4ff9-91bc-a35ef39781ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3b65a8f-9787-4ff9-91bc-a35ef39781ce" (UID: "d3b65a8f-9787-4ff9-91bc-a35ef39781ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:26:20.179014 master-0 kubenswrapper[31456]: I0312 21:26:20.178964 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3b65a8f-9787-4ff9-91bc-a35ef39781ce-config-data" (OuterVolumeSpecName: "config-data") pod "d3b65a8f-9787-4ff9-91bc-a35ef39781ce" (UID: "d3b65a8f-9787-4ff9-91bc-a35ef39781ce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:26:20.251635 master-0 kubenswrapper[31456]: I0312 21:26:20.247848 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3b65a8f-9787-4ff9-91bc-a35ef39781ce-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:20.251635 master-0 kubenswrapper[31456]: I0312 21:26:20.247900 31456 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3b65a8f-9787-4ff9-91bc-a35ef39781ce-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:20.251635 master-0 kubenswrapper[31456]: I0312 21:26:20.247916 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdc5x\" (UniqueName: \"kubernetes.io/projected/d3b65a8f-9787-4ff9-91bc-a35ef39781ce-kube-api-access-qdc5x\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:20.251635 master-0 kubenswrapper[31456]: I0312 21:26:20.247930 31456 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d3b65a8f-9787-4ff9-91bc-a35ef39781ce-credential-keys\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:20.251635 master-0 kubenswrapper[31456]: I0312 21:26:20.248022 31456 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d3b65a8f-9787-4ff9-91bc-a35ef39781ce-fernet-keys\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:20.251635 master-0 kubenswrapper[31456]: I0312 21:26:20.248199 31456 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3b65a8f-9787-4ff9-91bc-a35ef39781ce-config-data\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:20.463622 master-0 kubenswrapper[31456]: I0312 21:26:20.463525 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-30e4b-default-internal-api-0"]
Mar 12 21:26:20.619285 master-0 kubenswrapper[31456]: I0312 21:26:20.619202 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-sdtkg" event={"ID":"d3b65a8f-9787-4ff9-91bc-a35ef39781ce","Type":"ContainerDied","Data":"93a0079b6445f2910c04e9157c1678067360d48eb54952b98096e6b51263d380"}
Mar 12 21:26:20.619285 master-0 kubenswrapper[31456]: I0312 21:26:20.619258 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93a0079b6445f2910c04e9157c1678067360d48eb54952b98096e6b51263d380"
Mar 12 21:26:20.619285 master-0 kubenswrapper[31456]: I0312 21:26:20.619280 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-sdtkg"
Mar 12 21:26:21.238183 master-0 kubenswrapper[31456]: I0312 21:26:21.238083 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-9f5c477c4-jk268"]
Mar 12 21:26:21.257840 master-0 kubenswrapper[31456]: E0312 21:26:21.256498 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3b65a8f-9787-4ff9-91bc-a35ef39781ce" containerName="keystone-bootstrap"
Mar 12 21:26:21.257840 master-0 kubenswrapper[31456]: I0312 21:26:21.256551 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3b65a8f-9787-4ff9-91bc-a35ef39781ce" containerName="keystone-bootstrap"
Mar 12 21:26:21.257840 master-0 kubenswrapper[31456]: I0312 21:26:21.257130 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3b65a8f-9787-4ff9-91bc-a35ef39781ce" containerName="keystone-bootstrap"
Mar 12 21:26:21.258152 master-0 kubenswrapper[31456]: I0312 21:26:21.258075 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9f5c477c4-jk268"
Mar 12 21:26:21.262781 master-0 kubenswrapper[31456]: I0312 21:26:21.262630 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Mar 12 21:26:21.262939 master-0 kubenswrapper[31456]: I0312 21:26:21.262795 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Mar 12 21:26:21.262939 master-0 kubenswrapper[31456]: I0312 21:26:21.262909 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Mar 12 21:26:21.263064 master-0 kubenswrapper[31456]: I0312 21:26:21.263037 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Mar 12 21:26:21.263612 master-0 kubenswrapper[31456]: I0312 21:26:21.263570 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Mar 12 21:26:21.272578 master-0 kubenswrapper[31456]: I0312 21:26:21.269256 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-9f5c477c4-jk268"]
Mar 12 21:26:21.306896 master-0 kubenswrapper[31456]: I0312 21:26:21.306771 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e600ff2e-e9f1-4f1c-86a7-cc278915ad77-public-tls-certs\") pod \"keystone-9f5c477c4-jk268\" (UID: \"e600ff2e-e9f1-4f1c-86a7-cc278915ad77\") " pod="openstack/keystone-9f5c477c4-jk268"
Mar 12 21:26:21.306896 master-0 kubenswrapper[31456]: I0312 21:26:21.306852 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e600ff2e-e9f1-4f1c-86a7-cc278915ad77-credential-keys\") pod \"keystone-9f5c477c4-jk268\" (UID: \"e600ff2e-e9f1-4f1c-86a7-cc278915ad77\") " pod="openstack/keystone-9f5c477c4-jk268"
Mar 12 21:26:21.307716 master-0 kubenswrapper[31456]: I0312 21:26:21.307094 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e600ff2e-e9f1-4f1c-86a7-cc278915ad77-fernet-keys\") pod \"keystone-9f5c477c4-jk268\" (UID: \"e600ff2e-e9f1-4f1c-86a7-cc278915ad77\") " pod="openstack/keystone-9f5c477c4-jk268"
Mar 12 21:26:21.307716 master-0 kubenswrapper[31456]: I0312 21:26:21.307178 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e600ff2e-e9f1-4f1c-86a7-cc278915ad77-combined-ca-bundle\") pod \"keystone-9f5c477c4-jk268\" (UID: \"e600ff2e-e9f1-4f1c-86a7-cc278915ad77\") " pod="openstack/keystone-9f5c477c4-jk268"
Mar 12 21:26:21.307716 master-0 kubenswrapper[31456]: I0312 21:26:21.307203 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e600ff2e-e9f1-4f1c-86a7-cc278915ad77-scripts\") pod \"keystone-9f5c477c4-jk268\" (UID: \"e600ff2e-e9f1-4f1c-86a7-cc278915ad77\") " pod="openstack/keystone-9f5c477c4-jk268"
Mar 12 21:26:21.307716 master-0 kubenswrapper[31456]: I0312 21:26:21.307289 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e600ff2e-e9f1-4f1c-86a7-cc278915ad77-config-data\") pod \"keystone-9f5c477c4-jk268\" (UID: \"e600ff2e-e9f1-4f1c-86a7-cc278915ad77\") " pod="openstack/keystone-9f5c477c4-jk268"
Mar 12 21:26:21.307716 master-0 kubenswrapper[31456]: I0312 21:26:21.307306 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gc2t\" (UniqueName: \"kubernetes.io/projected/e600ff2e-e9f1-4f1c-86a7-cc278915ad77-kube-api-access-5gc2t\") pod \"keystone-9f5c477c4-jk268\" (UID: \"e600ff2e-e9f1-4f1c-86a7-cc278915ad77\") " pod="openstack/keystone-9f5c477c4-jk268"
Mar 12 21:26:21.307716 master-0 kubenswrapper[31456]: I0312 21:26:21.307324 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e600ff2e-e9f1-4f1c-86a7-cc278915ad77-internal-tls-certs\") pod \"keystone-9f5c477c4-jk268\" (UID: \"e600ff2e-e9f1-4f1c-86a7-cc278915ad77\") " pod="openstack/keystone-9f5c477c4-jk268"
Mar 12 21:26:21.324074 master-0 kubenswrapper[31456]: W0312 21:26:21.324016 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7a5e241_7146_489b_b32b_01218601b895.slice/crio-3022b89911ba17ac12ffeaeb3177cde1d07fde534d7321a8dab8ab76e7c56a59 WatchSource:0}: Error finding container 3022b89911ba17ac12ffeaeb3177cde1d07fde534d7321a8dab8ab76e7c56a59: Status 404 returned error can't find the container with id 3022b89911ba17ac12ffeaeb3177cde1d07fde534d7321a8dab8ab76e7c56a59
Mar 12 21:26:21.409069 master-0 kubenswrapper[31456]: I0312 21:26:21.409032 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e600ff2e-e9f1-4f1c-86a7-cc278915ad77-config-data\") pod \"keystone-9f5c477c4-jk268\" (UID: \"e600ff2e-e9f1-4f1c-86a7-cc278915ad77\") " pod="openstack/keystone-9f5c477c4-jk268"
Mar 12 21:26:21.409173 master-0 kubenswrapper[31456]: I0312 21:26:21.409100 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gc2t\" (UniqueName: \"kubernetes.io/projected/e600ff2e-e9f1-4f1c-86a7-cc278915ad77-kube-api-access-5gc2t\") pod \"keystone-9f5c477c4-jk268\" (UID: \"e600ff2e-e9f1-4f1c-86a7-cc278915ad77\") " pod="openstack/keystone-9f5c477c4-jk268"
Mar 12 21:26:21.409353 master-0 kubenswrapper[31456]: I0312 21:26:21.409296 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e600ff2e-e9f1-4f1c-86a7-cc278915ad77-internal-tls-certs\") pod \"keystone-9f5c477c4-jk268\" (UID: \"e600ff2e-e9f1-4f1c-86a7-cc278915ad77\") " pod="openstack/keystone-9f5c477c4-jk268"
Mar 12 21:26:21.409420 master-0 kubenswrapper[31456]: I0312 21:26:21.409411 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e600ff2e-e9f1-4f1c-86a7-cc278915ad77-public-tls-certs\") pod \"keystone-9f5c477c4-jk268\" (UID: \"e600ff2e-e9f1-4f1c-86a7-cc278915ad77\") " pod="openstack/keystone-9f5c477c4-jk268"
Mar 12 21:26:21.409510 master-0 kubenswrapper[31456]: I0312 21:26:21.409458 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e600ff2e-e9f1-4f1c-86a7-cc278915ad77-credential-keys\") pod \"keystone-9f5c477c4-jk268\" (UID: \"e600ff2e-e9f1-4f1c-86a7-cc278915ad77\") " pod="openstack/keystone-9f5c477c4-jk268"
Mar 12 21:26:21.409572 master-0 kubenswrapper[31456]: I0312 21:26:21.409545 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e600ff2e-e9f1-4f1c-86a7-cc278915ad77-fernet-keys\") pod \"keystone-9f5c477c4-jk268\" (UID: \"e600ff2e-e9f1-4f1c-86a7-cc278915ad77\") " pod="openstack/keystone-9f5c477c4-jk268"
Mar 12 21:26:21.409649 master-0 kubenswrapper[31456]: I0312 21:26:21.409629 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e600ff2e-e9f1-4f1c-86a7-cc278915ad77-combined-ca-bundle\") pod \"keystone-9f5c477c4-jk268\" (UID: \"e600ff2e-e9f1-4f1c-86a7-cc278915ad77\") " pod="openstack/keystone-9f5c477c4-jk268"
Mar 12 21:26:21.409702 master-0 kubenswrapper[31456]: I0312 21:26:21.409659 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e600ff2e-e9f1-4f1c-86a7-cc278915ad77-scripts\") pod \"keystone-9f5c477c4-jk268\" (UID: \"e600ff2e-e9f1-4f1c-86a7-cc278915ad77\") " pod="openstack/keystone-9f5c477c4-jk268"
Mar 12 21:26:21.413268 master-0 kubenswrapper[31456]: I0312 21:26:21.413244 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e600ff2e-e9f1-4f1c-86a7-cc278915ad77-public-tls-certs\") pod \"keystone-9f5c477c4-jk268\" (UID: \"e600ff2e-e9f1-4f1c-86a7-cc278915ad77\") " pod="openstack/keystone-9f5c477c4-jk268"
Mar 12 21:26:21.424840 master-0 kubenswrapper[31456]: I0312 21:26:21.424615 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e600ff2e-e9f1-4f1c-86a7-cc278915ad77-fernet-keys\") pod \"keystone-9f5c477c4-jk268\" (UID: \"e600ff2e-e9f1-4f1c-86a7-cc278915ad77\") " pod="openstack/keystone-9f5c477c4-jk268"
Mar 12 21:26:21.424840 master-0 kubenswrapper[31456]: I0312 21:26:21.424760 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e600ff2e-e9f1-4f1c-86a7-cc278915ad77-internal-tls-certs\") pod \"keystone-9f5c477c4-jk268\" (UID: \"e600ff2e-e9f1-4f1c-86a7-cc278915ad77\") " pod="openstack/keystone-9f5c477c4-jk268"
Mar 12 21:26:21.424840 master-0 kubenswrapper[31456]: I0312 21:26:21.424796 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e600ff2e-e9f1-4f1c-86a7-cc278915ad77-credential-keys\") pod \"keystone-9f5c477c4-jk268\" (UID: \"e600ff2e-e9f1-4f1c-86a7-cc278915ad77\") " pod="openstack/keystone-9f5c477c4-jk268"
Mar 12 21:26:21.425133 master-0 kubenswrapper[31456]: I0312 21:26:21.425079 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e600ff2e-e9f1-4f1c-86a7-cc278915ad77-config-data\") pod \"keystone-9f5c477c4-jk268\" (UID: \"e600ff2e-e9f1-4f1c-86a7-cc278915ad77\") " pod="openstack/keystone-9f5c477c4-jk268"
Mar 12 21:26:21.425697 master-0 kubenswrapper[31456]: I0312 21:26:21.425680 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e600ff2e-e9f1-4f1c-86a7-cc278915ad77-combined-ca-bundle\") pod \"keystone-9f5c477c4-jk268\" (UID: \"e600ff2e-e9f1-4f1c-86a7-cc278915ad77\") " pod="openstack/keystone-9f5c477c4-jk268"
Mar 12 21:26:21.427723 master-0 kubenswrapper[31456]: I0312 21:26:21.427689 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e600ff2e-e9f1-4f1c-86a7-cc278915ad77-scripts\") pod \"keystone-9f5c477c4-jk268\" (UID: \"e600ff2e-e9f1-4f1c-86a7-cc278915ad77\") " pod="openstack/keystone-9f5c477c4-jk268"
Mar 12 21:26:21.427937 master-0 kubenswrapper[31456]: I0312 21:26:21.427906 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gc2t\" (UniqueName: \"kubernetes.io/projected/e600ff2e-e9f1-4f1c-86a7-cc278915ad77-kube-api-access-5gc2t\") pod \"keystone-9f5c477c4-jk268\" (UID: \"e600ff2e-e9f1-4f1c-86a7-cc278915ad77\") " pod="openstack/keystone-9f5c477c4-jk268"
Mar 12 21:26:21.595887 master-0 kubenswrapper[31456]: I0312 21:26:21.594868 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9f5c477c4-jk268"
Mar 12 21:26:21.636789 master-0 kubenswrapper[31456]: I0312 21:26:21.636731 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-30e4b-default-internal-api-0" event={"ID":"a7a5e241-7146-489b-b32b-01218601b895","Type":"ContainerStarted","Data":"3022b89911ba17ac12ffeaeb3177cde1d07fde534d7321a8dab8ab76e7c56a59"}
Mar 12 21:26:21.982097 master-0 kubenswrapper[31456]: I0312 21:26:21.982054 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c76b45676-rfhd9"]
Mar 12 21:26:22.000586 master-0 kubenswrapper[31456]: W0312 21:26:22.000421 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35a5b367_8419_4864_9317_7b78c50cad2d.slice/crio-77a709320345d7e7d74705720966cf45a4deb79fbb79f5916f2f9e376025b471 WatchSource:0}: Error finding container 77a709320345d7e7d74705720966cf45a4deb79fbb79f5916f2f9e376025b471: Status 404 returned error can't find the container with id 77a709320345d7e7d74705720966cf45a4deb79fbb79f5916f2f9e376025b471
Mar 12 21:26:22.003536 master-0 kubenswrapper[31456]: I0312 21:26:22.003498 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-30e4b-default-external-api-0"]
Mar 12 21:26:22.123524 master-0 kubenswrapper[31456]: I0312 21:26:22.122490 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-9f5c477c4-jk268"]
Mar 12 21:26:22.131054 master-0 kubenswrapper[31456]: W0312 21:26:22.129765 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode600ff2e_e9f1_4f1c_86a7_cc278915ad77.slice/crio-ea2e4b931b8c43d5a6d93f96b2a087d150053cb3fa1ba256b70c6577674db2ba WatchSource:0}: Error finding container ea2e4b931b8c43d5a6d93f96b2a087d150053cb3fa1ba256b70c6577674db2ba: Status 404 returned error can't find the container with id 
ea2e4b931b8c43d5a6d93f96b2a087d150053cb3fa1ba256b70c6577674db2ba Mar 12 21:26:22.661493 master-0 kubenswrapper[31456]: I0312 21:26:22.661444 31456 generic.go:334] "Generic (PLEG): container finished" podID="64b63a16-1c32-45a8-92f8-8ce00c2c6be8" containerID="9436f9ca665a3e98dd43a40bba94f9d50dd6dc1cb3339c8ff993b9d754749ac2" exitCode=0 Mar 12 21:26:22.661965 master-0 kubenswrapper[31456]: I0312 21:26:22.661535 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-cf2v5" event={"ID":"64b63a16-1c32-45a8-92f8-8ce00c2c6be8","Type":"ContainerDied","Data":"9436f9ca665a3e98dd43a40bba94f9d50dd6dc1cb3339c8ff993b9d754749ac2"} Mar 12 21:26:22.664455 master-0 kubenswrapper[31456]: I0312 21:26:22.664428 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-30e4b-default-external-api-0" event={"ID":"35a5b367-8419-4864-9317-7b78c50cad2d","Type":"ContainerStarted","Data":"ba8530f2f78010e06d6c86db3104bf8949d730b429ac2e43b76e663f1b5dddbc"} Mar 12 21:26:22.664536 master-0 kubenswrapper[31456]: I0312 21:26:22.664456 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-30e4b-default-external-api-0" event={"ID":"35a5b367-8419-4864-9317-7b78c50cad2d","Type":"ContainerStarted","Data":"77a709320345d7e7d74705720966cf45a4deb79fbb79f5916f2f9e376025b471"} Mar 12 21:26:22.669134 master-0 kubenswrapper[31456]: I0312 21:26:22.669066 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9f5c477c4-jk268" event={"ID":"e600ff2e-e9f1-4f1c-86a7-cc278915ad77","Type":"ContainerStarted","Data":"6d3da4843bb2da6ffc8a5edc9a269bb0f04a74010c9d0cbec3ad14f9643fb323"} Mar 12 21:26:22.669134 master-0 kubenswrapper[31456]: I0312 21:26:22.669113 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9f5c477c4-jk268" event={"ID":"e600ff2e-e9f1-4f1c-86a7-cc278915ad77","Type":"ContainerStarted","Data":"ea2e4b931b8c43d5a6d93f96b2a087d150053cb3fa1ba256b70c6577674db2ba"} Mar 12 21:26:22.670589 
master-0 kubenswrapper[31456]: I0312 21:26:22.669702 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-9f5c477c4-jk268" Mar 12 21:26:22.671150 master-0 kubenswrapper[31456]: I0312 21:26:22.671106 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-30e4b-default-internal-api-0" event={"ID":"a7a5e241-7146-489b-b32b-01218601b895","Type":"ContainerStarted","Data":"4d3b0e96c1344df5da8bdeecb9531de6467994887ed3979c2ec39258b249f08a"} Mar 12 21:26:22.675626 master-0 kubenswrapper[31456]: I0312 21:26:22.675597 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c76b45676-rfhd9" event={"ID":"205534d7-c857-4999-8352-af039951ce48","Type":"ContainerStarted","Data":"1f48d28ae3c63c4e8d566362287a7917c8e5cd496a34ef1f771eb022fd9c7ae7"} Mar 12 21:26:22.675626 master-0 kubenswrapper[31456]: I0312 21:26:22.675620 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c76b45676-rfhd9" event={"ID":"205534d7-c857-4999-8352-af039951ce48","Type":"ContainerStarted","Data":"3f751c249ba0054b38eacd67b6f5916bd4354af3bd74b44420200444714551c9"} Mar 12 21:26:22.675783 master-0 kubenswrapper[31456]: I0312 21:26:22.675631 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c76b45676-rfhd9" event={"ID":"205534d7-c857-4999-8352-af039951ce48","Type":"ContainerStarted","Data":"94624f48e9d67803e576bcdc7e65a35641d2c577953fe412df0a6befc3c33816"} Mar 12 21:26:22.675783 master-0 kubenswrapper[31456]: I0312 21:26:22.675724 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-c76b45676-rfhd9" Mar 12 21:26:22.675783 master-0 kubenswrapper[31456]: I0312 21:26:22.675736 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-c76b45676-rfhd9" Mar 12 21:26:22.677230 master-0 kubenswrapper[31456]: I0312 21:26:22.677188 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-7fa7f-db-sync-v8z2w" event={"ID":"9b9d5522-06bf-4e0e-a3fc-ab594cb040a1","Type":"ContainerStarted","Data":"ced737fffae6cb52f2e71516a383f3bc81ed8d30f3c2cfa34fc82780dde4441f"} Mar 12 21:26:22.723080 master-0 kubenswrapper[31456]: I0312 21:26:22.722482 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-c76b45676-rfhd9" podStartSLOduration=10.722457308 podStartE2EDuration="10.722457308s" podCreationTimestamp="2026-03-12 21:26:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:26:22.711250817 +0000 UTC m=+1043.785856145" watchObservedRunningTime="2026-03-12 21:26:22.722457308 +0000 UTC m=+1043.797062656" Mar 12 21:26:22.749592 master-0 kubenswrapper[31456]: I0312 21:26:22.749514 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-7fa7f-db-sync-v8z2w" podStartSLOduration=3.347417577 podStartE2EDuration="31.749496973s" podCreationTimestamp="2026-03-12 21:25:51 +0000 UTC" firstStartedPulling="2026-03-12 21:25:53.039487672 +0000 UTC m=+1014.114093000" lastFinishedPulling="2026-03-12 21:26:21.441567068 +0000 UTC m=+1042.516172396" observedRunningTime="2026-03-12 21:26:22.73039383 +0000 UTC m=+1043.804999158" watchObservedRunningTime="2026-03-12 21:26:22.749496973 +0000 UTC m=+1043.824102301" Mar 12 21:26:22.768884 master-0 kubenswrapper[31456]: I0312 21:26:22.767735 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-9f5c477c4-jk268" podStartSLOduration=1.767715774 podStartE2EDuration="1.767715774s" podCreationTimestamp="2026-03-12 21:26:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:26:22.753459679 +0000 UTC m=+1043.828065027" watchObservedRunningTime="2026-03-12 21:26:22.767715774 +0000 UTC m=+1043.842321092" Mar 
12 21:26:23.690797 master-0 kubenswrapper[31456]: I0312 21:26:23.690754 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-cf2v5" event={"ID":"64b63a16-1c32-45a8-92f8-8ce00c2c6be8","Type":"ContainerStarted","Data":"2b1cb764ba0198fdcc3a1a8ca42b9161a896f0fbd20c21a3fe120df1d21a60f3"} Mar 12 21:26:23.694000 master-0 kubenswrapper[31456]: I0312 21:26:23.693938 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-30e4b-default-external-api-0" event={"ID":"35a5b367-8419-4864-9317-7b78c50cad2d","Type":"ContainerStarted","Data":"76119fa19412e6d332c800f93cccb67214a613c3a21e08876e4c96a60312f18b"} Mar 12 21:26:23.698437 master-0 kubenswrapper[31456]: I0312 21:26:23.698188 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-30e4b-default-internal-api-0" event={"ID":"a7a5e241-7146-489b-b32b-01218601b895","Type":"ContainerStarted","Data":"10e54504f9f158d2ff034d14f847a2344e2841dae80b2aedb91058874103c1ad"} Mar 12 21:26:23.825122 master-0 kubenswrapper[31456]: I0312 21:26:23.824845 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-db-sync-cf2v5" podStartSLOduration=4.390132056 podStartE2EDuration="22.824824879s" podCreationTimestamp="2026-03-12 21:26:01 +0000 UTC" firstStartedPulling="2026-03-12 21:26:02.968464634 +0000 UTC m=+1024.043069962" lastFinishedPulling="2026-03-12 21:26:21.403157457 +0000 UTC m=+1042.477762785" observedRunningTime="2026-03-12 21:26:23.796152855 +0000 UTC m=+1044.870758193" watchObservedRunningTime="2026-03-12 21:26:23.824824879 +0000 UTC m=+1044.899430217" Mar 12 21:26:23.877636 master-0 kubenswrapper[31456]: I0312 21:26:23.877338 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-30e4b-default-external-api-0" podStartSLOduration=17.877308809 podStartE2EDuration="17.877308809s" podCreationTimestamp="2026-03-12 21:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:26:23.853478322 +0000 UTC m=+1044.928083690" watchObservedRunningTime="2026-03-12 21:26:23.877308809 +0000 UTC m=+1044.951914167" Mar 12 21:26:23.986269 master-0 kubenswrapper[31456]: I0312 21:26:23.986103 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-30e4b-default-internal-api-0" podStartSLOduration=19.986079082 podStartE2EDuration="19.986079082s" podCreationTimestamp="2026-03-12 21:26:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:26:23.969973682 +0000 UTC m=+1045.044579020" watchObservedRunningTime="2026-03-12 21:26:23.986079082 +0000 UTC m=+1045.060684410" Mar 12 21:26:25.726542 master-0 kubenswrapper[31456]: I0312 21:26:25.726475 31456 generic.go:334] "Generic (PLEG): container finished" podID="fdf62a30-2c59-4043-99d7-b51fe604f823" containerID="fa51060e34dcf4d112ce1124184c5ff33b338b562fbf68e12f45476b0eda6c20" exitCode=0 Mar 12 21:26:25.727215 master-0 kubenswrapper[31456]: I0312 21:26:25.726574 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qs8v4" event={"ID":"fdf62a30-2c59-4043-99d7-b51fe604f823","Type":"ContainerDied","Data":"fa51060e34dcf4d112ce1124184c5ff33b338b562fbf68e12f45476b0eda6c20"} Mar 12 21:26:26.758403 master-0 kubenswrapper[31456]: I0312 21:26:26.758319 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:26:26.758403 master-0 kubenswrapper[31456]: I0312 21:26:26.758403 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:26:26.817260 master-0 kubenswrapper[31456]: I0312 21:26:26.817178 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:26:26.818445 master-0 kubenswrapper[31456]: I0312 21:26:26.817623 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:26:27.227189 master-0 kubenswrapper[31456]: I0312 21:26:27.227110 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qs8v4" Mar 12 21:26:27.250903 master-0 kubenswrapper[31456]: I0312 21:26:27.249025 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdf62a30-2c59-4043-99d7-b51fe604f823-combined-ca-bundle\") pod \"fdf62a30-2c59-4043-99d7-b51fe604f823\" (UID: \"fdf62a30-2c59-4043-99d7-b51fe604f823\") " Mar 12 21:26:27.250903 master-0 kubenswrapper[31456]: I0312 21:26:27.249660 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fdf62a30-2c59-4043-99d7-b51fe604f823-config\") pod \"fdf62a30-2c59-4043-99d7-b51fe604f823\" (UID: \"fdf62a30-2c59-4043-99d7-b51fe604f823\") " Mar 12 21:26:27.250903 master-0 kubenswrapper[31456]: I0312 21:26:27.249853 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvcdn\" (UniqueName: \"kubernetes.io/projected/fdf62a30-2c59-4043-99d7-b51fe604f823-kube-api-access-kvcdn\") pod \"fdf62a30-2c59-4043-99d7-b51fe604f823\" (UID: \"fdf62a30-2c59-4043-99d7-b51fe604f823\") " Mar 12 21:26:27.256838 master-0 kubenswrapper[31456]: I0312 21:26:27.256250 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdf62a30-2c59-4043-99d7-b51fe604f823-kube-api-access-kvcdn" (OuterVolumeSpecName: "kube-api-access-kvcdn") pod "fdf62a30-2c59-4043-99d7-b51fe604f823" (UID: "fdf62a30-2c59-4043-99d7-b51fe604f823"). InnerVolumeSpecName "kube-api-access-kvcdn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:26:27.291840 master-0 kubenswrapper[31456]: I0312 21:26:27.291182 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdf62a30-2c59-4043-99d7-b51fe604f823-config" (OuterVolumeSpecName: "config") pod "fdf62a30-2c59-4043-99d7-b51fe604f823" (UID: "fdf62a30-2c59-4043-99d7-b51fe604f823"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:26:27.299949 master-0 kubenswrapper[31456]: I0312 21:26:27.297166 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdf62a30-2c59-4043-99d7-b51fe604f823-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fdf62a30-2c59-4043-99d7-b51fe604f823" (UID: "fdf62a30-2c59-4043-99d7-b51fe604f823"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:26:27.352307 master-0 kubenswrapper[31456]: I0312 21:26:27.352229 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdf62a30-2c59-4043-99d7-b51fe604f823-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:27.352307 master-0 kubenswrapper[31456]: I0312 21:26:27.352285 31456 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/fdf62a30-2c59-4043-99d7-b51fe604f823-config\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:27.352307 master-0 kubenswrapper[31456]: I0312 21:26:27.352295 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvcdn\" (UniqueName: \"kubernetes.io/projected/fdf62a30-2c59-4043-99d7-b51fe604f823-kube-api-access-kvcdn\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:27.758020 master-0 kubenswrapper[31456]: I0312 21:26:27.757948 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qs8v4" 
event={"ID":"fdf62a30-2c59-4043-99d7-b51fe604f823","Type":"ContainerDied","Data":"224541f9aa782302ac73456f82b84c614df3c424954fd2545a50b2adf7660d0c"} Mar 12 21:26:27.758020 master-0 kubenswrapper[31456]: I0312 21:26:27.757998 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="224541f9aa782302ac73456f82b84c614df3c424954fd2545a50b2adf7660d0c" Mar 12 21:26:27.758020 master-0 kubenswrapper[31456]: I0312 21:26:27.758020 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:26:27.758369 master-0 kubenswrapper[31456]: I0312 21:26:27.758068 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qs8v4" Mar 12 21:26:27.758369 master-0 kubenswrapper[31456]: I0312 21:26:27.758196 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:26:28.165014 master-0 kubenswrapper[31456]: I0312 21:26:28.164925 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5cfd77ccd9-2nhnf"] Mar 12 21:26:28.165547 master-0 kubenswrapper[31456]: E0312 21:26:28.165529 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdf62a30-2c59-4043-99d7-b51fe604f823" containerName="neutron-db-sync" Mar 12 21:26:28.165584 master-0 kubenswrapper[31456]: I0312 21:26:28.165548 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdf62a30-2c59-4043-99d7-b51fe604f823" containerName="neutron-db-sync" Mar 12 21:26:28.165801 master-0 kubenswrapper[31456]: I0312 21:26:28.165773 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdf62a30-2c59-4043-99d7-b51fe604f823" containerName="neutron-db-sync" Mar 12 21:26:28.166950 master-0 kubenswrapper[31456]: I0312 21:26:28.166921 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cfd77ccd9-2nhnf" Mar 12 21:26:28.232372 master-0 kubenswrapper[31456]: I0312 21:26:28.232132 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cfd77ccd9-2nhnf"] Mar 12 21:26:28.251893 master-0 kubenswrapper[31456]: I0312 21:26:28.251735 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7b7fc99fd8-pc4wq"] Mar 12 21:26:28.254052 master-0 kubenswrapper[31456]: I0312 21:26:28.253765 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b7fc99fd8-pc4wq" Mar 12 21:26:28.256596 master-0 kubenswrapper[31456]: I0312 21:26:28.256517 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 12 21:26:28.257484 master-0 kubenswrapper[31456]: I0312 21:26:28.257052 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 12 21:26:28.257484 master-0 kubenswrapper[31456]: I0312 21:26:28.257008 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Mar 12 21:26:28.262384 master-0 kubenswrapper[31456]: I0312 21:26:28.262347 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b7fc99fd8-pc4wq"] Mar 12 21:26:28.284029 master-0 kubenswrapper[31456]: I0312 21:26:28.283973 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9-dns-swift-storage-0\") pod \"dnsmasq-dns-5cfd77ccd9-2nhnf\" (UID: \"f1aad20c-98be-4f2f-bf8c-b0433efb8ab9\") " pod="openstack/dnsmasq-dns-5cfd77ccd9-2nhnf" Mar 12 21:26:28.284029 master-0 kubenswrapper[31456]: I0312 21:26:28.284029 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0-httpd-config\") pod \"neutron-7b7fc99fd8-pc4wq\" (UID: \"2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0\") " pod="openstack/neutron-7b7fc99fd8-pc4wq" Mar 12 21:26:28.284300 master-0 kubenswrapper[31456]: I0312 21:26:28.284057 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9-ovsdbserver-sb\") pod \"dnsmasq-dns-5cfd77ccd9-2nhnf\" (UID: \"f1aad20c-98be-4f2f-bf8c-b0433efb8ab9\") " pod="openstack/dnsmasq-dns-5cfd77ccd9-2nhnf" Mar 12 21:26:28.284300 master-0 kubenswrapper[31456]: I0312 21:26:28.284172 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0-combined-ca-bundle\") pod \"neutron-7b7fc99fd8-pc4wq\" (UID: \"2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0\") " pod="openstack/neutron-7b7fc99fd8-pc4wq" Mar 12 21:26:28.284385 master-0 kubenswrapper[31456]: I0312 21:26:28.284321 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9-dns-svc\") pod \"dnsmasq-dns-5cfd77ccd9-2nhnf\" (UID: \"f1aad20c-98be-4f2f-bf8c-b0433efb8ab9\") " pod="openstack/dnsmasq-dns-5cfd77ccd9-2nhnf" Mar 12 21:26:28.284419 master-0 kubenswrapper[31456]: I0312 21:26:28.284410 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9-config\") pod \"dnsmasq-dns-5cfd77ccd9-2nhnf\" (UID: \"f1aad20c-98be-4f2f-bf8c-b0433efb8ab9\") " pod="openstack/dnsmasq-dns-5cfd77ccd9-2nhnf" Mar 12 21:26:28.284628 master-0 kubenswrapper[31456]: I0312 21:26:28.284580 31456 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbm25\" (UniqueName: \"kubernetes.io/projected/2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0-kube-api-access-jbm25\") pod \"neutron-7b7fc99fd8-pc4wq\" (UID: \"2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0\") " pod="openstack/neutron-7b7fc99fd8-pc4wq" Mar 12 21:26:28.284697 master-0 kubenswrapper[31456]: I0312 21:26:28.284635 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9-ovsdbserver-nb\") pod \"dnsmasq-dns-5cfd77ccd9-2nhnf\" (UID: \"f1aad20c-98be-4f2f-bf8c-b0433efb8ab9\") " pod="openstack/dnsmasq-dns-5cfd77ccd9-2nhnf" Mar 12 21:26:28.284738 master-0 kubenswrapper[31456]: I0312 21:26:28.284692 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0-ovndb-tls-certs\") pod \"neutron-7b7fc99fd8-pc4wq\" (UID: \"2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0\") " pod="openstack/neutron-7b7fc99fd8-pc4wq" Mar 12 21:26:28.284772 master-0 kubenswrapper[31456]: I0312 21:26:28.284758 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0-config\") pod \"neutron-7b7fc99fd8-pc4wq\" (UID: \"2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0\") " pod="openstack/neutron-7b7fc99fd8-pc4wq" Mar 12 21:26:28.284802 master-0 kubenswrapper[31456]: I0312 21:26:28.284791 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhdf7\" (UniqueName: \"kubernetes.io/projected/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9-kube-api-access-hhdf7\") pod \"dnsmasq-dns-5cfd77ccd9-2nhnf\" (UID: \"f1aad20c-98be-4f2f-bf8c-b0433efb8ab9\") " pod="openstack/dnsmasq-dns-5cfd77ccd9-2nhnf" Mar 12 
21:26:28.386864 master-0 kubenswrapper[31456]: I0312 21:26:28.386599 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbm25\" (UniqueName: \"kubernetes.io/projected/2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0-kube-api-access-jbm25\") pod \"neutron-7b7fc99fd8-pc4wq\" (UID: \"2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0\") " pod="openstack/neutron-7b7fc99fd8-pc4wq" Mar 12 21:26:28.386864 master-0 kubenswrapper[31456]: I0312 21:26:28.386703 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9-ovsdbserver-nb\") pod \"dnsmasq-dns-5cfd77ccd9-2nhnf\" (UID: \"f1aad20c-98be-4f2f-bf8c-b0433efb8ab9\") " pod="openstack/dnsmasq-dns-5cfd77ccd9-2nhnf" Mar 12 21:26:28.386864 master-0 kubenswrapper[31456]: I0312 21:26:28.386744 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0-ovndb-tls-certs\") pod \"neutron-7b7fc99fd8-pc4wq\" (UID: \"2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0\") " pod="openstack/neutron-7b7fc99fd8-pc4wq" Mar 12 21:26:28.387135 master-0 kubenswrapper[31456]: I0312 21:26:28.387068 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0-config\") pod \"neutron-7b7fc99fd8-pc4wq\" (UID: \"2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0\") " pod="openstack/neutron-7b7fc99fd8-pc4wq" Mar 12 21:26:28.387214 master-0 kubenswrapper[31456]: I0312 21:26:28.387171 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhdf7\" (UniqueName: \"kubernetes.io/projected/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9-kube-api-access-hhdf7\") pod \"dnsmasq-dns-5cfd77ccd9-2nhnf\" (UID: \"f1aad20c-98be-4f2f-bf8c-b0433efb8ab9\") " pod="openstack/dnsmasq-dns-5cfd77ccd9-2nhnf" Mar 12 
21:26:28.387294 master-0 kubenswrapper[31456]: I0312 21:26:28.387255 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9-dns-swift-storage-0\") pod \"dnsmasq-dns-5cfd77ccd9-2nhnf\" (UID: \"f1aad20c-98be-4f2f-bf8c-b0433efb8ab9\") " pod="openstack/dnsmasq-dns-5cfd77ccd9-2nhnf" Mar 12 21:26:28.387344 master-0 kubenswrapper[31456]: I0312 21:26:28.387294 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0-httpd-config\") pod \"neutron-7b7fc99fd8-pc4wq\" (UID: \"2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0\") " pod="openstack/neutron-7b7fc99fd8-pc4wq" Mar 12 21:26:28.387344 master-0 kubenswrapper[31456]: I0312 21:26:28.387335 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9-ovsdbserver-sb\") pod \"dnsmasq-dns-5cfd77ccd9-2nhnf\" (UID: \"f1aad20c-98be-4f2f-bf8c-b0433efb8ab9\") " pod="openstack/dnsmasq-dns-5cfd77ccd9-2nhnf" Mar 12 21:26:28.387433 master-0 kubenswrapper[31456]: I0312 21:26:28.387384 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0-combined-ca-bundle\") pod \"neutron-7b7fc99fd8-pc4wq\" (UID: \"2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0\") " pod="openstack/neutron-7b7fc99fd8-pc4wq" Mar 12 21:26:28.387497 master-0 kubenswrapper[31456]: I0312 21:26:28.387454 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9-dns-svc\") pod \"dnsmasq-dns-5cfd77ccd9-2nhnf\" (UID: \"f1aad20c-98be-4f2f-bf8c-b0433efb8ab9\") " pod="openstack/dnsmasq-dns-5cfd77ccd9-2nhnf" Mar 12 
21:26:28.387558 master-0 kubenswrapper[31456]: I0312 21:26:28.387512 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9-config\") pod \"dnsmasq-dns-5cfd77ccd9-2nhnf\" (UID: \"f1aad20c-98be-4f2f-bf8c-b0433efb8ab9\") " pod="openstack/dnsmasq-dns-5cfd77ccd9-2nhnf" Mar 12 21:26:28.387734 master-0 kubenswrapper[31456]: I0312 21:26:28.387703 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9-ovsdbserver-nb\") pod \"dnsmasq-dns-5cfd77ccd9-2nhnf\" (UID: \"f1aad20c-98be-4f2f-bf8c-b0433efb8ab9\") " pod="openstack/dnsmasq-dns-5cfd77ccd9-2nhnf" Mar 12 21:26:28.388213 master-0 kubenswrapper[31456]: I0312 21:26:28.388183 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9-dns-swift-storage-0\") pod \"dnsmasq-dns-5cfd77ccd9-2nhnf\" (UID: \"f1aad20c-98be-4f2f-bf8c-b0433efb8ab9\") " pod="openstack/dnsmasq-dns-5cfd77ccd9-2nhnf" Mar 12 21:26:28.388641 master-0 kubenswrapper[31456]: I0312 21:26:28.388614 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9-ovsdbserver-sb\") pod \"dnsmasq-dns-5cfd77ccd9-2nhnf\" (UID: \"f1aad20c-98be-4f2f-bf8c-b0433efb8ab9\") " pod="openstack/dnsmasq-dns-5cfd77ccd9-2nhnf" Mar 12 21:26:28.388641 master-0 kubenswrapper[31456]: I0312 21:26:28.388630 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9-config\") pod \"dnsmasq-dns-5cfd77ccd9-2nhnf\" (UID: \"f1aad20c-98be-4f2f-bf8c-b0433efb8ab9\") " pod="openstack/dnsmasq-dns-5cfd77ccd9-2nhnf" Mar 12 21:26:28.389729 master-0 
kubenswrapper[31456]: I0312 21:26:28.389696 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9-dns-svc\") pod \"dnsmasq-dns-5cfd77ccd9-2nhnf\" (UID: \"f1aad20c-98be-4f2f-bf8c-b0433efb8ab9\") " pod="openstack/dnsmasq-dns-5cfd77ccd9-2nhnf"
Mar 12 21:26:28.392359 master-0 kubenswrapper[31456]: I0312 21:26:28.391795 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0-combined-ca-bundle\") pod \"neutron-7b7fc99fd8-pc4wq\" (UID: \"2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0\") " pod="openstack/neutron-7b7fc99fd8-pc4wq"
Mar 12 21:26:28.392359 master-0 kubenswrapper[31456]: I0312 21:26:28.392043 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0-config\") pod \"neutron-7b7fc99fd8-pc4wq\" (UID: \"2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0\") " pod="openstack/neutron-7b7fc99fd8-pc4wq"
Mar 12 21:26:28.392670 master-0 kubenswrapper[31456]: I0312 21:26:28.392649 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0-httpd-config\") pod \"neutron-7b7fc99fd8-pc4wq\" (UID: \"2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0\") " pod="openstack/neutron-7b7fc99fd8-pc4wq"
Mar 12 21:26:28.403627 master-0 kubenswrapper[31456]: I0312 21:26:28.403509 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbm25\" (UniqueName: \"kubernetes.io/projected/2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0-kube-api-access-jbm25\") pod \"neutron-7b7fc99fd8-pc4wq\" (UID: \"2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0\") " pod="openstack/neutron-7b7fc99fd8-pc4wq"
Mar 12 21:26:28.405310 master-0 kubenswrapper[31456]: I0312 21:26:28.405267 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0-ovndb-tls-certs\") pod \"neutron-7b7fc99fd8-pc4wq\" (UID: \"2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0\") " pod="openstack/neutron-7b7fc99fd8-pc4wq"
Mar 12 21:26:28.406335 master-0 kubenswrapper[31456]: I0312 21:26:28.406304 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhdf7\" (UniqueName: \"kubernetes.io/projected/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9-kube-api-access-hhdf7\") pod \"dnsmasq-dns-5cfd77ccd9-2nhnf\" (UID: \"f1aad20c-98be-4f2f-bf8c-b0433efb8ab9\") " pod="openstack/dnsmasq-dns-5cfd77ccd9-2nhnf"
Mar 12 21:26:28.522384 master-0 kubenswrapper[31456]: I0312 21:26:28.522310 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cfd77ccd9-2nhnf"
Mar 12 21:26:28.578871 master-0 kubenswrapper[31456]: I0312 21:26:28.576412 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b7fc99fd8-pc4wq"
Mar 12 21:26:28.619761 master-0 kubenswrapper[31456]: I0312 21:26:28.610114 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-30e4b-default-external-api-0"
Mar 12 21:26:28.619761 master-0 kubenswrapper[31456]: I0312 21:26:28.610174 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-30e4b-default-external-api-0"
Mar 12 21:26:28.662378 master-0 kubenswrapper[31456]: I0312 21:26:28.650060 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-30e4b-default-external-api-0"
Mar 12 21:26:28.738851 master-0 kubenswrapper[31456]: I0312 21:26:28.738508 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-30e4b-default-external-api-0"
Mar 12 21:26:28.780839 master-0 kubenswrapper[31456]: I0312 21:26:28.780783 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-30e4b-default-external-api-0"
Mar 12 21:26:28.781071 master-0 kubenswrapper[31456]: I0312 21:26:28.780857 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-30e4b-default-external-api-0"
Mar 12 21:26:29.151941 master-0 kubenswrapper[31456]: I0312 21:26:29.150212 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cfd77ccd9-2nhnf"]
Mar 12 21:26:29.360862 master-0 kubenswrapper[31456]: I0312 21:26:29.360048 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b7fc99fd8-pc4wq"]
Mar 12 21:26:29.827935 master-0 kubenswrapper[31456]: I0312 21:26:29.827869 31456 generic.go:334] "Generic (PLEG): container finished" podID="f1aad20c-98be-4f2f-bf8c-b0433efb8ab9" containerID="8ef283b558a8fa0b46e6854df2a33eccfd9b960c265628ae9e4e7845362a6a50" exitCode=0
Mar 12 21:26:29.828293 master-0 kubenswrapper[31456]: I0312 21:26:29.827961 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cfd77ccd9-2nhnf" event={"ID":"f1aad20c-98be-4f2f-bf8c-b0433efb8ab9","Type":"ContainerDied","Data":"8ef283b558a8fa0b46e6854df2a33eccfd9b960c265628ae9e4e7845362a6a50"}
Mar 12 21:26:29.828293 master-0 kubenswrapper[31456]: I0312 21:26:29.827992 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cfd77ccd9-2nhnf" event={"ID":"f1aad20c-98be-4f2f-bf8c-b0433efb8ab9","Type":"ContainerStarted","Data":"584977f008e8a6587ca5b2e9468031e7e390ffaac6e656669fe84bcc490537fc"}
Mar 12 21:26:29.830035 master-0 kubenswrapper[31456]: I0312 21:26:29.829670 31456 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 21:26:29.830035 master-0 kubenswrapper[31456]: I0312 21:26:29.829713 31456 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 21:26:29.830214 master-0 kubenswrapper[31456]: I0312 21:26:29.830176 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b7fc99fd8-pc4wq" event={"ID":"2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0","Type":"ContainerStarted","Data":"36ab01fed5375e4747f7129c6733f87baaa9d5c953918b12de5a57a849675155"}
Mar 12 21:26:30.841692 master-0 kubenswrapper[31456]: I0312 21:26:30.841630 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cfd77ccd9-2nhnf" event={"ID":"f1aad20c-98be-4f2f-bf8c-b0433efb8ab9","Type":"ContainerStarted","Data":"369b9c611de917c4d5b84766274b2790f9276d0fe6cf2b893b2980d4e79c0e80"}
Mar 12 21:26:30.842938 master-0 kubenswrapper[31456]: I0312 21:26:30.842908 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5cfd77ccd9-2nhnf"
Mar 12 21:26:30.845255 master-0 kubenswrapper[31456]: I0312 21:26:30.845220 31456 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 21:26:30.845255 master-0 kubenswrapper[31456]: I0312 21:26:30.845241 31456 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 21:26:30.846368 master-0 kubenswrapper[31456]: I0312 21:26:30.846330 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b7fc99fd8-pc4wq" event={"ID":"2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0","Type":"ContainerStarted","Data":"f4238bf455a2a08c5c82da0b82cba6320522be3626dfeecaf288204b852636a7"}
Mar 12 21:26:30.846368 master-0 kubenswrapper[31456]: I0312 21:26:30.846364 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7b7fc99fd8-pc4wq"
Mar 12 21:26:30.846466 master-0 kubenswrapper[31456]: I0312 21:26:30.846375 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b7fc99fd8-pc4wq" event={"ID":"2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0","Type":"ContainerStarted","Data":"f4a0172384f033272e2a0a23a455d0f73b3a58630e7b76c5147f00a0b1cb6fe8"}
Mar 12 21:26:30.873485 master-0 kubenswrapper[31456]: I0312 21:26:30.873425 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-794f5bbfcf-tg98t"]
Mar 12 21:26:30.875501 master-0 kubenswrapper[31456]: I0312 21:26:30.875464 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-794f5bbfcf-tg98t"
Mar 12 21:26:30.878791 master-0 kubenswrapper[31456]: I0312 21:26:30.878724 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Mar 12 21:26:30.879149 master-0 kubenswrapper[31456]: I0312 21:26:30.878817 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Mar 12 21:26:30.925277 master-0 kubenswrapper[31456]: I0312 21:26:30.906564 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-794f5bbfcf-tg98t"]
Mar 12 21:26:30.925277 master-0 kubenswrapper[31456]: I0312 21:26:30.910599 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5cfd77ccd9-2nhnf" podStartSLOduration=2.910575446 podStartE2EDuration="2.910575446s" podCreationTimestamp="2026-03-12 21:26:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:26:30.880123878 +0000 UTC m=+1051.954729206" watchObservedRunningTime="2026-03-12 21:26:30.910575446 +0000 UTC m=+1051.985180784"
Mar 12 21:26:30.962125 master-0 kubenswrapper[31456]: I0312 21:26:30.962067 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86a41c6d-2c15-4d4c-b6d6-dc64e41f89de-combined-ca-bundle\") pod \"neutron-794f5bbfcf-tg98t\" (UID: \"86a41c6d-2c15-4d4c-b6d6-dc64e41f89de\") " pod="openstack/neutron-794f5bbfcf-tg98t"
Mar 12 21:26:30.962442 master-0 kubenswrapper[31456]: I0312 21:26:30.962219 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/86a41c6d-2c15-4d4c-b6d6-dc64e41f89de-httpd-config\") pod \"neutron-794f5bbfcf-tg98t\" (UID: \"86a41c6d-2c15-4d4c-b6d6-dc64e41f89de\") " pod="openstack/neutron-794f5bbfcf-tg98t"
Mar 12 21:26:30.962442 master-0 kubenswrapper[31456]: I0312 21:26:30.962252 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86a41c6d-2c15-4d4c-b6d6-dc64e41f89de-public-tls-certs\") pod \"neutron-794f5bbfcf-tg98t\" (UID: \"86a41c6d-2c15-4d4c-b6d6-dc64e41f89de\") " pod="openstack/neutron-794f5bbfcf-tg98t"
Mar 12 21:26:30.962442 master-0 kubenswrapper[31456]: I0312 21:26:30.962270 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86a41c6d-2c15-4d4c-b6d6-dc64e41f89de-internal-tls-certs\") pod \"neutron-794f5bbfcf-tg98t\" (UID: \"86a41c6d-2c15-4d4c-b6d6-dc64e41f89de\") " pod="openstack/neutron-794f5bbfcf-tg98t"
Mar 12 21:26:30.962442 master-0 kubenswrapper[31456]: I0312 21:26:30.962326 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/86a41c6d-2c15-4d4c-b6d6-dc64e41f89de-ovndb-tls-certs\") pod \"neutron-794f5bbfcf-tg98t\" (UID: \"86a41c6d-2c15-4d4c-b6d6-dc64e41f89de\") " pod="openstack/neutron-794f5bbfcf-tg98t"
Mar 12 21:26:30.962442 master-0 kubenswrapper[31456]: I0312 21:26:30.962404 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drjgp\" (UniqueName: \"kubernetes.io/projected/86a41c6d-2c15-4d4c-b6d6-dc64e41f89de-kube-api-access-drjgp\") pod \"neutron-794f5bbfcf-tg98t\" (UID: \"86a41c6d-2c15-4d4c-b6d6-dc64e41f89de\") " pod="openstack/neutron-794f5bbfcf-tg98t"
Mar 12 21:26:30.962442 master-0 kubenswrapper[31456]: I0312 21:26:30.962441 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/86a41c6d-2c15-4d4c-b6d6-dc64e41f89de-config\") pod \"neutron-794f5bbfcf-tg98t\" (UID: \"86a41c6d-2c15-4d4c-b6d6-dc64e41f89de\") " pod="openstack/neutron-794f5bbfcf-tg98t"
Mar 12 21:26:30.964022 master-0 kubenswrapper[31456]: I0312 21:26:30.963943 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7b7fc99fd8-pc4wq" podStartSLOduration=2.963902646 podStartE2EDuration="2.963902646s" podCreationTimestamp="2026-03-12 21:26:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:26:30.952009618 +0000 UTC m=+1052.026614966" watchObservedRunningTime="2026-03-12 21:26:30.963902646 +0000 UTC m=+1052.038507984"
Mar 12 21:26:31.064754 master-0 kubenswrapper[31456]: I0312 21:26:31.064680 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86a41c6d-2c15-4d4c-b6d6-dc64e41f89de-combined-ca-bundle\") pod \"neutron-794f5bbfcf-tg98t\" (UID: \"86a41c6d-2c15-4d4c-b6d6-dc64e41f89de\") " pod="openstack/neutron-794f5bbfcf-tg98t"
Mar 12 21:26:31.065121 master-0 kubenswrapper[31456]: I0312 21:26:31.065067 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/86a41c6d-2c15-4d4c-b6d6-dc64e41f89de-httpd-config\") pod \"neutron-794f5bbfcf-tg98t\" (UID: \"86a41c6d-2c15-4d4c-b6d6-dc64e41f89de\") " pod="openstack/neutron-794f5bbfcf-tg98t"
Mar 12 21:26:31.065171 master-0 kubenswrapper[31456]: I0312 21:26:31.065120 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86a41c6d-2c15-4d4c-b6d6-dc64e41f89de-public-tls-certs\") pod \"neutron-794f5bbfcf-tg98t\" (UID: \"86a41c6d-2c15-4d4c-b6d6-dc64e41f89de\") " pod="openstack/neutron-794f5bbfcf-tg98t"
Mar 12 21:26:31.065368 master-0 kubenswrapper[31456]: I0312 21:26:31.065305 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86a41c6d-2c15-4d4c-b6d6-dc64e41f89de-internal-tls-certs\") pod \"neutron-794f5bbfcf-tg98t\" (UID: \"86a41c6d-2c15-4d4c-b6d6-dc64e41f89de\") " pod="openstack/neutron-794f5bbfcf-tg98t"
Mar 12 21:26:31.065503 master-0 kubenswrapper[31456]: I0312 21:26:31.065478 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/86a41c6d-2c15-4d4c-b6d6-dc64e41f89de-ovndb-tls-certs\") pod \"neutron-794f5bbfcf-tg98t\" (UID: \"86a41c6d-2c15-4d4c-b6d6-dc64e41f89de\") " pod="openstack/neutron-794f5bbfcf-tg98t"
Mar 12 21:26:31.065649 master-0 kubenswrapper[31456]: I0312 21:26:31.065628 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drjgp\" (UniqueName: \"kubernetes.io/projected/86a41c6d-2c15-4d4c-b6d6-dc64e41f89de-kube-api-access-drjgp\") pod \"neutron-794f5bbfcf-tg98t\" (UID: \"86a41c6d-2c15-4d4c-b6d6-dc64e41f89de\") " pod="openstack/neutron-794f5bbfcf-tg98t"
Mar 12 21:26:31.065695 master-0 kubenswrapper[31456]: I0312 21:26:31.065670 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/86a41c6d-2c15-4d4c-b6d6-dc64e41f89de-config\") pod \"neutron-794f5bbfcf-tg98t\" (UID: \"86a41c6d-2c15-4d4c-b6d6-dc64e41f89de\") " pod="openstack/neutron-794f5bbfcf-tg98t"
Mar 12 21:26:31.071087 master-0 kubenswrapper[31456]: I0312 21:26:31.071039 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/86a41c6d-2c15-4d4c-b6d6-dc64e41f89de-ovndb-tls-certs\") pod \"neutron-794f5bbfcf-tg98t\" (UID: \"86a41c6d-2c15-4d4c-b6d6-dc64e41f89de\") " pod="openstack/neutron-794f5bbfcf-tg98t"
Mar 12 21:26:31.071406 master-0 kubenswrapper[31456]: I0312 21:26:31.071369 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86a41c6d-2c15-4d4c-b6d6-dc64e41f89de-public-tls-certs\") pod \"neutron-794f5bbfcf-tg98t\" (UID: \"86a41c6d-2c15-4d4c-b6d6-dc64e41f89de\") " pod="openstack/neutron-794f5bbfcf-tg98t"
Mar 12 21:26:31.071453 master-0 kubenswrapper[31456]: I0312 21:26:31.071418 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86a41c6d-2c15-4d4c-b6d6-dc64e41f89de-combined-ca-bundle\") pod \"neutron-794f5bbfcf-tg98t\" (UID: \"86a41c6d-2c15-4d4c-b6d6-dc64e41f89de\") " pod="openstack/neutron-794f5bbfcf-tg98t"
Mar 12 21:26:31.071835 master-0 kubenswrapper[31456]: I0312 21:26:31.071780 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/86a41c6d-2c15-4d4c-b6d6-dc64e41f89de-config\") pod \"neutron-794f5bbfcf-tg98t\" (UID: \"86a41c6d-2c15-4d4c-b6d6-dc64e41f89de\") " pod="openstack/neutron-794f5bbfcf-tg98t"
Mar 12 21:26:31.072238 master-0 kubenswrapper[31456]: I0312 21:26:31.072190 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86a41c6d-2c15-4d4c-b6d6-dc64e41f89de-internal-tls-certs\") pod \"neutron-794f5bbfcf-tg98t\" (UID: \"86a41c6d-2c15-4d4c-b6d6-dc64e41f89de\") " pod="openstack/neutron-794f5bbfcf-tg98t"
Mar 12 21:26:31.073081 master-0 kubenswrapper[31456]: I0312 21:26:31.073042 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/86a41c6d-2c15-4d4c-b6d6-dc64e41f89de-httpd-config\") pod \"neutron-794f5bbfcf-tg98t\" (UID: \"86a41c6d-2c15-4d4c-b6d6-dc64e41f89de\") " pod="openstack/neutron-794f5bbfcf-tg98t"
Mar 12 21:26:31.096991 master-0 kubenswrapper[31456]: I0312 21:26:31.096745 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drjgp\" (UniqueName: \"kubernetes.io/projected/86a41c6d-2c15-4d4c-b6d6-dc64e41f89de-kube-api-access-drjgp\") pod \"neutron-794f5bbfcf-tg98t\" (UID: \"86a41c6d-2c15-4d4c-b6d6-dc64e41f89de\") " pod="openstack/neutron-794f5bbfcf-tg98t"
Mar 12 21:26:31.199693 master-0 kubenswrapper[31456]: I0312 21:26:31.199612 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-794f5bbfcf-tg98t"
Mar 12 21:26:31.578830 master-0 kubenswrapper[31456]: I0312 21:26:31.575707 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-30e4b-default-external-api-0"
Mar 12 21:26:31.585250 master-0 kubenswrapper[31456]: I0312 21:26:31.585207 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-30e4b-default-external-api-0"
Mar 12 21:26:31.588910 master-0 kubenswrapper[31456]: I0312 21:26:31.588694 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:26:31.588910 master-0 kubenswrapper[31456]: I0312 21:26:31.588848 31456 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 21:26:31.645325 master-0 kubenswrapper[31456]: I0312 21:26:31.645277 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:26:31.889645 master-0 kubenswrapper[31456]: I0312 21:26:31.889194 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-794f5bbfcf-tg98t"]
Mar 12 21:26:32.888962 master-0 kubenswrapper[31456]: I0312 21:26:32.888915 31456 generic.go:334] "Generic (PLEG): container finished" podID="9b9d5522-06bf-4e0e-a3fc-ab594cb040a1" containerID="ced737fffae6cb52f2e71516a383f3bc81ed8d30f3c2cfa34fc82780dde4441f" exitCode=0
Mar 12 21:26:32.889374 master-0 kubenswrapper[31456]: I0312 21:26:32.889010 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-db-sync-v8z2w" event={"ID":"9b9d5522-06bf-4e0e-a3fc-ab594cb040a1","Type":"ContainerDied","Data":"ced737fffae6cb52f2e71516a383f3bc81ed8d30f3c2cfa34fc82780dde4441f"}
Mar 12 21:26:32.893458 master-0 kubenswrapper[31456]: I0312 21:26:32.893367 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-794f5bbfcf-tg98t" event={"ID":"86a41c6d-2c15-4d4c-b6d6-dc64e41f89de","Type":"ContainerStarted","Data":"ed70d300589487df25d0c7f4b2477dcaa28272478565db0a567ad8bceb8191f1"}
Mar 12 21:26:32.893458 master-0 kubenswrapper[31456]: I0312 21:26:32.893458 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-794f5bbfcf-tg98t" event={"ID":"86a41c6d-2c15-4d4c-b6d6-dc64e41f89de","Type":"ContainerStarted","Data":"9bec065d4763302e9a7a6d7c3f1b014d2937f5e25c81af62d0e3d1be9413af19"}
Mar 12 21:26:32.893939 master-0 kubenswrapper[31456]: I0312 21:26:32.893472 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-794f5bbfcf-tg98t" event={"ID":"86a41c6d-2c15-4d4c-b6d6-dc64e41f89de","Type":"ContainerStarted","Data":"45204c6cae4256b7ae897f2872da5ef425443f7acd706c666e0c25055ea42e93"}
Mar 12 21:26:32.962875 master-0 kubenswrapper[31456]: I0312 21:26:32.962774 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-794f5bbfcf-tg98t" podStartSLOduration=2.962749043 podStartE2EDuration="2.962749043s" podCreationTimestamp="2026-03-12 21:26:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:26:32.941740255 +0000 UTC m=+1054.016345593" watchObservedRunningTime="2026-03-12 21:26:32.962749043 +0000 UTC m=+1054.037354371"
Mar 12 21:26:33.911711 master-0 kubenswrapper[31456]: I0312 21:26:33.910030 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-794f5bbfcf-tg98t"
Mar 12 21:26:34.373610 master-0 kubenswrapper[31456]: I0312 21:26:34.373556 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7fa7f-db-sync-v8z2w"
Mar 12 21:26:34.471758 master-0 kubenswrapper[31456]: I0312 21:26:34.471669 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b9d5522-06bf-4e0e-a3fc-ab594cb040a1-scripts\") pod \"9b9d5522-06bf-4e0e-a3fc-ab594cb040a1\" (UID: \"9b9d5522-06bf-4e0e-a3fc-ab594cb040a1\") "
Mar 12 21:26:34.472192 master-0 kubenswrapper[31456]: I0312 21:26:34.471777 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b9d5522-06bf-4e0e-a3fc-ab594cb040a1-config-data\") pod \"9b9d5522-06bf-4e0e-a3fc-ab594cb040a1\" (UID: \"9b9d5522-06bf-4e0e-a3fc-ab594cb040a1\") "
Mar 12 21:26:34.472192 master-0 kubenswrapper[31456]: I0312 21:26:34.471882 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b9d5522-06bf-4e0e-a3fc-ab594cb040a1-db-sync-config-data\") pod \"9b9d5522-06bf-4e0e-a3fc-ab594cb040a1\" (UID: \"9b9d5522-06bf-4e0e-a3fc-ab594cb040a1\") "
Mar 12 21:26:34.472192 master-0 kubenswrapper[31456]: I0312 21:26:34.472020 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b9d5522-06bf-4e0e-a3fc-ab594cb040a1-combined-ca-bundle\") pod \"9b9d5522-06bf-4e0e-a3fc-ab594cb040a1\" (UID: \"9b9d5522-06bf-4e0e-a3fc-ab594cb040a1\") "
Mar 12 21:26:34.472192 master-0 kubenswrapper[31456]: I0312 21:26:34.472114 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8psh\" (UniqueName: \"kubernetes.io/projected/9b9d5522-06bf-4e0e-a3fc-ab594cb040a1-kube-api-access-l8psh\") pod \"9b9d5522-06bf-4e0e-a3fc-ab594cb040a1\" (UID: \"9b9d5522-06bf-4e0e-a3fc-ab594cb040a1\") "
Mar 12 21:26:34.472350 master-0 kubenswrapper[31456]: I0312 21:26:34.472207 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b9d5522-06bf-4e0e-a3fc-ab594cb040a1-etc-machine-id\") pod \"9b9d5522-06bf-4e0e-a3fc-ab594cb040a1\" (UID: \"9b9d5522-06bf-4e0e-a3fc-ab594cb040a1\") "
Mar 12 21:26:34.473027 master-0 kubenswrapper[31456]: I0312 21:26:34.472984 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b9d5522-06bf-4e0e-a3fc-ab594cb040a1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9b9d5522-06bf-4e0e-a3fc-ab594cb040a1" (UID: "9b9d5522-06bf-4e0e-a3fc-ab594cb040a1"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 21:26:34.479187 master-0 kubenswrapper[31456]: I0312 21:26:34.478911 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b9d5522-06bf-4e0e-a3fc-ab594cb040a1-scripts" (OuterVolumeSpecName: "scripts") pod "9b9d5522-06bf-4e0e-a3fc-ab594cb040a1" (UID: "9b9d5522-06bf-4e0e-a3fc-ab594cb040a1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:26:34.481938 master-0 kubenswrapper[31456]: I0312 21:26:34.480791 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b9d5522-06bf-4e0e-a3fc-ab594cb040a1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9b9d5522-06bf-4e0e-a3fc-ab594cb040a1" (UID: "9b9d5522-06bf-4e0e-a3fc-ab594cb040a1"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:26:34.481938 master-0 kubenswrapper[31456]: I0312 21:26:34.480834 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b9d5522-06bf-4e0e-a3fc-ab594cb040a1-kube-api-access-l8psh" (OuterVolumeSpecName: "kube-api-access-l8psh") pod "9b9d5522-06bf-4e0e-a3fc-ab594cb040a1" (UID: "9b9d5522-06bf-4e0e-a3fc-ab594cb040a1"). InnerVolumeSpecName "kube-api-access-l8psh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 21:26:34.521927 master-0 kubenswrapper[31456]: I0312 21:26:34.521846 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b9d5522-06bf-4e0e-a3fc-ab594cb040a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b9d5522-06bf-4e0e-a3fc-ab594cb040a1" (UID: "9b9d5522-06bf-4e0e-a3fc-ab594cb040a1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:26:34.529106 master-0 kubenswrapper[31456]: I0312 21:26:34.529057 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b9d5522-06bf-4e0e-a3fc-ab594cb040a1-config-data" (OuterVolumeSpecName: "config-data") pod "9b9d5522-06bf-4e0e-a3fc-ab594cb040a1" (UID: "9b9d5522-06bf-4e0e-a3fc-ab594cb040a1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:26:34.577668 master-0 kubenswrapper[31456]: I0312 21:26:34.577608 31456 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b9d5522-06bf-4e0e-a3fc-ab594cb040a1-config-data\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:34.577770 master-0 kubenswrapper[31456]: I0312 21:26:34.577674 31456 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b9d5522-06bf-4e0e-a3fc-ab594cb040a1-db-sync-config-data\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:34.577770 master-0 kubenswrapper[31456]: I0312 21:26:34.577688 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b9d5522-06bf-4e0e-a3fc-ab594cb040a1-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:34.577770 master-0 kubenswrapper[31456]: I0312 21:26:34.577699 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8psh\" (UniqueName: \"kubernetes.io/projected/9b9d5522-06bf-4e0e-a3fc-ab594cb040a1-kube-api-access-l8psh\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:34.577770 master-0 kubenswrapper[31456]: I0312 21:26:34.577708 31456 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b9d5522-06bf-4e0e-a3fc-ab594cb040a1-etc-machine-id\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:34.577770 master-0 kubenswrapper[31456]: I0312 21:26:34.577716 31456 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b9d5522-06bf-4e0e-a3fc-ab594cb040a1-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:34.926906 master-0 kubenswrapper[31456]: I0312 21:26:34.926829 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-db-sync-v8z2w" event={"ID":"9b9d5522-06bf-4e0e-a3fc-ab594cb040a1","Type":"ContainerDied","Data":"aef1a7e7a3c93adc9d9a5e903bd81dd04b52053d8c42e9f0ead8d496691cfd68"}
Mar 12 21:26:34.926906 master-0 kubenswrapper[31456]: I0312 21:26:34.926898 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aef1a7e7a3c93adc9d9a5e903bd81dd04b52053d8c42e9f0ead8d496691cfd68"
Mar 12 21:26:34.927915 master-0 kubenswrapper[31456]: I0312 21:26:34.926868 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7fa7f-db-sync-v8z2w"
Mar 12 21:26:35.433767 master-0 kubenswrapper[31456]: I0312 21:26:35.433694 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-7fa7f-scheduler-0"]
Mar 12 21:26:35.434455 master-0 kubenswrapper[31456]: E0312 21:26:35.434379 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b9d5522-06bf-4e0e-a3fc-ab594cb040a1" containerName="cinder-7fa7f-db-sync"
Mar 12 21:26:35.434455 master-0 kubenswrapper[31456]: I0312 21:26:35.434404 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b9d5522-06bf-4e0e-a3fc-ab594cb040a1" containerName="cinder-7fa7f-db-sync"
Mar 12 21:26:35.434745 master-0 kubenswrapper[31456]: I0312 21:26:35.434715 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b9d5522-06bf-4e0e-a3fc-ab594cb040a1" containerName="cinder-7fa7f-db-sync"
Mar 12 21:26:35.436748 master-0 kubenswrapper[31456]: I0312 21:26:35.436118 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7fa7f-scheduler-0"
Mar 12 21:26:35.456983 master-0 kubenswrapper[31456]: I0312 21:26:35.454511 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-7fa7f-config-data"
Mar 12 21:26:35.456983 master-0 kubenswrapper[31456]: I0312 21:26:35.455216 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-7fa7f-scheduler-config-data"
Mar 12 21:26:35.456983 master-0 kubenswrapper[31456]: I0312 21:26:35.455442 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-7fa7f-scripts"
Mar 12 21:26:35.456983 master-0 kubenswrapper[31456]: I0312 21:26:35.455532 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7fa7f-scheduler-0"]
Mar 12 21:26:35.470839 master-0 kubenswrapper[31456]: I0312 21:26:35.470092 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-7fa7f-volume-lvm-iscsi-0"]
Mar 12 21:26:35.472776 master-0 kubenswrapper[31456]: I0312 21:26:35.472744 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0"
Mar 12 21:26:35.481372 master-0 kubenswrapper[31456]: I0312 21:26:35.479684 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-7fa7f-volume-lvm-iscsi-config-data"
Mar 12 21:26:35.482833 master-0 kubenswrapper[31456]: I0312 21:26:35.482471 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7fa7f-volume-lvm-iscsi-0"]
Mar 12 21:26:35.518831 master-0 kubenswrapper[31456]: I0312 21:26:35.515337 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a2f5eb4-3eff-4449-829b-2701ab9b6965-scripts\") pod \"cinder-7fa7f-scheduler-0\" (UID: \"8a2f5eb4-3eff-4449-829b-2701ab9b6965\") " pod="openstack/cinder-7fa7f-scheduler-0"
Mar 12 21:26:35.518831 master-0 kubenswrapper[31456]: I0312 21:26:35.515464 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a2f5eb4-3eff-4449-829b-2701ab9b6965-config-data\") pod \"cinder-7fa7f-scheduler-0\" (UID: \"8a2f5eb4-3eff-4449-829b-2701ab9b6965\") " pod="openstack/cinder-7fa7f-scheduler-0"
Mar 12 21:26:35.518831 master-0 kubenswrapper[31456]: I0312 21:26:35.515558 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a2f5eb4-3eff-4449-829b-2701ab9b6965-config-data-custom\") pod \"cinder-7fa7f-scheduler-0\" (UID: \"8a2f5eb4-3eff-4449-829b-2701ab9b6965\") " pod="openstack/cinder-7fa7f-scheduler-0"
Mar 12 21:26:35.518831 master-0 kubenswrapper[31456]: I0312 21:26:35.515579 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a2f5eb4-3eff-4449-829b-2701ab9b6965-combined-ca-bundle\") pod \"cinder-7fa7f-scheduler-0\" (UID: \"8a2f5eb4-3eff-4449-829b-2701ab9b6965\") " pod="openstack/cinder-7fa7f-scheduler-0"
Mar 12 21:26:35.518831 master-0 kubenswrapper[31456]: I0312 21:26:35.515609 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcm2w\" (UniqueName: \"kubernetes.io/projected/8a2f5eb4-3eff-4449-829b-2701ab9b6965-kube-api-access-qcm2w\") pod \"cinder-7fa7f-scheduler-0\" (UID: \"8a2f5eb4-3eff-4449-829b-2701ab9b6965\") " pod="openstack/cinder-7fa7f-scheduler-0"
Mar 12 21:26:35.518831 master-0 kubenswrapper[31456]: I0312 21:26:35.515650 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8a2f5eb4-3eff-4449-829b-2701ab9b6965-etc-machine-id\") pod \"cinder-7fa7f-scheduler-0\" (UID: \"8a2f5eb4-3eff-4449-829b-2701ab9b6965\") " pod="openstack/cinder-7fa7f-scheduler-0"
Mar 12 21:26:35.565912 master-0 kubenswrapper[31456]: I0312 21:26:35.564226 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-7fa7f-backup-0"]
Mar 12 21:26:35.569836 master-0 kubenswrapper[31456]: I0312 21:26:35.567387 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7fa7f-backup-0"
Mar 12 21:26:35.577841 master-0 kubenswrapper[31456]: I0312 21:26:35.576157 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-7fa7f-backup-config-data"
Mar 12 21:26:35.627840 master-0 kubenswrapper[31456]: I0312 21:26:35.623485 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7fa7f-backup-0"]
Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: I0312 21:26:35.632877 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8a2f5eb4-3eff-4449-829b-2701ab9b6965-etc-machine-id\") pod \"cinder-7fa7f-scheduler-0\" (UID: \"8a2f5eb4-3eff-4449-829b-2701ab9b6965\") " pod="openstack/cinder-7fa7f-scheduler-0"
Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: I0312 21:26:35.632996 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-var-locks-brick\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0"
Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: I0312 21:26:35.633050 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-etc-iscsi\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0"
Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: I0312 21:26:35.633162 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a2f5eb4-3eff-4449-829b-2701ab9b6965-scripts\") pod \"cinder-7fa7f-scheduler-0\" (UID: \"8a2f5eb4-3eff-4449-829b-2701ab9b6965\") " pod="openstack/cinder-7fa7f-scheduler-0"
Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: I0312 21:26:35.633185 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87e93241-daea-4fbc-b947-8edb8b8ea521-config-data-custom\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0"
Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: I0312 21:26:35.633220 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30465684-0661-4306-8903-d8aa99f95fd7-scripts\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0"
Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: I0312 21:26:35.633251 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-var-locks-cinder\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0"
Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: I0312 21:26:35.633272 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87e93241-daea-4fbc-b947-8edb8b8ea521-scripts\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0"
Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: I0312 21:26:35.633305 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-etc-nvme\") pod
\"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: I0312 21:26:35.633339 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-var-lib-cinder\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: I0312 21:26:35.633378 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-sys\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: I0312 21:26:35.633408 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-etc-machine-id\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: I0312 21:26:35.633914 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-dev\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: I0312 21:26:35.633944 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68vtz\" (UniqueName: 
\"kubernetes.io/projected/30465684-0661-4306-8903-d8aa99f95fd7-kube-api-access-68vtz\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: I0312 21:26:35.633970 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a2f5eb4-3eff-4449-829b-2701ab9b6965-config-data\") pod \"cinder-7fa7f-scheduler-0\" (UID: \"8a2f5eb4-3eff-4449-829b-2701ab9b6965\") " pod="openstack/cinder-7fa7f-scheduler-0" Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: I0312 21:26:35.634023 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-lib-modules\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: I0312 21:26:35.634071 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-etc-machine-id\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: I0312 21:26:35.634123 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-etc-iscsi\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: I0312 21:26:35.634159 31456 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87e93241-daea-4fbc-b947-8edb8b8ea521-config-data\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: I0312 21:26:35.634183 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30465684-0661-4306-8903-d8aa99f95fd7-config-data-custom\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: I0312 21:26:35.634231 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-var-lib-cinder\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: I0312 21:26:35.634268 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-var-locks-cinder\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: I0312 21:26:35.634690 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-lib-modules\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: 
I0312 21:26:35.634728 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-dev\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: I0312 21:26:35.634789 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87e93241-daea-4fbc-b947-8edb8b8ea521-combined-ca-bundle\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: I0312 21:26:35.634855 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a2f5eb4-3eff-4449-829b-2701ab9b6965-config-data-custom\") pod \"cinder-7fa7f-scheduler-0\" (UID: \"8a2f5eb4-3eff-4449-829b-2701ab9b6965\") " pod="openstack/cinder-7fa7f-scheduler-0" Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: I0312 21:26:35.634891 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a2f5eb4-3eff-4449-829b-2701ab9b6965-combined-ca-bundle\") pod \"cinder-7fa7f-scheduler-0\" (UID: \"8a2f5eb4-3eff-4449-829b-2701ab9b6965\") " pod="openstack/cinder-7fa7f-scheduler-0" Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: I0312 21:26:35.634919 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6blsv\" (UniqueName: \"kubernetes.io/projected/87e93241-daea-4fbc-b947-8edb8b8ea521-kube-api-access-6blsv\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " 
pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: I0312 21:26:35.634959 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-etc-nvme\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: I0312 21:26:35.635010 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-run\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: I0312 21:26:35.635064 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcm2w\" (UniqueName: \"kubernetes.io/projected/8a2f5eb4-3eff-4449-829b-2701ab9b6965-kube-api-access-qcm2w\") pod \"cinder-7fa7f-scheduler-0\" (UID: \"8a2f5eb4-3eff-4449-829b-2701ab9b6965\") " pod="openstack/cinder-7fa7f-scheduler-0" Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: I0312 21:26:35.635087 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-sys\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: I0312 21:26:35.635108 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-run\") pod \"cinder-7fa7f-backup-0\" (UID: 
\"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: I0312 21:26:35.635141 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30465684-0661-4306-8903-d8aa99f95fd7-config-data\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: I0312 21:26:35.635186 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30465684-0661-4306-8903-d8aa99f95fd7-combined-ca-bundle\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: I0312 21:26:35.635217 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-var-locks-brick\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.640826 master-0 kubenswrapper[31456]: I0312 21:26:35.635466 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8a2f5eb4-3eff-4449-829b-2701ab9b6965-etc-machine-id\") pod \"cinder-7fa7f-scheduler-0\" (UID: \"8a2f5eb4-3eff-4449-829b-2701ab9b6965\") " pod="openstack/cinder-7fa7f-scheduler-0" Mar 12 21:26:35.668942 master-0 kubenswrapper[31456]: I0312 21:26:35.667580 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a2f5eb4-3eff-4449-829b-2701ab9b6965-config-data\") pod \"cinder-7fa7f-scheduler-0\" 
(UID: \"8a2f5eb4-3eff-4449-829b-2701ab9b6965\") " pod="openstack/cinder-7fa7f-scheduler-0" Mar 12 21:26:35.670749 master-0 kubenswrapper[31456]: I0312 21:26:35.670700 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a2f5eb4-3eff-4449-829b-2701ab9b6965-config-data-custom\") pod \"cinder-7fa7f-scheduler-0\" (UID: \"8a2f5eb4-3eff-4449-829b-2701ab9b6965\") " pod="openstack/cinder-7fa7f-scheduler-0" Mar 12 21:26:35.672197 master-0 kubenswrapper[31456]: I0312 21:26:35.671416 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a2f5eb4-3eff-4449-829b-2701ab9b6965-scripts\") pod \"cinder-7fa7f-scheduler-0\" (UID: \"8a2f5eb4-3eff-4449-829b-2701ab9b6965\") " pod="openstack/cinder-7fa7f-scheduler-0" Mar 12 21:26:35.680315 master-0 kubenswrapper[31456]: I0312 21:26:35.680134 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a2f5eb4-3eff-4449-829b-2701ab9b6965-combined-ca-bundle\") pod \"cinder-7fa7f-scheduler-0\" (UID: \"8a2f5eb4-3eff-4449-829b-2701ab9b6965\") " pod="openstack/cinder-7fa7f-scheduler-0" Mar 12 21:26:35.687898 master-0 kubenswrapper[31456]: I0312 21:26:35.685972 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cfd77ccd9-2nhnf"] Mar 12 21:26:35.687898 master-0 kubenswrapper[31456]: I0312 21:26:35.686220 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5cfd77ccd9-2nhnf" podUID="f1aad20c-98be-4f2f-bf8c-b0433efb8ab9" containerName="dnsmasq-dns" containerID="cri-o://369b9c611de917c4d5b84766274b2790f9276d0fe6cf2b893b2980d4e79c0e80" gracePeriod=10 Mar 12 21:26:35.689202 master-0 kubenswrapper[31456]: I0312 21:26:35.688978 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5cfd77ccd9-2nhnf" Mar 12 
21:26:35.694511 master-0 kubenswrapper[31456]: I0312 21:26:35.693380 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcm2w\" (UniqueName: \"kubernetes.io/projected/8a2f5eb4-3eff-4449-829b-2701ab9b6965-kube-api-access-qcm2w\") pod \"cinder-7fa7f-scheduler-0\" (UID: \"8a2f5eb4-3eff-4449-829b-2701ab9b6965\") " pod="openstack/cinder-7fa7f-scheduler-0" Mar 12 21:26:35.723839 master-0 kubenswrapper[31456]: I0312 21:26:35.722490 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-669f6b88bf-rkg8p"] Mar 12 21:26:35.728347 master-0 kubenswrapper[31456]: I0312 21:26:35.727226 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-669f6b88bf-rkg8p" Mar 12 21:26:35.738586 master-0 kubenswrapper[31456]: I0312 21:26:35.738255 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-etc-nvme\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.738586 master-0 kubenswrapper[31456]: I0312 21:26:35.738303 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-run\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.738586 master-0 kubenswrapper[31456]: I0312 21:26:35.738339 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-sys\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.738586 master-0 kubenswrapper[31456]: I0312 
21:26:35.738357 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-run\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.738586 master-0 kubenswrapper[31456]: I0312 21:26:35.738393 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30465684-0661-4306-8903-d8aa99f95fd7-config-data\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.738586 master-0 kubenswrapper[31456]: I0312 21:26:35.738423 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30465684-0661-4306-8903-d8aa99f95fd7-combined-ca-bundle\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.738586 master-0 kubenswrapper[31456]: I0312 21:26:35.738444 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-var-locks-brick\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.738586 master-0 kubenswrapper[31456]: I0312 21:26:35.738476 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-var-locks-brick\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.738586 master-0 kubenswrapper[31456]: I0312 21:26:35.738494 31456 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-etc-iscsi\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.738586 master-0 kubenswrapper[31456]: I0312 21:26:35.738506 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-etc-nvme\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.738586 master-0 kubenswrapper[31456]: I0312 21:26:35.738544 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87e93241-daea-4fbc-b947-8edb8b8ea521-config-data-custom\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.738586 master-0 kubenswrapper[31456]: I0312 21:26:35.738568 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30465684-0661-4306-8903-d8aa99f95fd7-scripts\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.738586 master-0 kubenswrapper[31456]: I0312 21:26:35.738590 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-var-locks-cinder\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.738586 master-0 kubenswrapper[31456]: I0312 21:26:35.738630 31456 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87e93241-daea-4fbc-b947-8edb8b8ea521-scripts\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.739196 master-0 kubenswrapper[31456]: I0312 21:26:35.738655 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-etc-nvme\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.739196 master-0 kubenswrapper[31456]: I0312 21:26:35.738679 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-var-lib-cinder\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.739196 master-0 kubenswrapper[31456]: I0312 21:26:35.738702 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-sys\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.739196 master-0 kubenswrapper[31456]: I0312 21:26:35.738721 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-etc-machine-id\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.739196 master-0 kubenswrapper[31456]: I0312 21:26:35.738738 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-dev\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.739196 master-0 kubenswrapper[31456]: I0312 21:26:35.738755 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68vtz\" (UniqueName: \"kubernetes.io/projected/30465684-0661-4306-8903-d8aa99f95fd7-kube-api-access-68vtz\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.739196 master-0 kubenswrapper[31456]: I0312 21:26:35.738780 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-lib-modules\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.739196 master-0 kubenswrapper[31456]: I0312 21:26:35.738795 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-etc-machine-id\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.739196 master-0 kubenswrapper[31456]: I0312 21:26:35.738849 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-etc-iscsi\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.739196 master-0 kubenswrapper[31456]: I0312 21:26:35.738874 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/87e93241-daea-4fbc-b947-8edb8b8ea521-config-data\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.739196 master-0 kubenswrapper[31456]: I0312 21:26:35.738899 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30465684-0661-4306-8903-d8aa99f95fd7-config-data-custom\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.739196 master-0 kubenswrapper[31456]: I0312 21:26:35.738925 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-var-lib-cinder\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.739196 master-0 kubenswrapper[31456]: I0312 21:26:35.738943 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-var-locks-cinder\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.739196 master-0 kubenswrapper[31456]: I0312 21:26:35.738970 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-lib-modules\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.739196 master-0 kubenswrapper[31456]: I0312 21:26:35.738991 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-dev\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.739196 master-0 kubenswrapper[31456]: I0312 21:26:35.739017 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87e93241-daea-4fbc-b947-8edb8b8ea521-combined-ca-bundle\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.739196 master-0 kubenswrapper[31456]: I0312 21:26:35.739040 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6blsv\" (UniqueName: \"kubernetes.io/projected/87e93241-daea-4fbc-b947-8edb8b8ea521-kube-api-access-6blsv\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.741840 master-0 kubenswrapper[31456]: I0312 21:26:35.741038 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-dev\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.741840 master-0 kubenswrapper[31456]: I0312 21:26:35.741235 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-var-locks-brick\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.741840 master-0 kubenswrapper[31456]: I0312 21:26:35.741277 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-var-locks-brick\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.741840 master-0 kubenswrapper[31456]: I0312 21:26:35.741300 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-etc-iscsi\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.746837 master-0 kubenswrapper[31456]: I0312 21:26:35.743824 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-etc-nvme\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.746837 master-0 kubenswrapper[31456]: I0312 21:26:35.744305 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-var-locks-cinder\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.746837 master-0 kubenswrapper[31456]: I0312 21:26:35.744393 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-lib-modules\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.746837 master-0 kubenswrapper[31456]: I0312 21:26:35.744495 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-var-lib-cinder\") pod 
\"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.746837 master-0 kubenswrapper[31456]: I0312 21:26:35.744534 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-var-locks-cinder\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.746837 master-0 kubenswrapper[31456]: I0312 21:26:35.746073 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87e93241-daea-4fbc-b947-8edb8b8ea521-config-data-custom\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.746837 master-0 kubenswrapper[31456]: I0312 21:26:35.746168 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-var-lib-cinder\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.746837 master-0 kubenswrapper[31456]: I0312 21:26:35.746198 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-sys\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.746837 master-0 kubenswrapper[31456]: I0312 21:26:35.746223 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-etc-machine-id\") pod \"cinder-7fa7f-backup-0\" (UID: 
\"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.746837 master-0 kubenswrapper[31456]: I0312 21:26:35.746251 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-sys\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.746837 master-0 kubenswrapper[31456]: I0312 21:26:35.746272 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-run\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.746837 master-0 kubenswrapper[31456]: I0312 21:26:35.746299 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-etc-machine-id\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.746837 master-0 kubenswrapper[31456]: I0312 21:26:35.746699 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30465684-0661-4306-8903-d8aa99f95fd7-config-data\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.746837 master-0 kubenswrapper[31456]: I0312 21:26:35.746752 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-run\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.746837 
master-0 kubenswrapper[31456]: I0312 21:26:35.746787 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-dev\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.746837 master-0 kubenswrapper[31456]: I0312 21:26:35.746797 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-lib-modules\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.746837 master-0 kubenswrapper[31456]: I0312 21:26:35.746830 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-etc-iscsi\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.747628 master-0 kubenswrapper[31456]: I0312 21:26:35.747416 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87e93241-daea-4fbc-b947-8edb8b8ea521-scripts\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.753062 master-0 kubenswrapper[31456]: I0312 21:26:35.750310 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87e93241-daea-4fbc-b947-8edb8b8ea521-combined-ca-bundle\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.753781 master-0 kubenswrapper[31456]: I0312 21:26:35.753746 31456 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87e93241-daea-4fbc-b947-8edb8b8ea521-config-data\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.758107 master-0 kubenswrapper[31456]: I0312 21:26:35.758073 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30465684-0661-4306-8903-d8aa99f95fd7-combined-ca-bundle\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.766543 master-0 kubenswrapper[31456]: I0312 21:26:35.760406 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-669f6b88bf-rkg8p"] Mar 12 21:26:35.766543 master-0 kubenswrapper[31456]: I0312 21:26:35.761352 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30465684-0661-4306-8903-d8aa99f95fd7-scripts\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.766543 master-0 kubenswrapper[31456]: I0312 21:26:35.761601 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-7fa7f-scheduler-0" Mar 12 21:26:35.766543 master-0 kubenswrapper[31456]: I0312 21:26:35.761855 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30465684-0661-4306-8903-d8aa99f95fd7-config-data-custom\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.771121 master-0 kubenswrapper[31456]: I0312 21:26:35.771087 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6blsv\" (UniqueName: \"kubernetes.io/projected/87e93241-daea-4fbc-b947-8edb8b8ea521-kube-api-access-6blsv\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.772551 master-0 kubenswrapper[31456]: I0312 21:26:35.772484 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68vtz\" (UniqueName: \"kubernetes.io/projected/30465684-0661-4306-8903-d8aa99f95fd7-kube-api-access-68vtz\") pod \"cinder-7fa7f-backup-0\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.804442 master-0 kubenswrapper[31456]: I0312 21:26:35.804355 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:35.842104 master-0 kubenswrapper[31456]: I0312 21:26:35.842048 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/938fd693-cfad-4dfe-910d-4d5425053d75-config\") pod \"dnsmasq-dns-669f6b88bf-rkg8p\" (UID: \"938fd693-cfad-4dfe-910d-4d5425053d75\") " pod="openstack/dnsmasq-dns-669f6b88bf-rkg8p" Mar 12 21:26:35.842367 master-0 kubenswrapper[31456]: I0312 21:26:35.842217 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/938fd693-cfad-4dfe-910d-4d5425053d75-ovsdbserver-nb\") pod \"dnsmasq-dns-669f6b88bf-rkg8p\" (UID: \"938fd693-cfad-4dfe-910d-4d5425053d75\") " pod="openstack/dnsmasq-dns-669f6b88bf-rkg8p" Mar 12 21:26:35.842367 master-0 kubenswrapper[31456]: I0312 21:26:35.842253 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/938fd693-cfad-4dfe-910d-4d5425053d75-dns-svc\") pod \"dnsmasq-dns-669f6b88bf-rkg8p\" (UID: \"938fd693-cfad-4dfe-910d-4d5425053d75\") " pod="openstack/dnsmasq-dns-669f6b88bf-rkg8p" Mar 12 21:26:35.842367 master-0 kubenswrapper[31456]: I0312 21:26:35.842272 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzqx5\" (UniqueName: \"kubernetes.io/projected/938fd693-cfad-4dfe-910d-4d5425053d75-kube-api-access-tzqx5\") pod \"dnsmasq-dns-669f6b88bf-rkg8p\" (UID: \"938fd693-cfad-4dfe-910d-4d5425053d75\") " pod="openstack/dnsmasq-dns-669f6b88bf-rkg8p" Mar 12 21:26:35.842367 master-0 kubenswrapper[31456]: I0312 21:26:35.842319 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/938fd693-cfad-4dfe-910d-4d5425053d75-dns-swift-storage-0\") pod \"dnsmasq-dns-669f6b88bf-rkg8p\" (UID: \"938fd693-cfad-4dfe-910d-4d5425053d75\") " pod="openstack/dnsmasq-dns-669f6b88bf-rkg8p" Mar 12 21:26:35.842367 master-0 kubenswrapper[31456]: I0312 21:26:35.842338 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/938fd693-cfad-4dfe-910d-4d5425053d75-ovsdbserver-sb\") pod \"dnsmasq-dns-669f6b88bf-rkg8p\" (UID: \"938fd693-cfad-4dfe-910d-4d5425053d75\") " pod="openstack/dnsmasq-dns-669f6b88bf-rkg8p" Mar 12 21:26:35.880861 master-0 kubenswrapper[31456]: I0312 21:26:35.880798 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:35.887253 master-0 kubenswrapper[31456]: I0312 21:26:35.887169 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-7fa7f-api-0"] Mar 12 21:26:35.889191 master-0 kubenswrapper[31456]: I0312 21:26:35.889163 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:35.891386 master-0 kubenswrapper[31456]: I0312 21:26:35.891331 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-7fa7f-api-config-data" Mar 12 21:26:35.906891 master-0 kubenswrapper[31456]: I0312 21:26:35.906794 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7fa7f-api-0"] Mar 12 21:26:35.945844 master-0 kubenswrapper[31456]: I0312 21:26:35.945772 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/938fd693-cfad-4dfe-910d-4d5425053d75-ovsdbserver-nb\") pod \"dnsmasq-dns-669f6b88bf-rkg8p\" (UID: \"938fd693-cfad-4dfe-910d-4d5425053d75\") " pod="openstack/dnsmasq-dns-669f6b88bf-rkg8p" Mar 12 21:26:35.949025 master-0 kubenswrapper[31456]: I0312 21:26:35.948783 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/938fd693-cfad-4dfe-910d-4d5425053d75-dns-svc\") pod \"dnsmasq-dns-669f6b88bf-rkg8p\" (UID: \"938fd693-cfad-4dfe-910d-4d5425053d75\") " pod="openstack/dnsmasq-dns-669f6b88bf-rkg8p" Mar 12 21:26:35.949199 master-0 kubenswrapper[31456]: I0312 21:26:35.949163 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzqx5\" (UniqueName: \"kubernetes.io/projected/938fd693-cfad-4dfe-910d-4d5425053d75-kube-api-access-tzqx5\") pod \"dnsmasq-dns-669f6b88bf-rkg8p\" (UID: \"938fd693-cfad-4dfe-910d-4d5425053d75\") " pod="openstack/dnsmasq-dns-669f6b88bf-rkg8p" Mar 12 21:26:35.949583 master-0 kubenswrapper[31456]: I0312 21:26:35.949505 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/938fd693-cfad-4dfe-910d-4d5425053d75-dns-swift-storage-0\") pod \"dnsmasq-dns-669f6b88bf-rkg8p\" (UID: \"938fd693-cfad-4dfe-910d-4d5425053d75\") " 
pod="openstack/dnsmasq-dns-669f6b88bf-rkg8p" Mar 12 21:26:35.951136 master-0 kubenswrapper[31456]: I0312 21:26:35.951083 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/938fd693-cfad-4dfe-910d-4d5425053d75-ovsdbserver-sb\") pod \"dnsmasq-dns-669f6b88bf-rkg8p\" (UID: \"938fd693-cfad-4dfe-910d-4d5425053d75\") " pod="openstack/dnsmasq-dns-669f6b88bf-rkg8p" Mar 12 21:26:35.952021 master-0 kubenswrapper[31456]: I0312 21:26:35.951970 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/938fd693-cfad-4dfe-910d-4d5425053d75-config\") pod \"dnsmasq-dns-669f6b88bf-rkg8p\" (UID: \"938fd693-cfad-4dfe-910d-4d5425053d75\") " pod="openstack/dnsmasq-dns-669f6b88bf-rkg8p" Mar 12 21:26:35.959579 master-0 kubenswrapper[31456]: I0312 21:26:35.959425 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/938fd693-cfad-4dfe-910d-4d5425053d75-config\") pod \"dnsmasq-dns-669f6b88bf-rkg8p\" (UID: \"938fd693-cfad-4dfe-910d-4d5425053d75\") " pod="openstack/dnsmasq-dns-669f6b88bf-rkg8p" Mar 12 21:26:35.961205 master-0 kubenswrapper[31456]: I0312 21:26:35.961138 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/938fd693-cfad-4dfe-910d-4d5425053d75-ovsdbserver-sb\") pod \"dnsmasq-dns-669f6b88bf-rkg8p\" (UID: \"938fd693-cfad-4dfe-910d-4d5425053d75\") " pod="openstack/dnsmasq-dns-669f6b88bf-rkg8p" Mar 12 21:26:35.961281 master-0 kubenswrapper[31456]: I0312 21:26:35.961227 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/938fd693-cfad-4dfe-910d-4d5425053d75-dns-swift-storage-0\") pod \"dnsmasq-dns-669f6b88bf-rkg8p\" (UID: \"938fd693-cfad-4dfe-910d-4d5425053d75\") " 
pod="openstack/dnsmasq-dns-669f6b88bf-rkg8p" Mar 12 21:26:35.962945 master-0 kubenswrapper[31456]: I0312 21:26:35.962925 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/938fd693-cfad-4dfe-910d-4d5425053d75-dns-svc\") pod \"dnsmasq-dns-669f6b88bf-rkg8p\" (UID: \"938fd693-cfad-4dfe-910d-4d5425053d75\") " pod="openstack/dnsmasq-dns-669f6b88bf-rkg8p" Mar 12 21:26:35.970592 master-0 kubenswrapper[31456]: I0312 21:26:35.970557 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzqx5\" (UniqueName: \"kubernetes.io/projected/938fd693-cfad-4dfe-910d-4d5425053d75-kube-api-access-tzqx5\") pod \"dnsmasq-dns-669f6b88bf-rkg8p\" (UID: \"938fd693-cfad-4dfe-910d-4d5425053d75\") " pod="openstack/dnsmasq-dns-669f6b88bf-rkg8p" Mar 12 21:26:35.972080 master-0 kubenswrapper[31456]: I0312 21:26:35.972045 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/938fd693-cfad-4dfe-910d-4d5425053d75-ovsdbserver-nb\") pod \"dnsmasq-dns-669f6b88bf-rkg8p\" (UID: \"938fd693-cfad-4dfe-910d-4d5425053d75\") " pod="openstack/dnsmasq-dns-669f6b88bf-rkg8p" Mar 12 21:26:36.017780 master-0 kubenswrapper[31456]: I0312 21:26:36.017713 31456 generic.go:334] "Generic (PLEG): container finished" podID="f1aad20c-98be-4f2f-bf8c-b0433efb8ab9" containerID="369b9c611de917c4d5b84766274b2790f9276d0fe6cf2b893b2980d4e79c0e80" exitCode=0 Mar 12 21:26:36.017780 master-0 kubenswrapper[31456]: I0312 21:26:36.017777 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cfd77ccd9-2nhnf" event={"ID":"f1aad20c-98be-4f2f-bf8c-b0433efb8ab9","Type":"ContainerDied","Data":"369b9c611de917c4d5b84766274b2790f9276d0fe6cf2b893b2980d4e79c0e80"} Mar 12 21:26:36.054369 master-0 kubenswrapper[31456]: I0312 21:26:36.054315 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-logs\") pod \"cinder-7fa7f-api-0\" (UID: \"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:36.054528 master-0 kubenswrapper[31456]: I0312 21:26:36.054406 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lsrn\" (UniqueName: \"kubernetes.io/projected/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-kube-api-access-4lsrn\") pod \"cinder-7fa7f-api-0\" (UID: \"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:36.054528 master-0 kubenswrapper[31456]: I0312 21:26:36.054476 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-combined-ca-bundle\") pod \"cinder-7fa7f-api-0\" (UID: \"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:36.054528 master-0 kubenswrapper[31456]: I0312 21:26:36.054525 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-config-data\") pod \"cinder-7fa7f-api-0\" (UID: \"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:36.054635 master-0 kubenswrapper[31456]: I0312 21:26:36.054570 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-scripts\") pod \"cinder-7fa7f-api-0\" (UID: \"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:36.054635 master-0 kubenswrapper[31456]: I0312 21:26:36.054603 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-config-data-custom\") pod \"cinder-7fa7f-api-0\" (UID: \"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:36.054635 master-0 kubenswrapper[31456]: I0312 21:26:36.054623 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-etc-machine-id\") pod \"cinder-7fa7f-api-0\" (UID: \"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:36.159254 master-0 kubenswrapper[31456]: I0312 21:26:36.159030 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-config-data\") pod \"cinder-7fa7f-api-0\" (UID: \"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:36.159254 master-0 kubenswrapper[31456]: I0312 21:26:36.159115 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-scripts\") pod \"cinder-7fa7f-api-0\" (UID: \"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:36.159254 master-0 kubenswrapper[31456]: I0312 21:26:36.159141 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-config-data-custom\") pod \"cinder-7fa7f-api-0\" (UID: \"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:36.159254 master-0 kubenswrapper[31456]: I0312 21:26:36.159161 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-etc-machine-id\") pod \"cinder-7fa7f-api-0\" (UID: \"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:36.159254 master-0 kubenswrapper[31456]: I0312 21:26:36.159249 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-logs\") pod \"cinder-7fa7f-api-0\" (UID: \"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:36.159607 master-0 kubenswrapper[31456]: I0312 21:26:36.159298 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lsrn\" (UniqueName: \"kubernetes.io/projected/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-kube-api-access-4lsrn\") pod \"cinder-7fa7f-api-0\" (UID: \"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:36.159607 master-0 kubenswrapper[31456]: I0312 21:26:36.159344 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-combined-ca-bundle\") pod \"cinder-7fa7f-api-0\" (UID: \"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:36.175833 master-0 kubenswrapper[31456]: I0312 21:26:36.175383 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-logs\") pod \"cinder-7fa7f-api-0\" (UID: \"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:36.175833 master-0 kubenswrapper[31456]: I0312 21:26:36.175406 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-config-data\") pod \"cinder-7fa7f-api-0\" (UID: 
\"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:36.175833 master-0 kubenswrapper[31456]: I0312 21:26:36.175476 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-etc-machine-id\") pod \"cinder-7fa7f-api-0\" (UID: \"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:36.198667 master-0 kubenswrapper[31456]: I0312 21:26:36.195593 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lsrn\" (UniqueName: \"kubernetes.io/projected/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-kube-api-access-4lsrn\") pod \"cinder-7fa7f-api-0\" (UID: \"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:36.228501 master-0 kubenswrapper[31456]: I0312 21:26:36.228450 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-config-data-custom\") pod \"cinder-7fa7f-api-0\" (UID: \"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:36.229555 master-0 kubenswrapper[31456]: I0312 21:26:36.229505 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-combined-ca-bundle\") pod \"cinder-7fa7f-api-0\" (UID: \"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:36.230328 master-0 kubenswrapper[31456]: I0312 21:26:36.230275 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-scripts\") pod \"cinder-7fa7f-api-0\" (UID: \"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:36.266641 master-0 
kubenswrapper[31456]: I0312 21:26:36.266588 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-669f6b88bf-rkg8p" Mar 12 21:26:36.337753 master-0 kubenswrapper[31456]: I0312 21:26:36.337066 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cfd77ccd9-2nhnf" Mar 12 21:26:36.388630 master-0 kubenswrapper[31456]: I0312 21:26:36.380226 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:36.463425 master-0 kubenswrapper[31456]: I0312 21:26:36.463359 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7fa7f-scheduler-0"] Mar 12 21:26:36.485125 master-0 kubenswrapper[31456]: I0312 21:26:36.483115 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9-ovsdbserver-sb\") pod \"f1aad20c-98be-4f2f-bf8c-b0433efb8ab9\" (UID: \"f1aad20c-98be-4f2f-bf8c-b0433efb8ab9\") " Mar 12 21:26:36.485125 master-0 kubenswrapper[31456]: I0312 21:26:36.483182 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9-ovsdbserver-nb\") pod \"f1aad20c-98be-4f2f-bf8c-b0433efb8ab9\" (UID: \"f1aad20c-98be-4f2f-bf8c-b0433efb8ab9\") " Mar 12 21:26:36.485125 master-0 kubenswrapper[31456]: I0312 21:26:36.483263 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhdf7\" (UniqueName: \"kubernetes.io/projected/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9-kube-api-access-hhdf7\") pod \"f1aad20c-98be-4f2f-bf8c-b0433efb8ab9\" (UID: \"f1aad20c-98be-4f2f-bf8c-b0433efb8ab9\") " Mar 12 21:26:36.485125 master-0 kubenswrapper[31456]: I0312 21:26:36.483323 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9-dns-svc\") pod \"f1aad20c-98be-4f2f-bf8c-b0433efb8ab9\" (UID: \"f1aad20c-98be-4f2f-bf8c-b0433efb8ab9\") " Mar 12 21:26:36.485125 master-0 kubenswrapper[31456]: I0312 21:26:36.483343 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9-dns-swift-storage-0\") pod \"f1aad20c-98be-4f2f-bf8c-b0433efb8ab9\" (UID: \"f1aad20c-98be-4f2f-bf8c-b0433efb8ab9\") " Mar 12 21:26:36.485125 master-0 kubenswrapper[31456]: I0312 21:26:36.483505 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9-config\") pod \"f1aad20c-98be-4f2f-bf8c-b0433efb8ab9\" (UID: \"f1aad20c-98be-4f2f-bf8c-b0433efb8ab9\") " Mar 12 21:26:36.504921 master-0 kubenswrapper[31456]: I0312 21:26:36.504794 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9-kube-api-access-hhdf7" (OuterVolumeSpecName: "kube-api-access-hhdf7") pod "f1aad20c-98be-4f2f-bf8c-b0433efb8ab9" (UID: "f1aad20c-98be-4f2f-bf8c-b0433efb8ab9"). InnerVolumeSpecName "kube-api-access-hhdf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:26:36.511369 master-0 kubenswrapper[31456]: I0312 21:26:36.508579 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7fa7f-volume-lvm-iscsi-0"] Mar 12 21:26:36.538172 master-0 kubenswrapper[31456]: I0312 21:26:36.537735 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f1aad20c-98be-4f2f-bf8c-b0433efb8ab9" (UID: "f1aad20c-98be-4f2f-bf8c-b0433efb8ab9"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:26:36.543671 master-0 kubenswrapper[31456]: I0312 21:26:36.543599 31456 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 21:26:36.565977 master-0 kubenswrapper[31456]: I0312 21:26:36.565791 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f1aad20c-98be-4f2f-bf8c-b0433efb8ab9" (UID: "f1aad20c-98be-4f2f-bf8c-b0433efb8ab9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:26:36.571691 master-0 kubenswrapper[31456]: I0312 21:26:36.571627 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f1aad20c-98be-4f2f-bf8c-b0433efb8ab9" (UID: "f1aad20c-98be-4f2f-bf8c-b0433efb8ab9"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:26:36.588354 master-0 kubenswrapper[31456]: I0312 21:26:36.588298 31456 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:36.588354 master-0 kubenswrapper[31456]: I0312 21:26:36.588343 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhdf7\" (UniqueName: \"kubernetes.io/projected/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9-kube-api-access-hhdf7\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:36.588501 master-0 kubenswrapper[31456]: I0312 21:26:36.588403 31456 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:36.588501 master-0 kubenswrapper[31456]: I0312 21:26:36.588419 31456 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:36.618857 master-0 kubenswrapper[31456]: I0312 21:26:36.618623 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f1aad20c-98be-4f2f-bf8c-b0433efb8ab9" (UID: "f1aad20c-98be-4f2f-bf8c-b0433efb8ab9"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:26:36.632384 master-0 kubenswrapper[31456]: I0312 21:26:36.632329 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9-config" (OuterVolumeSpecName: "config") pod "f1aad20c-98be-4f2f-bf8c-b0433efb8ab9" (UID: "f1aad20c-98be-4f2f-bf8c-b0433efb8ab9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:26:36.691333 master-0 kubenswrapper[31456]: I0312 21:26:36.690588 31456 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:36.691333 master-0 kubenswrapper[31456]: I0312 21:26:36.690651 31456 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9-config\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:36.873955 master-0 kubenswrapper[31456]: I0312 21:26:36.873792 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7fa7f-backup-0"] Mar 12 21:26:36.922909 master-0 kubenswrapper[31456]: I0312 21:26:36.920971 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-669f6b88bf-rkg8p"] Mar 12 21:26:37.021564 master-0 kubenswrapper[31456]: I0312 21:26:37.021518 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7fa7f-api-0"] Mar 12 21:26:37.042963 master-0 kubenswrapper[31456]: I0312 21:26:37.042921 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" event={"ID":"87e93241-daea-4fbc-b947-8edb8b8ea521","Type":"ContainerStarted","Data":"c1ca6c5970ca499d66e05145b24948a61f36ce64b26eeccd04f0e09c94ffe0ad"} Mar 12 21:26:37.045236 master-0 kubenswrapper[31456]: I0312 21:26:37.045103 31456 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/cinder-7fa7f-api-0" event={"ID":"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb","Type":"ContainerStarted","Data":"0a66134ba389cc261c151d99e516dda2aeb98c4ecf655b0a704f98de961add1f"} Mar 12 21:26:37.053613 master-0 kubenswrapper[31456]: I0312 21:26:37.053498 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-scheduler-0" event={"ID":"8a2f5eb4-3eff-4449-829b-2701ab9b6965","Type":"ContainerStarted","Data":"1fb8f1924050d6fac66adf075d11044e428802cf9d6f8f8f393b6b1d908d1fa7"} Mar 12 21:26:37.060332 master-0 kubenswrapper[31456]: I0312 21:26:37.060240 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cfd77ccd9-2nhnf" Mar 12 21:26:37.060795 master-0 kubenswrapper[31456]: I0312 21:26:37.060760 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cfd77ccd9-2nhnf" event={"ID":"f1aad20c-98be-4f2f-bf8c-b0433efb8ab9","Type":"ContainerDied","Data":"584977f008e8a6587ca5b2e9468031e7e390ffaac6e656669fe84bcc490537fc"} Mar 12 21:26:37.060906 master-0 kubenswrapper[31456]: I0312 21:26:37.060830 31456 scope.go:117] "RemoveContainer" containerID="369b9c611de917c4d5b84766274b2790f9276d0fe6cf2b893b2980d4e79c0e80" Mar 12 21:26:37.064022 master-0 kubenswrapper[31456]: I0312 21:26:37.063960 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-backup-0" event={"ID":"30465684-0661-4306-8903-d8aa99f95fd7","Type":"ContainerStarted","Data":"3e15259a2dd1c79bed7d4844947ed4242703f3ed22a6415878015704b0e3287d"} Mar 12 21:26:37.067034 master-0 kubenswrapper[31456]: I0312 21:26:37.066970 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-669f6b88bf-rkg8p" event={"ID":"938fd693-cfad-4dfe-910d-4d5425053d75","Type":"ContainerStarted","Data":"c1a7d2deee438e52dc6c52919259adc891211c863e11b57cc4a816f3d125f0d0"} Mar 12 21:26:37.099917 master-0 kubenswrapper[31456]: I0312 21:26:37.099832 31456 scope.go:117] 
"RemoveContainer" containerID="8ef283b558a8fa0b46e6854df2a33eccfd9b960c265628ae9e4e7845362a6a50" Mar 12 21:26:37.151850 master-0 kubenswrapper[31456]: I0312 21:26:37.150793 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cfd77ccd9-2nhnf"] Mar 12 21:26:37.159974 master-0 kubenswrapper[31456]: I0312 21:26:37.159890 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5cfd77ccd9-2nhnf"] Mar 12 21:26:37.199618 master-0 kubenswrapper[31456]: I0312 21:26:37.199484 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1aad20c-98be-4f2f-bf8c-b0433efb8ab9" path="/var/lib/kubelet/pods/f1aad20c-98be-4f2f-bf8c-b0433efb8ab9/volumes" Mar 12 21:26:38.104399 master-0 kubenswrapper[31456]: I0312 21:26:38.102001 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-api-0" event={"ID":"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb","Type":"ContainerStarted","Data":"c34c2d3d85dad067e1714f21d40fd44ec510d8b1b3f2f078818dc94b8ef898b1"} Mar 12 21:26:38.154835 master-0 kubenswrapper[31456]: I0312 21:26:38.154504 31456 generic.go:334] "Generic (PLEG): container finished" podID="938fd693-cfad-4dfe-910d-4d5425053d75" containerID="8ea33eca9a4603aa9f6b064070b24c9fbee7c4fd1328df37567b4921c8b51a7e" exitCode=0 Mar 12 21:26:38.154835 master-0 kubenswrapper[31456]: I0312 21:26:38.154746 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-669f6b88bf-rkg8p" event={"ID":"938fd693-cfad-4dfe-910d-4d5425053d75","Type":"ContainerDied","Data":"8ea33eca9a4603aa9f6b064070b24c9fbee7c4fd1328df37567b4921c8b51a7e"} Mar 12 21:26:38.733969 master-0 kubenswrapper[31456]: I0312 21:26:38.732297 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-7fa7f-api-0"] Mar 12 21:26:39.204971 master-0 kubenswrapper[31456]: I0312 21:26:39.204031 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-669f6b88bf-rkg8p" 
event={"ID":"938fd693-cfad-4dfe-910d-4d5425053d75","Type":"ContainerStarted","Data":"fbe423a38874ec471a4945c741e197952789ea2318f72b7e77874dfe79f6dd8d"} Mar 12 21:26:39.204971 master-0 kubenswrapper[31456]: I0312 21:26:39.204077 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-669f6b88bf-rkg8p" Mar 12 21:26:39.208703 master-0 kubenswrapper[31456]: I0312 21:26:39.208622 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" event={"ID":"87e93241-daea-4fbc-b947-8edb8b8ea521","Type":"ContainerStarted","Data":"787e2b678b94d4b263056b4730a580a0edfede9ddde73ec39dd298914e699c9d"} Mar 12 21:26:39.208703 master-0 kubenswrapper[31456]: I0312 21:26:39.208671 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" event={"ID":"87e93241-daea-4fbc-b947-8edb8b8ea521","Type":"ContainerStarted","Data":"3e73dd87325fd97b92f555444b1fbf4163313351b2fc93de5220677674539714"} Mar 12 21:26:39.215562 master-0 kubenswrapper[31456]: I0312 21:26:39.215512 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-api-0" event={"ID":"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb","Type":"ContainerStarted","Data":"403b90ec5dfcef764d4a83fbf5130171248f5d90498d607dce29da843ad25993"} Mar 12 21:26:39.215648 master-0 kubenswrapper[31456]: I0312 21:26:39.215592 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:39.215648 master-0 kubenswrapper[31456]: I0312 21:26:39.215609 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-7fa7f-api-0" podUID="79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb" containerName="cinder-7fa7f-api-log" containerID="cri-o://c34c2d3d85dad067e1714f21d40fd44ec510d8b1b3f2f078818dc94b8ef898b1" gracePeriod=30 Mar 12 21:26:39.215648 master-0 kubenswrapper[31456]: I0312 21:26:39.215640 31456 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/cinder-7fa7f-api-0" podUID="79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb" containerName="cinder-api" containerID="cri-o://403b90ec5dfcef764d4a83fbf5130171248f5d90498d607dce29da843ad25993" gracePeriod=30 Mar 12 21:26:39.225006 master-0 kubenswrapper[31456]: I0312 21:26:39.219456 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-scheduler-0" event={"ID":"8a2f5eb4-3eff-4449-829b-2701ab9b6965","Type":"ContainerStarted","Data":"75983916a5cc174bede5f0b3476a439a89ccff686eea72bfa84975a55e6386d7"} Mar 12 21:26:39.230999 master-0 kubenswrapper[31456]: I0312 21:26:39.230880 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-backup-0" event={"ID":"30465684-0661-4306-8903-d8aa99f95fd7","Type":"ContainerStarted","Data":"99e1a7f7eb742af34c9dc5d5601c8e98d7b3792e2ab3e49ce401e0f211575ebe"} Mar 12 21:26:39.230999 master-0 kubenswrapper[31456]: I0312 21:26:39.230934 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-backup-0" event={"ID":"30465684-0661-4306-8903-d8aa99f95fd7","Type":"ContainerStarted","Data":"c57024656546ec8e36c2613e9b153874dade0ea43e1d084b92484464205d1a1b"} Mar 12 21:26:39.360858 master-0 kubenswrapper[31456]: I0312 21:26:39.355648 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" podStartSLOduration=3.313689092 podStartE2EDuration="4.35562493s" podCreationTimestamp="2026-03-12 21:26:35 +0000 UTC" firstStartedPulling="2026-03-12 21:26:36.546634134 +0000 UTC m=+1057.621239462" lastFinishedPulling="2026-03-12 21:26:37.588569972 +0000 UTC m=+1058.663175300" observedRunningTime="2026-03-12 21:26:39.337967072 +0000 UTC m=+1060.412572400" watchObservedRunningTime="2026-03-12 21:26:39.35562493 +0000 UTC m=+1060.430230258" Mar 12 21:26:39.384843 master-0 kubenswrapper[31456]: I0312 21:26:39.379731 31456 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/cinder-7fa7f-backup-0" podStartSLOduration=3.359267955 podStartE2EDuration="4.379711133s" podCreationTimestamp="2026-03-12 21:26:35 +0000 UTC" firstStartedPulling="2026-03-12 21:26:36.875070143 +0000 UTC m=+1057.949675471" lastFinishedPulling="2026-03-12 21:26:37.895513321 +0000 UTC m=+1058.970118649" observedRunningTime="2026-03-12 21:26:39.369578368 +0000 UTC m=+1060.444183706" watchObservedRunningTime="2026-03-12 21:26:39.379711133 +0000 UTC m=+1060.454316461" Mar 12 21:26:39.409892 master-0 kubenswrapper[31456]: I0312 21:26:39.407285 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-7fa7f-api-0" podStartSLOduration=4.40726314 podStartE2EDuration="4.40726314s" podCreationTimestamp="2026-03-12 21:26:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:26:39.397652787 +0000 UTC m=+1060.472258115" watchObservedRunningTime="2026-03-12 21:26:39.40726314 +0000 UTC m=+1060.481868468" Mar 12 21:26:39.441827 master-0 kubenswrapper[31456]: I0312 21:26:39.434446 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-669f6b88bf-rkg8p" podStartSLOduration=4.434425477 podStartE2EDuration="4.434425477s" podCreationTimestamp="2026-03-12 21:26:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:26:39.430212795 +0000 UTC m=+1060.504818133" watchObservedRunningTime="2026-03-12 21:26:39.434425477 +0000 UTC m=+1060.509030805" Mar 12 21:26:40.244885 master-0 kubenswrapper[31456]: I0312 21:26:40.244824 31456 generic.go:334] "Generic (PLEG): container finished" podID="64b63a16-1c32-45a8-92f8-8ce00c2c6be8" containerID="2b1cb764ba0198fdcc3a1a8ca42b9161a896f0fbd20c21a3fe120df1d21a60f3" exitCode=0 Mar 12 21:26:40.245304 master-0 kubenswrapper[31456]: I0312 21:26:40.244886 31456 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-cf2v5" event={"ID":"64b63a16-1c32-45a8-92f8-8ce00c2c6be8","Type":"ContainerDied","Data":"2b1cb764ba0198fdcc3a1a8ca42b9161a896f0fbd20c21a3fe120df1d21a60f3"} Mar 12 21:26:40.246928 master-0 kubenswrapper[31456]: I0312 21:26:40.246897 31456 generic.go:334] "Generic (PLEG): container finished" podID="79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb" containerID="403b90ec5dfcef764d4a83fbf5130171248f5d90498d607dce29da843ad25993" exitCode=0 Mar 12 21:26:40.246928 master-0 kubenswrapper[31456]: I0312 21:26:40.246923 31456 generic.go:334] "Generic (PLEG): container finished" podID="79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb" containerID="c34c2d3d85dad067e1714f21d40fd44ec510d8b1b3f2f078818dc94b8ef898b1" exitCode=143 Mar 12 21:26:40.247073 master-0 kubenswrapper[31456]: I0312 21:26:40.246953 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-api-0" event={"ID":"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb","Type":"ContainerDied","Data":"403b90ec5dfcef764d4a83fbf5130171248f5d90498d607dce29da843ad25993"} Mar 12 21:26:40.247073 master-0 kubenswrapper[31456]: I0312 21:26:40.246996 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-api-0" event={"ID":"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb","Type":"ContainerDied","Data":"c34c2d3d85dad067e1714f21d40fd44ec510d8b1b3f2f078818dc94b8ef898b1"} Mar 12 21:26:40.247073 master-0 kubenswrapper[31456]: I0312 21:26:40.247009 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-api-0" event={"ID":"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb","Type":"ContainerDied","Data":"0a66134ba389cc261c151d99e516dda2aeb98c4ecf655b0a704f98de961add1f"} Mar 12 21:26:40.247073 master-0 kubenswrapper[31456]: I0312 21:26:40.247018 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a66134ba389cc261c151d99e516dda2aeb98c4ecf655b0a704f98de961add1f" Mar 12 21:26:40.252184 master-0 
kubenswrapper[31456]: I0312 21:26:40.250262 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:40.252324 master-0 kubenswrapper[31456]: I0312 21:26:40.252276 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-scheduler-0" event={"ID":"8a2f5eb4-3eff-4449-829b-2701ab9b6965","Type":"ContainerStarted","Data":"44889ab62c63c88e5177207d476d928aaaaa3af9df39f77d51329fcbe6d62289"} Mar 12 21:26:40.353156 master-0 kubenswrapper[31456]: I0312 21:26:40.351265 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lsrn\" (UniqueName: \"kubernetes.io/projected/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-kube-api-access-4lsrn\") pod \"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb\" (UID: \"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb\") " Mar 12 21:26:40.353156 master-0 kubenswrapper[31456]: I0312 21:26:40.351417 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-scripts\") pod \"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb\" (UID: \"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb\") " Mar 12 21:26:40.353156 master-0 kubenswrapper[31456]: I0312 21:26:40.351496 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-logs\") pod \"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb\" (UID: \"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb\") " Mar 12 21:26:40.353156 master-0 kubenswrapper[31456]: I0312 21:26:40.351614 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-combined-ca-bundle\") pod \"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb\" (UID: \"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb\") " Mar 12 21:26:40.353156 master-0 kubenswrapper[31456]: 
I0312 21:26:40.351694 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-config-data\") pod \"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb\" (UID: \"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb\") " Mar 12 21:26:40.353156 master-0 kubenswrapper[31456]: I0312 21:26:40.351726 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-config-data-custom\") pod \"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb\" (UID: \"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb\") " Mar 12 21:26:40.353156 master-0 kubenswrapper[31456]: I0312 21:26:40.351779 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-etc-machine-id\") pod \"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb\" (UID: \"79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb\") " Mar 12 21:26:40.353156 master-0 kubenswrapper[31456]: I0312 21:26:40.352319 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-logs" (OuterVolumeSpecName: "logs") pod "79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb" (UID: "79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 21:26:40.372951 master-0 kubenswrapper[31456]: I0312 21:26:40.371240 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb" (UID: "79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:26:40.386869 master-0 kubenswrapper[31456]: I0312 21:26:40.374762 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-kube-api-access-4lsrn" (OuterVolumeSpecName: "kube-api-access-4lsrn") pod "79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb" (UID: "79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb"). InnerVolumeSpecName "kube-api-access-4lsrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:26:40.386869 master-0 kubenswrapper[31456]: I0312 21:26:40.378460 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb" (UID: "79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:26:40.386869 master-0 kubenswrapper[31456]: I0312 21:26:40.383063 31456 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-logs\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:40.386869 master-0 kubenswrapper[31456]: I0312 21:26:40.383101 31456 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-config-data-custom\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:40.386869 master-0 kubenswrapper[31456]: I0312 21:26:40.383114 31456 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:40.386869 master-0 kubenswrapper[31456]: I0312 21:26:40.383124 31456 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-4lsrn\" (UniqueName: \"kubernetes.io/projected/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-kube-api-access-4lsrn\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:40.386869 master-0 kubenswrapper[31456]: I0312 21:26:40.386725 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-scripts" (OuterVolumeSpecName: "scripts") pod "79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb" (UID: "79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:26:40.426449 master-0 kubenswrapper[31456]: I0312 21:26:40.426197 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-7fa7f-scheduler-0" podStartSLOduration=4.394655826 podStartE2EDuration="5.426174241s" podCreationTimestamp="2026-03-12 21:26:35 +0000 UTC" firstStartedPulling="2026-03-12 21:26:36.54355269 +0000 UTC m=+1057.618158018" lastFinishedPulling="2026-03-12 21:26:37.575071105 +0000 UTC m=+1058.649676433" observedRunningTime="2026-03-12 21:26:40.344149545 +0000 UTC m=+1061.418754873" watchObservedRunningTime="2026-03-12 21:26:40.426174241 +0000 UTC m=+1061.500779559" Mar 12 21:26:40.449018 master-0 kubenswrapper[31456]: I0312 21:26:40.445958 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb" (UID: "79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:26:40.482584 master-0 kubenswrapper[31456]: I0312 21:26:40.482034 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-config-data" (OuterVolumeSpecName: "config-data") pod "79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb" (UID: "79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:26:40.486611 master-0 kubenswrapper[31456]: I0312 21:26:40.485792 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:40.486611 master-0 kubenswrapper[31456]: I0312 21:26:40.485833 31456 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:40.486611 master-0 kubenswrapper[31456]: I0312 21:26:40.485844 31456 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:40.802946 master-0 kubenswrapper[31456]: I0312 21:26:40.801782 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-7fa7f-scheduler-0" Mar 12 21:26:40.806834 master-0 kubenswrapper[31456]: I0312 21:26:40.805622 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:40.881834 master-0 kubenswrapper[31456]: I0312 21:26:40.881583 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:41.262334 master-0 kubenswrapper[31456]: I0312 21:26:41.262263 
31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:41.311836 master-0 kubenswrapper[31456]: I0312 21:26:41.308856 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-7fa7f-api-0"] Mar 12 21:26:41.332836 master-0 kubenswrapper[31456]: I0312 21:26:41.332310 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-7fa7f-api-0"] Mar 12 21:26:41.345843 master-0 kubenswrapper[31456]: I0312 21:26:41.344864 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-7fa7f-api-0"] Mar 12 21:26:41.345843 master-0 kubenswrapper[31456]: E0312 21:26:41.345325 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb" containerName="cinder-7fa7f-api-log" Mar 12 21:26:41.345843 master-0 kubenswrapper[31456]: I0312 21:26:41.345339 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb" containerName="cinder-7fa7f-api-log" Mar 12 21:26:41.345843 master-0 kubenswrapper[31456]: E0312 21:26:41.345354 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb" containerName="cinder-api" Mar 12 21:26:41.345843 master-0 kubenswrapper[31456]: I0312 21:26:41.345360 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb" containerName="cinder-api" Mar 12 21:26:41.345843 master-0 kubenswrapper[31456]: E0312 21:26:41.345375 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1aad20c-98be-4f2f-bf8c-b0433efb8ab9" containerName="init" Mar 12 21:26:41.345843 master-0 kubenswrapper[31456]: I0312 21:26:41.345382 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1aad20c-98be-4f2f-bf8c-b0433efb8ab9" containerName="init" Mar 12 21:26:41.345843 master-0 kubenswrapper[31456]: E0312 21:26:41.345406 31456 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="f1aad20c-98be-4f2f-bf8c-b0433efb8ab9" containerName="dnsmasq-dns" Mar 12 21:26:41.345843 master-0 kubenswrapper[31456]: I0312 21:26:41.345412 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1aad20c-98be-4f2f-bf8c-b0433efb8ab9" containerName="dnsmasq-dns" Mar 12 21:26:41.345843 master-0 kubenswrapper[31456]: I0312 21:26:41.345633 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1aad20c-98be-4f2f-bf8c-b0433efb8ab9" containerName="dnsmasq-dns" Mar 12 21:26:41.345843 master-0 kubenswrapper[31456]: I0312 21:26:41.345662 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb" containerName="cinder-api" Mar 12 21:26:41.345843 master-0 kubenswrapper[31456]: I0312 21:26:41.345681 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb" containerName="cinder-7fa7f-api-log" Mar 12 21:26:41.349827 master-0 kubenswrapper[31456]: I0312 21:26:41.346748 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:41.353830 master-0 kubenswrapper[31456]: I0312 21:26:41.349908 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-7fa7f-api-config-data" Mar 12 21:26:41.353830 master-0 kubenswrapper[31456]: I0312 21:26:41.350156 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Mar 12 21:26:41.353830 master-0 kubenswrapper[31456]: I0312 21:26:41.350409 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Mar 12 21:26:41.370281 master-0 kubenswrapper[31456]: I0312 21:26:41.369923 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7fa7f-api-0"] Mar 12 21:26:41.519906 master-0 kubenswrapper[31456]: I0312 21:26:41.519376 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znwrw\" (UniqueName: \"kubernetes.io/projected/ae2814de-f43e-4dac-a9bd-54349d25a331-kube-api-access-znwrw\") pod \"cinder-7fa7f-api-0\" (UID: \"ae2814de-f43e-4dac-a9bd-54349d25a331\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:41.519906 master-0 kubenswrapper[31456]: I0312 21:26:41.519460 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae2814de-f43e-4dac-a9bd-54349d25a331-combined-ca-bundle\") pod \"cinder-7fa7f-api-0\" (UID: \"ae2814de-f43e-4dac-a9bd-54349d25a331\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:41.520167 master-0 kubenswrapper[31456]: I0312 21:26:41.520023 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae2814de-f43e-4dac-a9bd-54349d25a331-internal-tls-certs\") pod \"cinder-7fa7f-api-0\" (UID: \"ae2814de-f43e-4dac-a9bd-54349d25a331\") " 
pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:41.524829 master-0 kubenswrapper[31456]: I0312 21:26:41.520238 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae2814de-f43e-4dac-a9bd-54349d25a331-config-data-custom\") pod \"cinder-7fa7f-api-0\" (UID: \"ae2814de-f43e-4dac-a9bd-54349d25a331\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:41.524829 master-0 kubenswrapper[31456]: I0312 21:26:41.520662 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae2814de-f43e-4dac-a9bd-54349d25a331-scripts\") pod \"cinder-7fa7f-api-0\" (UID: \"ae2814de-f43e-4dac-a9bd-54349d25a331\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:41.524829 master-0 kubenswrapper[31456]: I0312 21:26:41.520863 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae2814de-f43e-4dac-a9bd-54349d25a331-config-data\") pod \"cinder-7fa7f-api-0\" (UID: \"ae2814de-f43e-4dac-a9bd-54349d25a331\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:41.524829 master-0 kubenswrapper[31456]: I0312 21:26:41.520943 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae2814de-f43e-4dac-a9bd-54349d25a331-public-tls-certs\") pod \"cinder-7fa7f-api-0\" (UID: \"ae2814de-f43e-4dac-a9bd-54349d25a331\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:41.524829 master-0 kubenswrapper[31456]: I0312 21:26:41.521041 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae2814de-f43e-4dac-a9bd-54349d25a331-logs\") pod \"cinder-7fa7f-api-0\" (UID: \"ae2814de-f43e-4dac-a9bd-54349d25a331\") " 
pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:41.524829 master-0 kubenswrapper[31456]: I0312 21:26:41.521105 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae2814de-f43e-4dac-a9bd-54349d25a331-etc-machine-id\") pod \"cinder-7fa7f-api-0\" (UID: \"ae2814de-f43e-4dac-a9bd-54349d25a331\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:41.628150 master-0 kubenswrapper[31456]: I0312 21:26:41.628065 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae2814de-f43e-4dac-a9bd-54349d25a331-combined-ca-bundle\") pod \"cinder-7fa7f-api-0\" (UID: \"ae2814de-f43e-4dac-a9bd-54349d25a331\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:41.628541 master-0 kubenswrapper[31456]: I0312 21:26:41.628503 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae2814de-f43e-4dac-a9bd-54349d25a331-internal-tls-certs\") pod \"cinder-7fa7f-api-0\" (UID: \"ae2814de-f43e-4dac-a9bd-54349d25a331\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:41.628618 master-0 kubenswrapper[31456]: I0312 21:26:41.628572 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae2814de-f43e-4dac-a9bd-54349d25a331-config-data-custom\") pod \"cinder-7fa7f-api-0\" (UID: \"ae2814de-f43e-4dac-a9bd-54349d25a331\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:41.628883 master-0 kubenswrapper[31456]: I0312 21:26:41.628856 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae2814de-f43e-4dac-a9bd-54349d25a331-scripts\") pod \"cinder-7fa7f-api-0\" (UID: \"ae2814de-f43e-4dac-a9bd-54349d25a331\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:41.628964 master-0 
kubenswrapper[31456]: I0312 21:26:41.628916 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae2814de-f43e-4dac-a9bd-54349d25a331-config-data\") pod \"cinder-7fa7f-api-0\" (UID: \"ae2814de-f43e-4dac-a9bd-54349d25a331\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:41.629019 master-0 kubenswrapper[31456]: I0312 21:26:41.628959 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae2814de-f43e-4dac-a9bd-54349d25a331-public-tls-certs\") pod \"cinder-7fa7f-api-0\" (UID: \"ae2814de-f43e-4dac-a9bd-54349d25a331\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:41.629068 master-0 kubenswrapper[31456]: I0312 21:26:41.629055 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae2814de-f43e-4dac-a9bd-54349d25a331-logs\") pod \"cinder-7fa7f-api-0\" (UID: \"ae2814de-f43e-4dac-a9bd-54349d25a331\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:41.629121 master-0 kubenswrapper[31456]: I0312 21:26:41.629106 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae2814de-f43e-4dac-a9bd-54349d25a331-etc-machine-id\") pod \"cinder-7fa7f-api-0\" (UID: \"ae2814de-f43e-4dac-a9bd-54349d25a331\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:41.629169 master-0 kubenswrapper[31456]: I0312 21:26:41.629159 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znwrw\" (UniqueName: \"kubernetes.io/projected/ae2814de-f43e-4dac-a9bd-54349d25a331-kube-api-access-znwrw\") pod \"cinder-7fa7f-api-0\" (UID: \"ae2814de-f43e-4dac-a9bd-54349d25a331\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:41.630239 master-0 kubenswrapper[31456]: I0312 21:26:41.630152 31456 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae2814de-f43e-4dac-a9bd-54349d25a331-logs\") pod \"cinder-7fa7f-api-0\" (UID: \"ae2814de-f43e-4dac-a9bd-54349d25a331\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:41.630239 master-0 kubenswrapper[31456]: I0312 21:26:41.630217 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae2814de-f43e-4dac-a9bd-54349d25a331-etc-machine-id\") pod \"cinder-7fa7f-api-0\" (UID: \"ae2814de-f43e-4dac-a9bd-54349d25a331\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:41.632153 master-0 kubenswrapper[31456]: I0312 21:26:41.632098 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae2814de-f43e-4dac-a9bd-54349d25a331-internal-tls-certs\") pod \"cinder-7fa7f-api-0\" (UID: \"ae2814de-f43e-4dac-a9bd-54349d25a331\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:41.639358 master-0 kubenswrapper[31456]: I0312 21:26:41.639250 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae2814de-f43e-4dac-a9bd-54349d25a331-config-data-custom\") pod \"cinder-7fa7f-api-0\" (UID: \"ae2814de-f43e-4dac-a9bd-54349d25a331\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:41.641381 master-0 kubenswrapper[31456]: I0312 21:26:41.641338 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae2814de-f43e-4dac-a9bd-54349d25a331-public-tls-certs\") pod \"cinder-7fa7f-api-0\" (UID: \"ae2814de-f43e-4dac-a9bd-54349d25a331\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:41.641693 master-0 kubenswrapper[31456]: I0312 21:26:41.641629 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae2814de-f43e-4dac-a9bd-54349d25a331-config-data\") pod 
\"cinder-7fa7f-api-0\" (UID: \"ae2814de-f43e-4dac-a9bd-54349d25a331\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:41.642356 master-0 kubenswrapper[31456]: I0312 21:26:41.642307 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae2814de-f43e-4dac-a9bd-54349d25a331-scripts\") pod \"cinder-7fa7f-api-0\" (UID: \"ae2814de-f43e-4dac-a9bd-54349d25a331\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:41.644268 master-0 kubenswrapper[31456]: I0312 21:26:41.644225 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae2814de-f43e-4dac-a9bd-54349d25a331-combined-ca-bundle\") pod \"cinder-7fa7f-api-0\" (UID: \"ae2814de-f43e-4dac-a9bd-54349d25a331\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:41.650192 master-0 kubenswrapper[31456]: I0312 21:26:41.650151 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znwrw\" (UniqueName: \"kubernetes.io/projected/ae2814de-f43e-4dac-a9bd-54349d25a331-kube-api-access-znwrw\") pod \"cinder-7fa7f-api-0\" (UID: \"ae2814de-f43e-4dac-a9bd-54349d25a331\") " pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:41.673413 master-0 kubenswrapper[31456]: I0312 21:26:41.673352 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:41.919830 master-0 kubenswrapper[31456]: I0312 21:26:41.919238 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-sync-cf2v5" Mar 12 21:26:42.042829 master-0 kubenswrapper[31456]: I0312 21:26:42.041933 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/64b63a16-1c32-45a8-92f8-8ce00c2c6be8-config-data-merged\") pod \"64b63a16-1c32-45a8-92f8-8ce00c2c6be8\" (UID: \"64b63a16-1c32-45a8-92f8-8ce00c2c6be8\") " Mar 12 21:26:42.042829 master-0 kubenswrapper[31456]: I0312 21:26:42.041994 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf5mf\" (UniqueName: \"kubernetes.io/projected/64b63a16-1c32-45a8-92f8-8ce00c2c6be8-kube-api-access-cf5mf\") pod \"64b63a16-1c32-45a8-92f8-8ce00c2c6be8\" (UID: \"64b63a16-1c32-45a8-92f8-8ce00c2c6be8\") " Mar 12 21:26:42.042829 master-0 kubenswrapper[31456]: I0312 21:26:42.042169 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64b63a16-1c32-45a8-92f8-8ce00c2c6be8-combined-ca-bundle\") pod \"64b63a16-1c32-45a8-92f8-8ce00c2c6be8\" (UID: \"64b63a16-1c32-45a8-92f8-8ce00c2c6be8\") " Mar 12 21:26:42.042829 master-0 kubenswrapper[31456]: I0312 21:26:42.042201 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64b63a16-1c32-45a8-92f8-8ce00c2c6be8-scripts\") pod \"64b63a16-1c32-45a8-92f8-8ce00c2c6be8\" (UID: \"64b63a16-1c32-45a8-92f8-8ce00c2c6be8\") " Mar 12 21:26:42.042829 master-0 kubenswrapper[31456]: I0312 21:26:42.042299 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64b63a16-1c32-45a8-92f8-8ce00c2c6be8-config-data\") pod \"64b63a16-1c32-45a8-92f8-8ce00c2c6be8\" (UID: \"64b63a16-1c32-45a8-92f8-8ce00c2c6be8\") " Mar 12 21:26:42.042829 master-0 kubenswrapper[31456]: I0312 21:26:42.042325 31456 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/64b63a16-1c32-45a8-92f8-8ce00c2c6be8-etc-podinfo\") pod \"64b63a16-1c32-45a8-92f8-8ce00c2c6be8\" (UID: \"64b63a16-1c32-45a8-92f8-8ce00c2c6be8\") " Mar 12 21:26:42.045177 master-0 kubenswrapper[31456]: I0312 21:26:42.045130 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64b63a16-1c32-45a8-92f8-8ce00c2c6be8-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "64b63a16-1c32-45a8-92f8-8ce00c2c6be8" (UID: "64b63a16-1c32-45a8-92f8-8ce00c2c6be8"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 21:26:42.054830 master-0 kubenswrapper[31456]: I0312 21:26:42.046206 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/64b63a16-1c32-45a8-92f8-8ce00c2c6be8-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "64b63a16-1c32-45a8-92f8-8ce00c2c6be8" (UID: "64b63a16-1c32-45a8-92f8-8ce00c2c6be8"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 12 21:26:42.054830 master-0 kubenswrapper[31456]: I0312 21:26:42.049717 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64b63a16-1c32-45a8-92f8-8ce00c2c6be8-kube-api-access-cf5mf" (OuterVolumeSpecName: "kube-api-access-cf5mf") pod "64b63a16-1c32-45a8-92f8-8ce00c2c6be8" (UID: "64b63a16-1c32-45a8-92f8-8ce00c2c6be8"). InnerVolumeSpecName "kube-api-access-cf5mf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:26:42.054830 master-0 kubenswrapper[31456]: I0312 21:26:42.053689 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64b63a16-1c32-45a8-92f8-8ce00c2c6be8-scripts" (OuterVolumeSpecName: "scripts") pod "64b63a16-1c32-45a8-92f8-8ce00c2c6be8" (UID: "64b63a16-1c32-45a8-92f8-8ce00c2c6be8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:26:42.096845 master-0 kubenswrapper[31456]: I0312 21:26:42.089963 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64b63a16-1c32-45a8-92f8-8ce00c2c6be8-config-data" (OuterVolumeSpecName: "config-data") pod "64b63a16-1c32-45a8-92f8-8ce00c2c6be8" (UID: "64b63a16-1c32-45a8-92f8-8ce00c2c6be8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:26:42.175604 master-0 kubenswrapper[31456]: I0312 21:26:42.173563 31456 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64b63a16-1c32-45a8-92f8-8ce00c2c6be8-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:42.175604 master-0 kubenswrapper[31456]: I0312 21:26:42.173609 31456 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/64b63a16-1c32-45a8-92f8-8ce00c2c6be8-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:42.175604 master-0 kubenswrapper[31456]: I0312 21:26:42.173620 31456 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/64b63a16-1c32-45a8-92f8-8ce00c2c6be8-config-data-merged\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:42.175604 master-0 kubenswrapper[31456]: I0312 21:26:42.173632 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cf5mf\" (UniqueName: 
\"kubernetes.io/projected/64b63a16-1c32-45a8-92f8-8ce00c2c6be8-kube-api-access-cf5mf\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:42.175604 master-0 kubenswrapper[31456]: I0312 21:26:42.173641 31456 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64b63a16-1c32-45a8-92f8-8ce00c2c6be8-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:42.189092 master-0 kubenswrapper[31456]: I0312 21:26:42.187326 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64b63a16-1c32-45a8-92f8-8ce00c2c6be8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64b63a16-1c32-45a8-92f8-8ce00c2c6be8" (UID: "64b63a16-1c32-45a8-92f8-8ce00c2c6be8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:26:42.273466 master-0 kubenswrapper[31456]: I0312 21:26:42.273352 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-cf2v5" event={"ID":"64b63a16-1c32-45a8-92f8-8ce00c2c6be8","Type":"ContainerDied","Data":"66d13ea081c5deead09abb4d9389a8705082d23c82aa7fe9fd83539d181ca424"} Mar 12 21:26:42.273466 master-0 kubenswrapper[31456]: I0312 21:26:42.273405 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66d13ea081c5deead09abb4d9389a8705082d23c82aa7fe9fd83539d181ca424" Mar 12 21:26:42.273466 master-0 kubenswrapper[31456]: I0312 21:26:42.273369 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-sync-cf2v5" Mar 12 21:26:42.276464 master-0 kubenswrapper[31456]: I0312 21:26:42.276433 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64b63a16-1c32-45a8-92f8-8ce00c2c6be8-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:42.912831 master-0 kubenswrapper[31456]: I0312 21:26:42.908957 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7fa7f-api-0"] Mar 12 21:26:43.195613 master-0 kubenswrapper[31456]: I0312 21:26:43.195554 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb" path="/var/lib/kubelet/pods/79f9c3b7-ff7e-43bb-a8ab-e84cb01704eb/volumes" Mar 12 21:26:43.246253 master-0 kubenswrapper[31456]: I0312 21:26:43.240524 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-neutron-agent-68659c9b47-m44wq"] Mar 12 21:26:43.246253 master-0 kubenswrapper[31456]: E0312 21:26:43.241068 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64b63a16-1c32-45a8-92f8-8ce00c2c6be8" containerName="init" Mar 12 21:26:43.246253 master-0 kubenswrapper[31456]: I0312 21:26:43.241083 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="64b63a16-1c32-45a8-92f8-8ce00c2c6be8" containerName="init" Mar 12 21:26:43.246253 master-0 kubenswrapper[31456]: E0312 21:26:43.241112 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64b63a16-1c32-45a8-92f8-8ce00c2c6be8" containerName="ironic-db-sync" Mar 12 21:26:43.246253 master-0 kubenswrapper[31456]: I0312 21:26:43.241119 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="64b63a16-1c32-45a8-92f8-8ce00c2c6be8" containerName="ironic-db-sync" Mar 12 21:26:43.246253 master-0 kubenswrapper[31456]: I0312 21:26:43.241388 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="64b63a16-1c32-45a8-92f8-8ce00c2c6be8" 
containerName="ironic-db-sync" Mar 12 21:26:43.311207 master-0 kubenswrapper[31456]: I0312 21:26:43.308778 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-68659c9b47-m44wq"] Mar 12 21:26:43.311207 master-0 kubenswrapper[31456]: I0312 21:26:43.308912 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-neutron-agent-68659c9b47-m44wq" Mar 12 21:26:43.311735 master-0 kubenswrapper[31456]: I0312 21:26:43.311489 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-ironic-neutron-agent-config-data" Mar 12 21:26:43.330521 master-0 kubenswrapper[31456]: I0312 21:26:43.330280 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-create-jsnft"] Mar 12 21:26:43.331690 master-0 kubenswrapper[31456]: I0312 21:26:43.331645 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-jsnft" Mar 12 21:26:43.403914 master-0 kubenswrapper[31456]: I0312 21:26:43.393225 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-jsnft"] Mar 12 21:26:43.429661 master-0 kubenswrapper[31456]: I0312 21:26:43.428114 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f0319b-6d84-4282-bbb5-9636e1b62647-combined-ca-bundle\") pod \"ironic-neutron-agent-68659c9b47-m44wq\" (UID: \"33f0319b-6d84-4282-bbb5-9636e1b62647\") " pod="openstack/ironic-neutron-agent-68659c9b47-m44wq" Mar 12 21:26:43.429661 master-0 kubenswrapper[31456]: I0312 21:26:43.428239 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc0e046b-34a2-4a0f-a4e6-87aad153b7a1-operator-scripts\") pod \"ironic-inspector-db-create-jsnft\" (UID: 
\"cc0e046b-34a2-4a0f-a4e6-87aad153b7a1\") " pod="openstack/ironic-inspector-db-create-jsnft" Mar 12 21:26:43.429661 master-0 kubenswrapper[31456]: I0312 21:26:43.428278 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g8qq\" (UniqueName: \"kubernetes.io/projected/33f0319b-6d84-4282-bbb5-9636e1b62647-kube-api-access-9g8qq\") pod \"ironic-neutron-agent-68659c9b47-m44wq\" (UID: \"33f0319b-6d84-4282-bbb5-9636e1b62647\") " pod="openstack/ironic-neutron-agent-68659c9b47-m44wq" Mar 12 21:26:43.429661 master-0 kubenswrapper[31456]: I0312 21:26:43.428331 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94hrv\" (UniqueName: \"kubernetes.io/projected/cc0e046b-34a2-4a0f-a4e6-87aad153b7a1-kube-api-access-94hrv\") pod \"ironic-inspector-db-create-jsnft\" (UID: \"cc0e046b-34a2-4a0f-a4e6-87aad153b7a1\") " pod="openstack/ironic-inspector-db-create-jsnft" Mar 12 21:26:43.429661 master-0 kubenswrapper[31456]: I0312 21:26:43.428373 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/33f0319b-6d84-4282-bbb5-9636e1b62647-config\") pod \"ironic-neutron-agent-68659c9b47-m44wq\" (UID: \"33f0319b-6d84-4282-bbb5-9636e1b62647\") " pod="openstack/ironic-neutron-agent-68659c9b47-m44wq" Mar 12 21:26:43.458437 master-0 kubenswrapper[31456]: I0312 21:26:43.455680 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-api-0" event={"ID":"ae2814de-f43e-4dac-a9bd-54349d25a331","Type":"ContainerStarted","Data":"2dadbc89d69e5ff64ed9e8da6c90e87b3dd6a20abc8bf7c4b55fb7dcc219c1bc"} Mar 12 21:26:43.532368 master-0 kubenswrapper[31456]: I0312 21:26:43.532318 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/33f0319b-6d84-4282-bbb5-9636e1b62647-combined-ca-bundle\") pod \"ironic-neutron-agent-68659c9b47-m44wq\" (UID: \"33f0319b-6d84-4282-bbb5-9636e1b62647\") " pod="openstack/ironic-neutron-agent-68659c9b47-m44wq" Mar 12 21:26:43.541209 master-0 kubenswrapper[31456]: I0312 21:26:43.541009 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc0e046b-34a2-4a0f-a4e6-87aad153b7a1-operator-scripts\") pod \"ironic-inspector-db-create-jsnft\" (UID: \"cc0e046b-34a2-4a0f-a4e6-87aad153b7a1\") " pod="openstack/ironic-inspector-db-create-jsnft" Mar 12 21:26:43.548150 master-0 kubenswrapper[31456]: I0312 21:26:43.548001 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g8qq\" (UniqueName: \"kubernetes.io/projected/33f0319b-6d84-4282-bbb5-9636e1b62647-kube-api-access-9g8qq\") pod \"ironic-neutron-agent-68659c9b47-m44wq\" (UID: \"33f0319b-6d84-4282-bbb5-9636e1b62647\") " pod="openstack/ironic-neutron-agent-68659c9b47-m44wq" Mar 12 21:26:43.548485 master-0 kubenswrapper[31456]: I0312 21:26:43.548469 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94hrv\" (UniqueName: \"kubernetes.io/projected/cc0e046b-34a2-4a0f-a4e6-87aad153b7a1-kube-api-access-94hrv\") pod \"ironic-inspector-db-create-jsnft\" (UID: \"cc0e046b-34a2-4a0f-a4e6-87aad153b7a1\") " pod="openstack/ironic-inspector-db-create-jsnft" Mar 12 21:26:43.548642 master-0 kubenswrapper[31456]: I0312 21:26:43.548627 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/33f0319b-6d84-4282-bbb5-9636e1b62647-config\") pod \"ironic-neutron-agent-68659c9b47-m44wq\" (UID: \"33f0319b-6d84-4282-bbb5-9636e1b62647\") " pod="openstack/ironic-neutron-agent-68659c9b47-m44wq" Mar 12 21:26:43.558908 master-0 kubenswrapper[31456]: I0312 21:26:43.558859 31456 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/33f0319b-6d84-4282-bbb5-9636e1b62647-config\") pod \"ironic-neutron-agent-68659c9b47-m44wq\" (UID: \"33f0319b-6d84-4282-bbb5-9636e1b62647\") " pod="openstack/ironic-neutron-agent-68659c9b47-m44wq" Mar 12 21:26:43.560195 master-0 kubenswrapper[31456]: I0312 21:26:43.560040 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc0e046b-34a2-4a0f-a4e6-87aad153b7a1-operator-scripts\") pod \"ironic-inspector-db-create-jsnft\" (UID: \"cc0e046b-34a2-4a0f-a4e6-87aad153b7a1\") " pod="openstack/ironic-inspector-db-create-jsnft" Mar 12 21:26:43.574909 master-0 kubenswrapper[31456]: I0312 21:26:43.572003 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-e3e1-account-create-update-d66hf"] Mar 12 21:26:43.574909 master-0 kubenswrapper[31456]: I0312 21:26:43.573940 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-e3e1-account-create-update-d66hf" Mar 12 21:26:43.586494 master-0 kubenswrapper[31456]: I0312 21:26:43.579489 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-db-secret" Mar 12 21:26:43.590435 master-0 kubenswrapper[31456]: I0312 21:26:43.587427 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f0319b-6d84-4282-bbb5-9636e1b62647-combined-ca-bundle\") pod \"ironic-neutron-agent-68659c9b47-m44wq\" (UID: \"33f0319b-6d84-4282-bbb5-9636e1b62647\") " pod="openstack/ironic-neutron-agent-68659c9b47-m44wq" Mar 12 21:26:43.644833 master-0 kubenswrapper[31456]: I0312 21:26:43.638651 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94hrv\" (UniqueName: \"kubernetes.io/projected/cc0e046b-34a2-4a0f-a4e6-87aad153b7a1-kube-api-access-94hrv\") pod \"ironic-inspector-db-create-jsnft\" (UID: \"cc0e046b-34a2-4a0f-a4e6-87aad153b7a1\") " pod="openstack/ironic-inspector-db-create-jsnft" Mar 12 21:26:43.650834 master-0 kubenswrapper[31456]: I0312 21:26:43.650522 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g8qq\" (UniqueName: \"kubernetes.io/projected/33f0319b-6d84-4282-bbb5-9636e1b62647-kube-api-access-9g8qq\") pod \"ironic-neutron-agent-68659c9b47-m44wq\" (UID: \"33f0319b-6d84-4282-bbb5-9636e1b62647\") " pod="openstack/ironic-neutron-agent-68659c9b47-m44wq" Mar 12 21:26:43.653522 master-0 kubenswrapper[31456]: I0312 21:26:43.653428 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlvvx\" (UniqueName: \"kubernetes.io/projected/c6ae05fd-97f9-4b9b-8067-70ef070e1de7-kube-api-access-xlvvx\") pod \"ironic-inspector-e3e1-account-create-update-d66hf\" (UID: \"c6ae05fd-97f9-4b9b-8067-70ef070e1de7\") " 
pod="openstack/ironic-inspector-e3e1-account-create-update-d66hf" Mar 12 21:26:43.653618 master-0 kubenswrapper[31456]: I0312 21:26:43.653580 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6ae05fd-97f9-4b9b-8067-70ef070e1de7-operator-scripts\") pod \"ironic-inspector-e3e1-account-create-update-d66hf\" (UID: \"c6ae05fd-97f9-4b9b-8067-70ef070e1de7\") " pod="openstack/ironic-inspector-e3e1-account-create-update-d66hf" Mar 12 21:26:43.655025 master-0 kubenswrapper[31456]: I0312 21:26:43.655001 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-e3e1-account-create-update-d66hf"] Mar 12 21:26:43.691833 master-0 kubenswrapper[31456]: I0312 21:26:43.688162 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-669f6b88bf-rkg8p"] Mar 12 21:26:43.691833 master-0 kubenswrapper[31456]: I0312 21:26:43.688417 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-669f6b88bf-rkg8p" podUID="938fd693-cfad-4dfe-910d-4d5425053d75" containerName="dnsmasq-dns" containerID="cri-o://fbe423a38874ec471a4945c741e197952789ea2318f72b7e77874dfe79f6dd8d" gracePeriod=10 Mar 12 21:26:43.694999 master-0 kubenswrapper[31456]: I0312 21:26:43.694959 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-669f6b88bf-rkg8p" Mar 12 21:26:43.742289 master-0 kubenswrapper[31456]: I0312 21:26:43.740947 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c46756b57-z2p86"] Mar 12 21:26:43.744025 master-0 kubenswrapper[31456]: I0312 21:26:43.743866 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c46756b57-z2p86" Mar 12 21:26:43.756940 master-0 kubenswrapper[31456]: I0312 21:26:43.755164 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6ae05fd-97f9-4b9b-8067-70ef070e1de7-operator-scripts\") pod \"ironic-inspector-e3e1-account-create-update-d66hf\" (UID: \"c6ae05fd-97f9-4b9b-8067-70ef070e1de7\") " pod="openstack/ironic-inspector-e3e1-account-create-update-d66hf" Mar 12 21:26:43.756940 master-0 kubenswrapper[31456]: I0312 21:26:43.755360 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlvvx\" (UniqueName: \"kubernetes.io/projected/c6ae05fd-97f9-4b9b-8067-70ef070e1de7-kube-api-access-xlvvx\") pod \"ironic-inspector-e3e1-account-create-update-d66hf\" (UID: \"c6ae05fd-97f9-4b9b-8067-70ef070e1de7\") " pod="openstack/ironic-inspector-e3e1-account-create-update-d66hf" Mar 12 21:26:43.756940 master-0 kubenswrapper[31456]: I0312 21:26:43.756777 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6ae05fd-97f9-4b9b-8067-70ef070e1de7-operator-scripts\") pod \"ironic-inspector-e3e1-account-create-update-d66hf\" (UID: \"c6ae05fd-97f9-4b9b-8067-70ef070e1de7\") " pod="openstack/ironic-inspector-e3e1-account-create-update-d66hf" Mar 12 21:26:43.776055 master-0 kubenswrapper[31456]: I0312 21:26:43.774842 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-neutron-agent-68659c9b47-m44wq" Mar 12 21:26:43.817578 master-0 kubenswrapper[31456]: I0312 21:26:43.816571 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlvvx\" (UniqueName: \"kubernetes.io/projected/c6ae05fd-97f9-4b9b-8067-70ef070e1de7-kube-api-access-xlvvx\") pod \"ironic-inspector-e3e1-account-create-update-d66hf\" (UID: \"c6ae05fd-97f9-4b9b-8067-70ef070e1de7\") " pod="openstack/ironic-inspector-e3e1-account-create-update-d66hf" Mar 12 21:26:43.828918 master-0 kubenswrapper[31456]: I0312 21:26:43.826203 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-jsnft" Mar 12 21:26:43.884374 master-0 kubenswrapper[31456]: I0312 21:26:43.878048 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae-ovsdbserver-sb\") pod \"dnsmasq-dns-6c46756b57-z2p86\" (UID: \"d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae\") " pod="openstack/dnsmasq-dns-6c46756b57-z2p86" Mar 12 21:26:43.884374 master-0 kubenswrapper[31456]: I0312 21:26:43.878154 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt2ls\" (UniqueName: \"kubernetes.io/projected/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae-kube-api-access-qt2ls\") pod \"dnsmasq-dns-6c46756b57-z2p86\" (UID: \"d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae\") " pod="openstack/dnsmasq-dns-6c46756b57-z2p86" Mar 12 21:26:43.884374 master-0 kubenswrapper[31456]: I0312 21:26:43.878187 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae-ovsdbserver-nb\") pod \"dnsmasq-dns-6c46756b57-z2p86\" (UID: \"d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae\") " 
pod="openstack/dnsmasq-dns-6c46756b57-z2p86" Mar 12 21:26:43.884374 master-0 kubenswrapper[31456]: I0312 21:26:43.878257 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae-dns-svc\") pod \"dnsmasq-dns-6c46756b57-z2p86\" (UID: \"d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae\") " pod="openstack/dnsmasq-dns-6c46756b57-z2p86" Mar 12 21:26:43.884374 master-0 kubenswrapper[31456]: I0312 21:26:43.878325 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae-dns-swift-storage-0\") pod \"dnsmasq-dns-6c46756b57-z2p86\" (UID: \"d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae\") " pod="openstack/dnsmasq-dns-6c46756b57-z2p86" Mar 12 21:26:43.884374 master-0 kubenswrapper[31456]: I0312 21:26:43.878459 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae-config\") pod \"dnsmasq-dns-6c46756b57-z2p86\" (UID: \"d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae\") " pod="openstack/dnsmasq-dns-6c46756b57-z2p86" Mar 12 21:26:43.894570 master-0 kubenswrapper[31456]: I0312 21:26:43.890433 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c46756b57-z2p86"] Mar 12 21:26:43.923858 master-0 kubenswrapper[31456]: I0312 21:26:43.912861 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-6fd7f8b47c-vnhs9"] Mar 12 21:26:43.923858 master-0 kubenswrapper[31456]: I0312 21:26:43.916301 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-6fd7f8b47c-vnhs9" Mar 12 21:26:43.923858 master-0 kubenswrapper[31456]: I0312 21:26:43.920013 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 12 21:26:43.923858 master-0 kubenswrapper[31456]: I0312 21:26:43.920368 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-config-data" Mar 12 21:26:43.923858 master-0 kubenswrapper[31456]: I0312 21:26:43.920561 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-scripts" Mar 12 21:26:43.923858 master-0 kubenswrapper[31456]: I0312 21:26:43.920657 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-transport" Mar 12 21:26:43.923858 master-0 kubenswrapper[31456]: I0312 21:26:43.921346 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Mar 12 21:26:43.958829 master-0 kubenswrapper[31456]: I0312 21:26:43.940316 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-6fd7f8b47c-vnhs9"] Mar 12 21:26:43.983044 master-0 kubenswrapper[31456]: I0312 21:26:43.980929 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae-config\") pod \"dnsmasq-dns-6c46756b57-z2p86\" (UID: \"d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae\") " pod="openstack/dnsmasq-dns-6c46756b57-z2p86" Mar 12 21:26:43.983044 master-0 kubenswrapper[31456]: I0312 21:26:43.981022 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae-ovsdbserver-sb\") pod \"dnsmasq-dns-6c46756b57-z2p86\" (UID: \"d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae\") " pod="openstack/dnsmasq-dns-6c46756b57-z2p86" Mar 12 21:26:43.983044 master-0 kubenswrapper[31456]: I0312 
21:26:43.981073 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt2ls\" (UniqueName: \"kubernetes.io/projected/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae-kube-api-access-qt2ls\") pod \"dnsmasq-dns-6c46756b57-z2p86\" (UID: \"d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae\") " pod="openstack/dnsmasq-dns-6c46756b57-z2p86" Mar 12 21:26:43.983044 master-0 kubenswrapper[31456]: I0312 21:26:43.981111 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae-ovsdbserver-nb\") pod \"dnsmasq-dns-6c46756b57-z2p86\" (UID: \"d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae\") " pod="openstack/dnsmasq-dns-6c46756b57-z2p86" Mar 12 21:26:43.983044 master-0 kubenswrapper[31456]: I0312 21:26:43.981171 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae-dns-svc\") pod \"dnsmasq-dns-6c46756b57-z2p86\" (UID: \"d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae\") " pod="openstack/dnsmasq-dns-6c46756b57-z2p86" Mar 12 21:26:43.983044 master-0 kubenswrapper[31456]: I0312 21:26:43.981246 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae-dns-swift-storage-0\") pod \"dnsmasq-dns-6c46756b57-z2p86\" (UID: \"d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae\") " pod="openstack/dnsmasq-dns-6c46756b57-z2p86" Mar 12 21:26:43.985281 master-0 kubenswrapper[31456]: I0312 21:26:43.984735 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae-dns-swift-storage-0\") pod \"dnsmasq-dns-6c46756b57-z2p86\" (UID: \"d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae\") " pod="openstack/dnsmasq-dns-6c46756b57-z2p86" Mar 12 21:26:43.989140 master-0 
kubenswrapper[31456]: I0312 21:26:43.985565 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae-config\") pod \"dnsmasq-dns-6c46756b57-z2p86\" (UID: \"d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae\") " pod="openstack/dnsmasq-dns-6c46756b57-z2p86" Mar 12 21:26:43.989140 master-0 kubenswrapper[31456]: I0312 21:26:43.985778 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae-ovsdbserver-nb\") pod \"dnsmasq-dns-6c46756b57-z2p86\" (UID: \"d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae\") " pod="openstack/dnsmasq-dns-6c46756b57-z2p86" Mar 12 21:26:43.989140 master-0 kubenswrapper[31456]: I0312 21:26:43.985969 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae-dns-svc\") pod \"dnsmasq-dns-6c46756b57-z2p86\" (UID: \"d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae\") " pod="openstack/dnsmasq-dns-6c46756b57-z2p86" Mar 12 21:26:43.989140 master-0 kubenswrapper[31456]: I0312 21:26:43.986331 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae-ovsdbserver-sb\") pod \"dnsmasq-dns-6c46756b57-z2p86\" (UID: \"d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae\") " pod="openstack/dnsmasq-dns-6c46756b57-z2p86" Mar 12 21:26:44.021698 master-0 kubenswrapper[31456]: I0312 21:26:44.000968 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-e3e1-account-create-update-d66hf" Mar 12 21:26:44.021698 master-0 kubenswrapper[31456]: I0312 21:26:44.009783 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt2ls\" (UniqueName: \"kubernetes.io/projected/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae-kube-api-access-qt2ls\") pod \"dnsmasq-dns-6c46756b57-z2p86\" (UID: \"d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae\") " pod="openstack/dnsmasq-dns-6c46756b57-z2p86" Mar 12 21:26:44.086930 master-0 kubenswrapper[31456]: I0312 21:26:44.086175 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da04713b-ad0b-4167-8fd7-59bbf482eff1-scripts\") pod \"ironic-6fd7f8b47c-vnhs9\" (UID: \"da04713b-ad0b-4167-8fd7-59bbf482eff1\") " pod="openstack/ironic-6fd7f8b47c-vnhs9" Mar 12 21:26:44.086930 master-0 kubenswrapper[31456]: I0312 21:26:44.086230 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/da04713b-ad0b-4167-8fd7-59bbf482eff1-etc-podinfo\") pod \"ironic-6fd7f8b47c-vnhs9\" (UID: \"da04713b-ad0b-4167-8fd7-59bbf482eff1\") " pod="openstack/ironic-6fd7f8b47c-vnhs9" Mar 12 21:26:44.086930 master-0 kubenswrapper[31456]: I0312 21:26:44.086250 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da04713b-ad0b-4167-8fd7-59bbf482eff1-config-data-custom\") pod \"ironic-6fd7f8b47c-vnhs9\" (UID: \"da04713b-ad0b-4167-8fd7-59bbf482eff1\") " pod="openstack/ironic-6fd7f8b47c-vnhs9" Mar 12 21:26:44.086930 master-0 kubenswrapper[31456]: I0312 21:26:44.086305 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbkb4\" (UniqueName: 
\"kubernetes.io/projected/da04713b-ad0b-4167-8fd7-59bbf482eff1-kube-api-access-dbkb4\") pod \"ironic-6fd7f8b47c-vnhs9\" (UID: \"da04713b-ad0b-4167-8fd7-59bbf482eff1\") " pod="openstack/ironic-6fd7f8b47c-vnhs9" Mar 12 21:26:44.086930 master-0 kubenswrapper[31456]: I0312 21:26:44.086346 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da04713b-ad0b-4167-8fd7-59bbf482eff1-config-data\") pod \"ironic-6fd7f8b47c-vnhs9\" (UID: \"da04713b-ad0b-4167-8fd7-59bbf482eff1\") " pod="openstack/ironic-6fd7f8b47c-vnhs9" Mar 12 21:26:44.086930 master-0 kubenswrapper[31456]: I0312 21:26:44.086787 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da04713b-ad0b-4167-8fd7-59bbf482eff1-combined-ca-bundle\") pod \"ironic-6fd7f8b47c-vnhs9\" (UID: \"da04713b-ad0b-4167-8fd7-59bbf482eff1\") " pod="openstack/ironic-6fd7f8b47c-vnhs9" Mar 12 21:26:44.087351 master-0 kubenswrapper[31456]: I0312 21:26:44.087268 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/da04713b-ad0b-4167-8fd7-59bbf482eff1-config-data-merged\") pod \"ironic-6fd7f8b47c-vnhs9\" (UID: \"da04713b-ad0b-4167-8fd7-59bbf482eff1\") " pod="openstack/ironic-6fd7f8b47c-vnhs9" Mar 12 21:26:44.087351 master-0 kubenswrapper[31456]: I0312 21:26:44.087342 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da04713b-ad0b-4167-8fd7-59bbf482eff1-logs\") pod \"ironic-6fd7f8b47c-vnhs9\" (UID: \"da04713b-ad0b-4167-8fd7-59bbf482eff1\") " pod="openstack/ironic-6fd7f8b47c-vnhs9" Mar 12 21:26:44.189618 master-0 kubenswrapper[31456]: I0312 21:26:44.189449 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/da04713b-ad0b-4167-8fd7-59bbf482eff1-scripts\") pod \"ironic-6fd7f8b47c-vnhs9\" (UID: \"da04713b-ad0b-4167-8fd7-59bbf482eff1\") " pod="openstack/ironic-6fd7f8b47c-vnhs9" Mar 12 21:26:44.195826 master-0 kubenswrapper[31456]: I0312 21:26:44.192141 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/da04713b-ad0b-4167-8fd7-59bbf482eff1-etc-podinfo\") pod \"ironic-6fd7f8b47c-vnhs9\" (UID: \"da04713b-ad0b-4167-8fd7-59bbf482eff1\") " pod="openstack/ironic-6fd7f8b47c-vnhs9" Mar 12 21:26:44.195826 master-0 kubenswrapper[31456]: I0312 21:26:44.192230 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da04713b-ad0b-4167-8fd7-59bbf482eff1-config-data-custom\") pod \"ironic-6fd7f8b47c-vnhs9\" (UID: \"da04713b-ad0b-4167-8fd7-59bbf482eff1\") " pod="openstack/ironic-6fd7f8b47c-vnhs9" Mar 12 21:26:44.262176 master-0 kubenswrapper[31456]: I0312 21:26:44.198653 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/da04713b-ad0b-4167-8fd7-59bbf482eff1-etc-podinfo\") pod \"ironic-6fd7f8b47c-vnhs9\" (UID: \"da04713b-ad0b-4167-8fd7-59bbf482eff1\") " pod="openstack/ironic-6fd7f8b47c-vnhs9" Mar 12 21:26:44.262176 master-0 kubenswrapper[31456]: I0312 21:26:44.199098 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da04713b-ad0b-4167-8fd7-59bbf482eff1-scripts\") pod \"ironic-6fd7f8b47c-vnhs9\" (UID: \"da04713b-ad0b-4167-8fd7-59bbf482eff1\") " pod="openstack/ironic-6fd7f8b47c-vnhs9" Mar 12 21:26:44.262176 master-0 kubenswrapper[31456]: I0312 21:26:44.209178 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/da04713b-ad0b-4167-8fd7-59bbf482eff1-config-data-custom\") pod \"ironic-6fd7f8b47c-vnhs9\" (UID: \"da04713b-ad0b-4167-8fd7-59bbf482eff1\") " pod="openstack/ironic-6fd7f8b47c-vnhs9" Mar 12 21:26:44.262176 master-0 kubenswrapper[31456]: I0312 21:26:44.242539 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbkb4\" (UniqueName: \"kubernetes.io/projected/da04713b-ad0b-4167-8fd7-59bbf482eff1-kube-api-access-dbkb4\") pod \"ironic-6fd7f8b47c-vnhs9\" (UID: \"da04713b-ad0b-4167-8fd7-59bbf482eff1\") " pod="openstack/ironic-6fd7f8b47c-vnhs9" Mar 12 21:26:44.262176 master-0 kubenswrapper[31456]: I0312 21:26:44.242684 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da04713b-ad0b-4167-8fd7-59bbf482eff1-config-data\") pod \"ironic-6fd7f8b47c-vnhs9\" (UID: \"da04713b-ad0b-4167-8fd7-59bbf482eff1\") " pod="openstack/ironic-6fd7f8b47c-vnhs9" Mar 12 21:26:44.262176 master-0 kubenswrapper[31456]: I0312 21:26:44.242775 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da04713b-ad0b-4167-8fd7-59bbf482eff1-combined-ca-bundle\") pod \"ironic-6fd7f8b47c-vnhs9\" (UID: \"da04713b-ad0b-4167-8fd7-59bbf482eff1\") " pod="openstack/ironic-6fd7f8b47c-vnhs9" Mar 12 21:26:44.262176 master-0 kubenswrapper[31456]: I0312 21:26:44.248639 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/da04713b-ad0b-4167-8fd7-59bbf482eff1-config-data-merged\") pod \"ironic-6fd7f8b47c-vnhs9\" (UID: \"da04713b-ad0b-4167-8fd7-59bbf482eff1\") " pod="openstack/ironic-6fd7f8b47c-vnhs9" Mar 12 21:26:44.262176 master-0 kubenswrapper[31456]: I0312 21:26:44.248729 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/da04713b-ad0b-4167-8fd7-59bbf482eff1-logs\") pod \"ironic-6fd7f8b47c-vnhs9\" (UID: \"da04713b-ad0b-4167-8fd7-59bbf482eff1\") " pod="openstack/ironic-6fd7f8b47c-vnhs9" Mar 12 21:26:44.262176 master-0 kubenswrapper[31456]: I0312 21:26:44.249934 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da04713b-ad0b-4167-8fd7-59bbf482eff1-logs\") pod \"ironic-6fd7f8b47c-vnhs9\" (UID: \"da04713b-ad0b-4167-8fd7-59bbf482eff1\") " pod="openstack/ironic-6fd7f8b47c-vnhs9" Mar 12 21:26:44.262176 master-0 kubenswrapper[31456]: I0312 21:26:44.253434 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/da04713b-ad0b-4167-8fd7-59bbf482eff1-config-data-merged\") pod \"ironic-6fd7f8b47c-vnhs9\" (UID: \"da04713b-ad0b-4167-8fd7-59bbf482eff1\") " pod="openstack/ironic-6fd7f8b47c-vnhs9" Mar 12 21:26:44.274749 master-0 kubenswrapper[31456]: I0312 21:26:44.264410 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da04713b-ad0b-4167-8fd7-59bbf482eff1-config-data\") pod \"ironic-6fd7f8b47c-vnhs9\" (UID: \"da04713b-ad0b-4167-8fd7-59bbf482eff1\") " pod="openstack/ironic-6fd7f8b47c-vnhs9" Mar 12 21:26:44.274749 master-0 kubenswrapper[31456]: I0312 21:26:44.267431 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da04713b-ad0b-4167-8fd7-59bbf482eff1-combined-ca-bundle\") pod \"ironic-6fd7f8b47c-vnhs9\" (UID: \"da04713b-ad0b-4167-8fd7-59bbf482eff1\") " pod="openstack/ironic-6fd7f8b47c-vnhs9" Mar 12 21:26:44.274749 master-0 kubenswrapper[31456]: I0312 21:26:44.273349 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c46756b57-z2p86" Mar 12 21:26:44.328476 master-0 kubenswrapper[31456]: I0312 21:26:44.328096 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbkb4\" (UniqueName: \"kubernetes.io/projected/da04713b-ad0b-4167-8fd7-59bbf482eff1-kube-api-access-dbkb4\") pod \"ironic-6fd7f8b47c-vnhs9\" (UID: \"da04713b-ad0b-4167-8fd7-59bbf482eff1\") " pod="openstack/ironic-6fd7f8b47c-vnhs9" Mar 12 21:26:44.581304 master-0 kubenswrapper[31456]: I0312 21:26:44.574366 31456 generic.go:334] "Generic (PLEG): container finished" podID="938fd693-cfad-4dfe-910d-4d5425053d75" containerID="fbe423a38874ec471a4945c741e197952789ea2318f72b7e77874dfe79f6dd8d" exitCode=0 Mar 12 21:26:44.581304 master-0 kubenswrapper[31456]: I0312 21:26:44.574503 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-669f6b88bf-rkg8p" event={"ID":"938fd693-cfad-4dfe-910d-4d5425053d75","Type":"ContainerDied","Data":"fbe423a38874ec471a4945c741e197952789ea2318f72b7e77874dfe79f6dd8d"} Mar 12 21:26:44.597894 master-0 kubenswrapper[31456]: I0312 21:26:44.585940 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-6fd7f8b47c-vnhs9" Mar 12 21:26:44.597894 master-0 kubenswrapper[31456]: I0312 21:26:44.590457 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-api-0" event={"ID":"ae2814de-f43e-4dac-a9bd-54349d25a331","Type":"ContainerStarted","Data":"86d1ce542dc7a3475d59f13d8818a44c0b47c465059f48b8a6c55177fdffe86d"} Mar 12 21:26:44.981853 master-0 kubenswrapper[31456]: I0312 21:26:44.980505 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-669f6b88bf-rkg8p" Mar 12 21:26:45.198749 master-0 kubenswrapper[31456]: I0312 21:26:45.198688 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/938fd693-cfad-4dfe-910d-4d5425053d75-dns-svc\") pod \"938fd693-cfad-4dfe-910d-4d5425053d75\" (UID: \"938fd693-cfad-4dfe-910d-4d5425053d75\") " Mar 12 21:26:45.198972 master-0 kubenswrapper[31456]: I0312 21:26:45.198787 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/938fd693-cfad-4dfe-910d-4d5425053d75-config\") pod \"938fd693-cfad-4dfe-910d-4d5425053d75\" (UID: \"938fd693-cfad-4dfe-910d-4d5425053d75\") " Mar 12 21:26:45.198972 master-0 kubenswrapper[31456]: I0312 21:26:45.198867 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/938fd693-cfad-4dfe-910d-4d5425053d75-ovsdbserver-sb\") pod \"938fd693-cfad-4dfe-910d-4d5425053d75\" (UID: \"938fd693-cfad-4dfe-910d-4d5425053d75\") " Mar 12 21:26:45.198972 master-0 kubenswrapper[31456]: I0312 21:26:45.198948 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/938fd693-cfad-4dfe-910d-4d5425053d75-dns-swift-storage-0\") pod \"938fd693-cfad-4dfe-910d-4d5425053d75\" (UID: \"938fd693-cfad-4dfe-910d-4d5425053d75\") " Mar 12 21:26:45.199069 master-0 kubenswrapper[31456]: I0312 21:26:45.198987 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/938fd693-cfad-4dfe-910d-4d5425053d75-ovsdbserver-nb\") pod \"938fd693-cfad-4dfe-910d-4d5425053d75\" (UID: \"938fd693-cfad-4dfe-910d-4d5425053d75\") " Mar 12 21:26:45.199069 master-0 kubenswrapper[31456]: I0312 21:26:45.199035 31456 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-tzqx5\" (UniqueName: \"kubernetes.io/projected/938fd693-cfad-4dfe-910d-4d5425053d75-kube-api-access-tzqx5\") pod \"938fd693-cfad-4dfe-910d-4d5425053d75\" (UID: \"938fd693-cfad-4dfe-910d-4d5425053d75\") " Mar 12 21:26:45.205926 master-0 kubenswrapper[31456]: I0312 21:26:45.204964 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-e3e1-account-create-update-d66hf"] Mar 12 21:26:45.217034 master-0 kubenswrapper[31456]: I0312 21:26:45.216002 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-68659c9b47-m44wq"] Mar 12 21:26:45.220570 master-0 kubenswrapper[31456]: I0312 21:26:45.220502 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/938fd693-cfad-4dfe-910d-4d5425053d75-kube-api-access-tzqx5" (OuterVolumeSpecName: "kube-api-access-tzqx5") pod "938fd693-cfad-4dfe-910d-4d5425053d75" (UID: "938fd693-cfad-4dfe-910d-4d5425053d75"). InnerVolumeSpecName "kube-api-access-tzqx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:26:45.225273 master-0 kubenswrapper[31456]: I0312 21:26:45.225187 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-jsnft"] Mar 12 21:26:45.272316 master-0 kubenswrapper[31456]: I0312 21:26:45.272258 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c46756b57-z2p86"] Mar 12 21:26:45.284237 master-0 kubenswrapper[31456]: I0312 21:26:45.284175 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/938fd693-cfad-4dfe-910d-4d5425053d75-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "938fd693-cfad-4dfe-910d-4d5425053d75" (UID: "938fd693-cfad-4dfe-910d-4d5425053d75"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:26:45.300182 master-0 kubenswrapper[31456]: I0312 21:26:45.300117 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/938fd693-cfad-4dfe-910d-4d5425053d75-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "938fd693-cfad-4dfe-910d-4d5425053d75" (UID: "938fd693-cfad-4dfe-910d-4d5425053d75"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:26:45.300913 master-0 kubenswrapper[31456]: I0312 21:26:45.300862 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/938fd693-cfad-4dfe-910d-4d5425053d75-config" (OuterVolumeSpecName: "config") pod "938fd693-cfad-4dfe-910d-4d5425053d75" (UID: "938fd693-cfad-4dfe-910d-4d5425053d75"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:26:45.303573 master-0 kubenswrapper[31456]: I0312 21:26:45.303536 31456 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/938fd693-cfad-4dfe-910d-4d5425053d75-config\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:45.303755 master-0 kubenswrapper[31456]: I0312 21:26:45.303576 31456 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/938fd693-cfad-4dfe-910d-4d5425053d75-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:45.303755 master-0 kubenswrapper[31456]: I0312 21:26:45.303590 31456 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/938fd693-cfad-4dfe-910d-4d5425053d75-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:45.303755 master-0 kubenswrapper[31456]: I0312 21:26:45.303602 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzqx5\" (UniqueName: 
\"kubernetes.io/projected/938fd693-cfad-4dfe-910d-4d5425053d75-kube-api-access-tzqx5\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:45.308435 master-0 kubenswrapper[31456]: I0312 21:26:45.308405 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/938fd693-cfad-4dfe-910d-4d5425053d75-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "938fd693-cfad-4dfe-910d-4d5425053d75" (UID: "938fd693-cfad-4dfe-910d-4d5425053d75"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:26:45.362616 master-0 kubenswrapper[31456]: I0312 21:26:45.361153 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/938fd693-cfad-4dfe-910d-4d5425053d75-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "938fd693-cfad-4dfe-910d-4d5425053d75" (UID: "938fd693-cfad-4dfe-910d-4d5425053d75"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:26:45.418007 master-0 kubenswrapper[31456]: I0312 21:26:45.417442 31456 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/938fd693-cfad-4dfe-910d-4d5425053d75-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:45.418007 master-0 kubenswrapper[31456]: I0312 21:26:45.417483 31456 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/938fd693-cfad-4dfe-910d-4d5425053d75-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 12 21:26:45.476899 master-0 kubenswrapper[31456]: I0312 21:26:45.475560 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-c76b45676-rfhd9" Mar 12 21:26:45.491493 master-0 kubenswrapper[31456]: I0312 21:26:45.490777 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-6fd7f8b47c-vnhs9"] Mar 12 21:26:45.510486 master-0 
kubenswrapper[31456]: I0312 21:26:45.507428 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-conductor-0"] Mar 12 21:26:45.510486 master-0 kubenswrapper[31456]: E0312 21:26:45.508026 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="938fd693-cfad-4dfe-910d-4d5425053d75" containerName="dnsmasq-dns" Mar 12 21:26:45.510486 master-0 kubenswrapper[31456]: I0312 21:26:45.508041 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="938fd693-cfad-4dfe-910d-4d5425053d75" containerName="dnsmasq-dns" Mar 12 21:26:45.510486 master-0 kubenswrapper[31456]: E0312 21:26:45.508088 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="938fd693-cfad-4dfe-910d-4d5425053d75" containerName="init" Mar 12 21:26:45.510486 master-0 kubenswrapper[31456]: I0312 21:26:45.508096 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="938fd693-cfad-4dfe-910d-4d5425053d75" containerName="init" Mar 12 21:26:45.510486 master-0 kubenswrapper[31456]: I0312 21:26:45.508391 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="938fd693-cfad-4dfe-910d-4d5425053d75" containerName="dnsmasq-dns" Mar 12 21:26:45.518378 master-0 kubenswrapper[31456]: I0312 21:26:45.515706 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-conductor-0" Mar 12 21:26:45.519045 master-0 kubenswrapper[31456]: I0312 21:26:45.519009 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-c76b45676-rfhd9" Mar 12 21:26:45.532240 master-0 kubenswrapper[31456]: I0312 21:26:45.526096 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-config-data" Mar 12 21:26:45.532240 master-0 kubenswrapper[31456]: I0312 21:26:45.526301 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-scripts" Mar 12 21:26:45.582056 master-0 kubenswrapper[31456]: I0312 21:26:45.581928 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Mar 12 21:26:45.647981 master-0 kubenswrapper[31456]: I0312 21:26:45.644085 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c46756b57-z2p86" event={"ID":"d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae","Type":"ContainerStarted","Data":"e85ae1ed526b685e6e5451b54776776c6e33212ab70c05267fde72f65c7ce10b"} Mar 12 21:26:45.647981 master-0 kubenswrapper[31456]: I0312 21:26:45.646058 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-jsnft" event={"ID":"cc0e046b-34a2-4a0f-a4e6-87aad153b7a1","Type":"ContainerStarted","Data":"01ce2d9b8ce2e894f6a1b5bd7a28133b9eca397878990b027639852ec3595aec"} Mar 12 21:26:45.660818 master-0 kubenswrapper[31456]: I0312 21:26:45.660766 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93110548-5710-4149-bd72-8e42693c948e-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"93110548-5710-4149-bd72-8e42693c948e\") " pod="openstack/ironic-conductor-0" Mar 12 21:26:45.660997 master-0 kubenswrapper[31456]: I0312 21:26:45.660950 31456 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/93110548-5710-4149-bd72-8e42693c948e-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"93110548-5710-4149-bd72-8e42693c948e\") " pod="openstack/ironic-conductor-0" Mar 12 21:26:45.661055 master-0 kubenswrapper[31456]: I0312 21:26:45.661036 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-222ww\" (UniqueName: \"kubernetes.io/projected/93110548-5710-4149-bd72-8e42693c948e-kube-api-access-222ww\") pod \"ironic-conductor-0\" (UID: \"93110548-5710-4149-bd72-8e42693c948e\") " pod="openstack/ironic-conductor-0" Mar 12 21:26:45.661125 master-0 kubenswrapper[31456]: I0312 21:26:45.661101 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/93110548-5710-4149-bd72-8e42693c948e-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"93110548-5710-4149-bd72-8e42693c948e\") " pod="openstack/ironic-conductor-0" Mar 12 21:26:45.661164 master-0 kubenswrapper[31456]: I0312 21:26:45.661148 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93110548-5710-4149-bd72-8e42693c948e-scripts\") pod \"ironic-conductor-0\" (UID: \"93110548-5710-4149-bd72-8e42693c948e\") " pod="openstack/ironic-conductor-0" Mar 12 21:26:45.661271 master-0 kubenswrapper[31456]: I0312 21:26:45.661248 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93110548-5710-4149-bd72-8e42693c948e-config-data\") pod \"ironic-conductor-0\" (UID: \"93110548-5710-4149-bd72-8e42693c948e\") " pod="openstack/ironic-conductor-0" Mar 12 21:26:45.661364 master-0 kubenswrapper[31456]: I0312 21:26:45.661345 31456 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-675ddf08-6034-42a7-9d5b-aadbd19aa3e1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^9d82d9f8-3b7a-44b1-a4bb-640fedb78418\") pod \"ironic-conductor-0\" (UID: \"93110548-5710-4149-bd72-8e42693c948e\") " pod="openstack/ironic-conductor-0" Mar 12 21:26:45.661408 master-0 kubenswrapper[31456]: I0312 21:26:45.661392 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93110548-5710-4149-bd72-8e42693c948e-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"93110548-5710-4149-bd72-8e42693c948e\") " pod="openstack/ironic-conductor-0" Mar 12 21:26:45.670325 master-0 kubenswrapper[31456]: I0312 21:26:45.669478 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-669f6b88bf-rkg8p" event={"ID":"938fd693-cfad-4dfe-910d-4d5425053d75","Type":"ContainerDied","Data":"c1a7d2deee438e52dc6c52919259adc891211c863e11b57cc4a816f3d125f0d0"} Mar 12 21:26:45.670325 master-0 kubenswrapper[31456]: I0312 21:26:45.669532 31456 scope.go:117] "RemoveContainer" containerID="fbe423a38874ec471a4945c741e197952789ea2318f72b7e77874dfe79f6dd8d" Mar 12 21:26:45.670325 master-0 kubenswrapper[31456]: I0312 21:26:45.669664 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-669f6b88bf-rkg8p" Mar 12 21:26:45.676029 master-0 kubenswrapper[31456]: I0312 21:26:45.673487 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-68659c9b47-m44wq" event={"ID":"33f0319b-6d84-4282-bbb5-9636e1b62647","Type":"ContainerStarted","Data":"b887657d5fca820f4f1c707b7ee756d6a80426bdea592d6b39c442f30c874ca8"} Mar 12 21:26:45.679954 master-0 kubenswrapper[31456]: I0312 21:26:45.679906 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6fd7f8b47c-vnhs9" event={"ID":"da04713b-ad0b-4167-8fd7-59bbf482eff1","Type":"ContainerStarted","Data":"32d84c0efd3ee96984444b6c7ec0c6dc3cc3e498eb21ba4dcc512c83b39d1d14"} Mar 12 21:26:45.684035 master-0 kubenswrapper[31456]: I0312 21:26:45.683993 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-e3e1-account-create-update-d66hf" event={"ID":"c6ae05fd-97f9-4b9b-8067-70ef070e1de7","Type":"ContainerStarted","Data":"f8c02ab5b346d42ab7d34276152741bf5bed8434fa3b2e23ed9105e48750c5f8"} Mar 12 21:26:45.684153 master-0 kubenswrapper[31456]: I0312 21:26:45.684045 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-e3e1-account-create-update-d66hf" event={"ID":"c6ae05fd-97f9-4b9b-8067-70ef070e1de7","Type":"ContainerStarted","Data":"e4813c1a9708213c794bffa5f7e6b4f97097234920fc4ef545cf724c16e6d2a4"} Mar 12 21:26:45.708374 master-0 kubenswrapper[31456]: I0312 21:26:45.701375 31456 scope.go:117] "RemoveContainer" containerID="8ea33eca9a4603aa9f6b064070b24c9fbee7c4fd1328df37567b4921c8b51a7e" Mar 12 21:26:45.731224 master-0 kubenswrapper[31456]: I0312 21:26:45.731112 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-e3e1-account-create-update-d66hf" podStartSLOduration=2.731088906 podStartE2EDuration="2.731088906s" podCreationTimestamp="2026-03-12 21:26:43 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:26:45.702891122 +0000 UTC m=+1066.777496450" watchObservedRunningTime="2026-03-12 21:26:45.731088906 +0000 UTC m=+1066.805694234" Mar 12 21:26:45.764811 master-0 kubenswrapper[31456]: I0312 21:26:45.764627 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/93110548-5710-4149-bd72-8e42693c948e-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"93110548-5710-4149-bd72-8e42693c948e\") " pod="openstack/ironic-conductor-0" Mar 12 21:26:45.765519 master-0 kubenswrapper[31456]: I0312 21:26:45.764706 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-222ww\" (UniqueName: \"kubernetes.io/projected/93110548-5710-4149-bd72-8e42693c948e-kube-api-access-222ww\") pod \"ironic-conductor-0\" (UID: \"93110548-5710-4149-bd72-8e42693c948e\") " pod="openstack/ironic-conductor-0" Mar 12 21:26:45.765519 master-0 kubenswrapper[31456]: I0312 21:26:45.765342 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/93110548-5710-4149-bd72-8e42693c948e-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"93110548-5710-4149-bd72-8e42693c948e\") " pod="openstack/ironic-conductor-0" Mar 12 21:26:45.765519 master-0 kubenswrapper[31456]: I0312 21:26:45.765378 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93110548-5710-4149-bd72-8e42693c948e-scripts\") pod \"ironic-conductor-0\" (UID: \"93110548-5710-4149-bd72-8e42693c948e\") " pod="openstack/ironic-conductor-0" Mar 12 21:26:45.765519 master-0 kubenswrapper[31456]: I0312 21:26:45.765439 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/93110548-5710-4149-bd72-8e42693c948e-config-data\") pod \"ironic-conductor-0\" (UID: \"93110548-5710-4149-bd72-8e42693c948e\") " pod="openstack/ironic-conductor-0" Mar 12 21:26:45.765519 master-0 kubenswrapper[31456]: I0312 21:26:45.765501 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-675ddf08-6034-42a7-9d5b-aadbd19aa3e1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^9d82d9f8-3b7a-44b1-a4bb-640fedb78418\") pod \"ironic-conductor-0\" (UID: \"93110548-5710-4149-bd72-8e42693c948e\") " pod="openstack/ironic-conductor-0" Mar 12 21:26:45.765791 master-0 kubenswrapper[31456]: I0312 21:26:45.765611 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93110548-5710-4149-bd72-8e42693c948e-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"93110548-5710-4149-bd72-8e42693c948e\") " pod="openstack/ironic-conductor-0" Mar 12 21:26:45.765791 master-0 kubenswrapper[31456]: I0312 21:26:45.765707 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93110548-5710-4149-bd72-8e42693c948e-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"93110548-5710-4149-bd72-8e42693c948e\") " pod="openstack/ironic-conductor-0" Mar 12 21:26:45.781346 master-0 kubenswrapper[31456]: I0312 21:26:45.780975 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/93110548-5710-4149-bd72-8e42693c948e-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"93110548-5710-4149-bd72-8e42693c948e\") " pod="openstack/ironic-conductor-0" Mar 12 21:26:45.787902 master-0 kubenswrapper[31456]: I0312 21:26:45.783954 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/93110548-5710-4149-bd72-8e42693c948e-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"93110548-5710-4149-bd72-8e42693c948e\") " pod="openstack/ironic-conductor-0" Mar 12 21:26:45.787902 master-0 kubenswrapper[31456]: I0312 21:26:45.784900 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93110548-5710-4149-bd72-8e42693c948e-scripts\") pod \"ironic-conductor-0\" (UID: \"93110548-5710-4149-bd72-8e42693c948e\") " pod="openstack/ironic-conductor-0" Mar 12 21:26:45.787902 master-0 kubenswrapper[31456]: I0312 21:26:45.785956 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93110548-5710-4149-bd72-8e42693c948e-config-data\") pod \"ironic-conductor-0\" (UID: \"93110548-5710-4149-bd72-8e42693c948e\") " pod="openstack/ironic-conductor-0" Mar 12 21:26:45.788621 master-0 kubenswrapper[31456]: I0312 21:26:45.788582 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/93110548-5710-4149-bd72-8e42693c948e-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"93110548-5710-4149-bd72-8e42693c948e\") " pod="openstack/ironic-conductor-0" Mar 12 21:26:45.788884 master-0 kubenswrapper[31456]: I0312 21:26:45.788838 31456 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 21:26:45.788944 master-0 kubenswrapper[31456]: I0312 21:26:45.788901 31456 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-675ddf08-6034-42a7-9d5b-aadbd19aa3e1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^9d82d9f8-3b7a-44b1-a4bb-640fedb78418\") pod \"ironic-conductor-0\" (UID: \"93110548-5710-4149-bd72-8e42693c948e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/c9f357d8937d74e03bc8c1d3541a577be2383d8e07509deaa2ed381d6124d0bc/globalmount\"" pod="openstack/ironic-conductor-0" Mar 12 21:26:45.812795 master-0 kubenswrapper[31456]: I0312 21:26:45.808556 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93110548-5710-4149-bd72-8e42693c948e-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"93110548-5710-4149-bd72-8e42693c948e\") " pod="openstack/ironic-conductor-0" Mar 12 21:26:45.839852 master-0 kubenswrapper[31456]: I0312 21:26:45.836504 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-222ww\" (UniqueName: \"kubernetes.io/projected/93110548-5710-4149-bd72-8e42693c948e-kube-api-access-222ww\") pod \"ironic-conductor-0\" (UID: \"93110548-5710-4149-bd72-8e42693c948e\") " pod="openstack/ironic-conductor-0" Mar 12 21:26:45.883875 master-0 kubenswrapper[31456]: I0312 21:26:45.880483 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7df6b6dd9d-tfn65"] Mar 12 21:26:45.883875 master-0 kubenswrapper[31456]: I0312 21:26:45.882676 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7df6b6dd9d-tfn65" Mar 12 21:26:45.910449 master-0 kubenswrapper[31456]: I0312 21:26:45.910387 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7df6b6dd9d-tfn65"] Mar 12 21:26:45.975832 master-0 kubenswrapper[31456]: I0312 21:26:45.971926 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04f5ee7a-2895-4ca8-b99b-c0b6c8699050-internal-tls-certs\") pod \"placement-7df6b6dd9d-tfn65\" (UID: \"04f5ee7a-2895-4ca8-b99b-c0b6c8699050\") " pod="openstack/placement-7df6b6dd9d-tfn65" Mar 12 21:26:45.975832 master-0 kubenswrapper[31456]: I0312 21:26:45.974225 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04f5ee7a-2895-4ca8-b99b-c0b6c8699050-config-data\") pod \"placement-7df6b6dd9d-tfn65\" (UID: \"04f5ee7a-2895-4ca8-b99b-c0b6c8699050\") " pod="openstack/placement-7df6b6dd9d-tfn65" Mar 12 21:26:45.975832 master-0 kubenswrapper[31456]: I0312 21:26:45.974348 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxtcg\" (UniqueName: \"kubernetes.io/projected/04f5ee7a-2895-4ca8-b99b-c0b6c8699050-kube-api-access-zxtcg\") pod \"placement-7df6b6dd9d-tfn65\" (UID: \"04f5ee7a-2895-4ca8-b99b-c0b6c8699050\") " pod="openstack/placement-7df6b6dd9d-tfn65" Mar 12 21:26:45.975832 master-0 kubenswrapper[31456]: I0312 21:26:45.974395 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04f5ee7a-2895-4ca8-b99b-c0b6c8699050-logs\") pod \"placement-7df6b6dd9d-tfn65\" (UID: \"04f5ee7a-2895-4ca8-b99b-c0b6c8699050\") " pod="openstack/placement-7df6b6dd9d-tfn65" Mar 12 21:26:45.975832 master-0 kubenswrapper[31456]: I0312 21:26:45.974509 31456 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04f5ee7a-2895-4ca8-b99b-c0b6c8699050-scripts\") pod \"placement-7df6b6dd9d-tfn65\" (UID: \"04f5ee7a-2895-4ca8-b99b-c0b6c8699050\") " pod="openstack/placement-7df6b6dd9d-tfn65" Mar 12 21:26:45.975832 master-0 kubenswrapper[31456]: I0312 21:26:45.974558 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04f5ee7a-2895-4ca8-b99b-c0b6c8699050-combined-ca-bundle\") pod \"placement-7df6b6dd9d-tfn65\" (UID: \"04f5ee7a-2895-4ca8-b99b-c0b6c8699050\") " pod="openstack/placement-7df6b6dd9d-tfn65" Mar 12 21:26:45.975832 master-0 kubenswrapper[31456]: I0312 21:26:45.974611 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04f5ee7a-2895-4ca8-b99b-c0b6c8699050-public-tls-certs\") pod \"placement-7df6b6dd9d-tfn65\" (UID: \"04f5ee7a-2895-4ca8-b99b-c0b6c8699050\") " pod="openstack/placement-7df6b6dd9d-tfn65" Mar 12 21:26:45.976910 master-0 kubenswrapper[31456]: I0312 21:26:45.976740 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-669f6b88bf-rkg8p"] Mar 12 21:26:46.016872 master-0 kubenswrapper[31456]: I0312 21:26:46.003695 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-669f6b88bf-rkg8p"] Mar 12 21:26:46.077243 master-0 kubenswrapper[31456]: I0312 21:26:46.077198 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04f5ee7a-2895-4ca8-b99b-c0b6c8699050-internal-tls-certs\") pod \"placement-7df6b6dd9d-tfn65\" (UID: \"04f5ee7a-2895-4ca8-b99b-c0b6c8699050\") " pod="openstack/placement-7df6b6dd9d-tfn65" Mar 12 21:26:46.077458 master-0 kubenswrapper[31456]: I0312 21:26:46.077347 
31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04f5ee7a-2895-4ca8-b99b-c0b6c8699050-config-data\") pod \"placement-7df6b6dd9d-tfn65\" (UID: \"04f5ee7a-2895-4ca8-b99b-c0b6c8699050\") " pod="openstack/placement-7df6b6dd9d-tfn65" Mar 12 21:26:46.077458 master-0 kubenswrapper[31456]: I0312 21:26:46.077389 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxtcg\" (UniqueName: \"kubernetes.io/projected/04f5ee7a-2895-4ca8-b99b-c0b6c8699050-kube-api-access-zxtcg\") pod \"placement-7df6b6dd9d-tfn65\" (UID: \"04f5ee7a-2895-4ca8-b99b-c0b6c8699050\") " pod="openstack/placement-7df6b6dd9d-tfn65" Mar 12 21:26:46.077458 master-0 kubenswrapper[31456]: I0312 21:26:46.077412 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04f5ee7a-2895-4ca8-b99b-c0b6c8699050-logs\") pod \"placement-7df6b6dd9d-tfn65\" (UID: \"04f5ee7a-2895-4ca8-b99b-c0b6c8699050\") " pod="openstack/placement-7df6b6dd9d-tfn65" Mar 12 21:26:46.077569 master-0 kubenswrapper[31456]: I0312 21:26:46.077462 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04f5ee7a-2895-4ca8-b99b-c0b6c8699050-scripts\") pod \"placement-7df6b6dd9d-tfn65\" (UID: \"04f5ee7a-2895-4ca8-b99b-c0b6c8699050\") " pod="openstack/placement-7df6b6dd9d-tfn65" Mar 12 21:26:46.077569 master-0 kubenswrapper[31456]: I0312 21:26:46.077502 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04f5ee7a-2895-4ca8-b99b-c0b6c8699050-combined-ca-bundle\") pod \"placement-7df6b6dd9d-tfn65\" (UID: \"04f5ee7a-2895-4ca8-b99b-c0b6c8699050\") " pod="openstack/placement-7df6b6dd9d-tfn65" Mar 12 21:26:46.077569 master-0 kubenswrapper[31456]: I0312 21:26:46.077534 31456 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04f5ee7a-2895-4ca8-b99b-c0b6c8699050-public-tls-certs\") pod \"placement-7df6b6dd9d-tfn65\" (UID: \"04f5ee7a-2895-4ca8-b99b-c0b6c8699050\") " pod="openstack/placement-7df6b6dd9d-tfn65" Mar 12 21:26:46.083678 master-0 kubenswrapper[31456]: I0312 21:26:46.083497 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04f5ee7a-2895-4ca8-b99b-c0b6c8699050-logs\") pod \"placement-7df6b6dd9d-tfn65\" (UID: \"04f5ee7a-2895-4ca8-b99b-c0b6c8699050\") " pod="openstack/placement-7df6b6dd9d-tfn65" Mar 12 21:26:46.085730 master-0 kubenswrapper[31456]: I0312 21:26:46.085508 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04f5ee7a-2895-4ca8-b99b-c0b6c8699050-public-tls-certs\") pod \"placement-7df6b6dd9d-tfn65\" (UID: \"04f5ee7a-2895-4ca8-b99b-c0b6c8699050\") " pod="openstack/placement-7df6b6dd9d-tfn65" Mar 12 21:26:46.086111 master-0 kubenswrapper[31456]: I0312 21:26:46.086069 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04f5ee7a-2895-4ca8-b99b-c0b6c8699050-internal-tls-certs\") pod \"placement-7df6b6dd9d-tfn65\" (UID: \"04f5ee7a-2895-4ca8-b99b-c0b6c8699050\") " pod="openstack/placement-7df6b6dd9d-tfn65" Mar 12 21:26:46.086925 master-0 kubenswrapper[31456]: I0312 21:26:46.086903 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04f5ee7a-2895-4ca8-b99b-c0b6c8699050-scripts\") pod \"placement-7df6b6dd9d-tfn65\" (UID: \"04f5ee7a-2895-4ca8-b99b-c0b6c8699050\") " pod="openstack/placement-7df6b6dd9d-tfn65" Mar 12 21:26:46.104607 master-0 kubenswrapper[31456]: I0312 21:26:46.104558 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/04f5ee7a-2895-4ca8-b99b-c0b6c8699050-config-data\") pod \"placement-7df6b6dd9d-tfn65\" (UID: \"04f5ee7a-2895-4ca8-b99b-c0b6c8699050\") " pod="openstack/placement-7df6b6dd9d-tfn65" Mar 12 21:26:46.104876 master-0 kubenswrapper[31456]: I0312 21:26:46.104702 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxtcg\" (UniqueName: \"kubernetes.io/projected/04f5ee7a-2895-4ca8-b99b-c0b6c8699050-kube-api-access-zxtcg\") pod \"placement-7df6b6dd9d-tfn65\" (UID: \"04f5ee7a-2895-4ca8-b99b-c0b6c8699050\") " pod="openstack/placement-7df6b6dd9d-tfn65" Mar 12 21:26:46.105594 master-0 kubenswrapper[31456]: I0312 21:26:46.105554 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04f5ee7a-2895-4ca8-b99b-c0b6c8699050-combined-ca-bundle\") pod \"placement-7df6b6dd9d-tfn65\" (UID: \"04f5ee7a-2895-4ca8-b99b-c0b6c8699050\") " pod="openstack/placement-7df6b6dd9d-tfn65" Mar 12 21:26:46.191095 master-0 kubenswrapper[31456]: I0312 21:26:46.176858 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:46.191095 master-0 kubenswrapper[31456]: I0312 21:26:46.181887 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-7fa7f-scheduler-0" Mar 12 21:26:46.228998 master-0 kubenswrapper[31456]: I0312 21:26:46.228922 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:46.259931 master-0 kubenswrapper[31456]: I0312 21:26:46.257913 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-7fa7f-volume-lvm-iscsi-0"] Mar 12 21:26:46.263500 master-0 kubenswrapper[31456]: I0312 21:26:46.260871 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7df6b6dd9d-tfn65" Mar 12 21:26:46.295812 master-0 kubenswrapper[31456]: I0312 21:26:46.294088 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-7fa7f-scheduler-0"] Mar 12 21:26:46.329792 master-0 kubenswrapper[31456]: I0312 21:26:46.329728 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-7fa7f-backup-0"] Mar 12 21:26:46.699128 master-0 kubenswrapper[31456]: I0312 21:26:46.698984 31456 generic.go:334] "Generic (PLEG): container finished" podID="cc0e046b-34a2-4a0f-a4e6-87aad153b7a1" containerID="4defcd3fd5f7b93764da52cd4bee54a841f27ee6fafc8396e44b153b39041892" exitCode=0 Mar 12 21:26:46.699598 master-0 kubenswrapper[31456]: I0312 21:26:46.699086 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-jsnft" event={"ID":"cc0e046b-34a2-4a0f-a4e6-87aad153b7a1","Type":"ContainerDied","Data":"4defcd3fd5f7b93764da52cd4bee54a841f27ee6fafc8396e44b153b39041892"} Mar 12 21:26:46.706783 master-0 kubenswrapper[31456]: I0312 21:26:46.704006 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-api-0" event={"ID":"ae2814de-f43e-4dac-a9bd-54349d25a331","Type":"ContainerStarted","Data":"2c2c82a77a244220a5f75621fb75c1b7d2fb4284531cf6b5301cd510a3fa5b28"} Mar 12 21:26:46.706783 master-0 kubenswrapper[31456]: I0312 21:26:46.704185 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-7fa7f-api-0" Mar 12 21:26:46.706783 master-0 kubenswrapper[31456]: I0312 21:26:46.705796 31456 generic.go:334] "Generic (PLEG): container finished" podID="c6ae05fd-97f9-4b9b-8067-70ef070e1de7" containerID="f8c02ab5b346d42ab7d34276152741bf5bed8434fa3b2e23ed9105e48750c5f8" exitCode=0 Mar 12 21:26:46.706783 master-0 kubenswrapper[31456]: I0312 21:26:46.705912 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-e3e1-account-create-update-d66hf" 
event={"ID":"c6ae05fd-97f9-4b9b-8067-70ef070e1de7","Type":"ContainerDied","Data":"f8c02ab5b346d42ab7d34276152741bf5bed8434fa3b2e23ed9105e48750c5f8"} Mar 12 21:26:46.708681 master-0 kubenswrapper[31456]: I0312 21:26:46.708611 31456 generic.go:334] "Generic (PLEG): container finished" podID="d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae" containerID="e3c575ccab1a93d6beb8416023aed45152836bf62bb3f42180c73f5efca884c2" exitCode=0 Mar 12 21:26:46.708681 master-0 kubenswrapper[31456]: I0312 21:26:46.708647 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c46756b57-z2p86" event={"ID":"d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae","Type":"ContainerDied","Data":"e3c575ccab1a93d6beb8416023aed45152836bf62bb3f42180c73f5efca884c2"} Mar 12 21:26:46.708859 master-0 kubenswrapper[31456]: I0312 21:26:46.708813 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-7fa7f-scheduler-0" podUID="8a2f5eb4-3eff-4449-829b-2701ab9b6965" containerName="cinder-scheduler" containerID="cri-o://75983916a5cc174bede5f0b3476a439a89ccff686eea72bfa84975a55e6386d7" gracePeriod=30 Mar 12 21:26:46.709000 master-0 kubenswrapper[31456]: I0312 21:26:46.708922 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-7fa7f-scheduler-0" podUID="8a2f5eb4-3eff-4449-829b-2701ab9b6965" containerName="probe" containerID="cri-o://44889ab62c63c88e5177207d476d928aaaaa3af9df39f77d51329fcbe6d62289" gracePeriod=30 Mar 12 21:26:46.710549 master-0 kubenswrapper[31456]: I0312 21:26:46.709042 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-7fa7f-backup-0" podUID="30465684-0661-4306-8903-d8aa99f95fd7" containerName="cinder-backup" containerID="cri-o://c57024656546ec8e36c2613e9b153874dade0ea43e1d084b92484464205d1a1b" gracePeriod=30 Mar 12 21:26:46.710549 master-0 kubenswrapper[31456]: I0312 21:26:46.709190 31456 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/cinder-7fa7f-backup-0" podUID="30465684-0661-4306-8903-d8aa99f95fd7" containerName="probe" containerID="cri-o://99e1a7f7eb742af34c9dc5d5601c8e98d7b3792e2ab3e49ce401e0f211575ebe" gracePeriod=30 Mar 12 21:26:46.710549 master-0 kubenswrapper[31456]: I0312 21:26:46.709277 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" podUID="87e93241-daea-4fbc-b947-8edb8b8ea521" containerName="cinder-volume" containerID="cri-o://3e73dd87325fd97b92f555444b1fbf4163313351b2fc93de5220677674539714" gracePeriod=30 Mar 12 21:26:46.710549 master-0 kubenswrapper[31456]: I0312 21:26:46.709299 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" podUID="87e93241-daea-4fbc-b947-8edb8b8ea521" containerName="probe" containerID="cri-o://787e2b678b94d4b263056b4730a580a0edfede9ddde73ec39dd298914e699c9d" gracePeriod=30 Mar 12 21:26:46.802040 master-0 kubenswrapper[31456]: I0312 21:26:46.793250 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-7fa7f-api-0" podStartSLOduration=5.793226922 podStartE2EDuration="5.793226922s" podCreationTimestamp="2026-03-12 21:26:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:26:46.777232905 +0000 UTC m=+1067.851838253" watchObservedRunningTime="2026-03-12 21:26:46.793226922 +0000 UTC m=+1067.867832250" Mar 12 21:26:46.896743 master-0 kubenswrapper[31456]: I0312 21:26:46.895329 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7df6b6dd9d-tfn65"] Mar 12 21:26:47.121772 master-0 kubenswrapper[31456]: I0312 21:26:47.121557 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-b47877c79-c5fvh"] Mar 12 21:26:47.130449 master-0 kubenswrapper[31456]: I0312 21:26:47.129410 31456 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/ironic-b47877c79-c5fvh" Mar 12 21:26:47.132264 master-0 kubenswrapper[31456]: I0312 21:26:47.132094 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-public-svc" Mar 12 21:26:47.133916 master-0 kubenswrapper[31456]: I0312 21:26:47.132968 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-internal-svc" Mar 12 21:26:47.207979 master-0 kubenswrapper[31456]: I0312 21:26:47.206784 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="938fd693-cfad-4dfe-910d-4d5425053d75" path="/var/lib/kubelet/pods/938fd693-cfad-4dfe-910d-4d5425053d75/volumes" Mar 12 21:26:47.207979 master-0 kubenswrapper[31456]: I0312 21:26:47.207478 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-b47877c79-c5fvh"] Mar 12 21:26:47.251351 master-0 kubenswrapper[31456]: I0312 21:26:47.249706 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6d868331-ae79-4015-8f7b-c0aed1d33312-config-data-custom\") pod \"ironic-b47877c79-c5fvh\" (UID: \"6d868331-ae79-4015-8f7b-c0aed1d33312\") " pod="openstack/ironic-b47877c79-c5fvh" Mar 12 21:26:47.251351 master-0 kubenswrapper[31456]: I0312 21:26:47.249826 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d868331-ae79-4015-8f7b-c0aed1d33312-logs\") pod \"ironic-b47877c79-c5fvh\" (UID: \"6d868331-ae79-4015-8f7b-c0aed1d33312\") " pod="openstack/ironic-b47877c79-c5fvh" Mar 12 21:26:47.251351 master-0 kubenswrapper[31456]: I0312 21:26:47.250074 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/6d868331-ae79-4015-8f7b-c0aed1d33312-config-data-merged\") pod 
\"ironic-b47877c79-c5fvh\" (UID: \"6d868331-ae79-4015-8f7b-c0aed1d33312\") " pod="openstack/ironic-b47877c79-c5fvh" Mar 12 21:26:47.251351 master-0 kubenswrapper[31456]: I0312 21:26:47.250232 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/6d868331-ae79-4015-8f7b-c0aed1d33312-etc-podinfo\") pod \"ironic-b47877c79-c5fvh\" (UID: \"6d868331-ae79-4015-8f7b-c0aed1d33312\") " pod="openstack/ironic-b47877c79-c5fvh" Mar 12 21:26:47.251351 master-0 kubenswrapper[31456]: I0312 21:26:47.250343 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d868331-ae79-4015-8f7b-c0aed1d33312-combined-ca-bundle\") pod \"ironic-b47877c79-c5fvh\" (UID: \"6d868331-ae79-4015-8f7b-c0aed1d33312\") " pod="openstack/ironic-b47877c79-c5fvh" Mar 12 21:26:47.251351 master-0 kubenswrapper[31456]: I0312 21:26:47.250367 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d868331-ae79-4015-8f7b-c0aed1d33312-scripts\") pod \"ironic-b47877c79-c5fvh\" (UID: \"6d868331-ae79-4015-8f7b-c0aed1d33312\") " pod="openstack/ironic-b47877c79-c5fvh" Mar 12 21:26:47.251351 master-0 kubenswrapper[31456]: I0312 21:26:47.250383 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d868331-ae79-4015-8f7b-c0aed1d33312-config-data\") pod \"ironic-b47877c79-c5fvh\" (UID: \"6d868331-ae79-4015-8f7b-c0aed1d33312\") " pod="openstack/ironic-b47877c79-c5fvh" Mar 12 21:26:47.251351 master-0 kubenswrapper[31456]: I0312 21:26:47.250484 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wvkf\" (UniqueName: 
\"kubernetes.io/projected/6d868331-ae79-4015-8f7b-c0aed1d33312-kube-api-access-2wvkf\") pod \"ironic-b47877c79-c5fvh\" (UID: \"6d868331-ae79-4015-8f7b-c0aed1d33312\") " pod="openstack/ironic-b47877c79-c5fvh" Mar 12 21:26:47.251351 master-0 kubenswrapper[31456]: I0312 21:26:47.250527 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d868331-ae79-4015-8f7b-c0aed1d33312-public-tls-certs\") pod \"ironic-b47877c79-c5fvh\" (UID: \"6d868331-ae79-4015-8f7b-c0aed1d33312\") " pod="openstack/ironic-b47877c79-c5fvh" Mar 12 21:26:47.251351 master-0 kubenswrapper[31456]: I0312 21:26:47.250557 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d868331-ae79-4015-8f7b-c0aed1d33312-internal-tls-certs\") pod \"ironic-b47877c79-c5fvh\" (UID: \"6d868331-ae79-4015-8f7b-c0aed1d33312\") " pod="openstack/ironic-b47877c79-c5fvh" Mar 12 21:26:47.303940 master-0 kubenswrapper[31456]: I0312 21:26:47.297845 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-675ddf08-6034-42a7-9d5b-aadbd19aa3e1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^9d82d9f8-3b7a-44b1-a4bb-640fedb78418\") pod \"ironic-conductor-0\" (UID: \"93110548-5710-4149-bd72-8e42693c948e\") " pod="openstack/ironic-conductor-0" Mar 12 21:26:47.357637 master-0 kubenswrapper[31456]: I0312 21:26:47.357574 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6d868331-ae79-4015-8f7b-c0aed1d33312-config-data-custom\") pod \"ironic-b47877c79-c5fvh\" (UID: \"6d868331-ae79-4015-8f7b-c0aed1d33312\") " pod="openstack/ironic-b47877c79-c5fvh" Mar 12 21:26:47.358006 master-0 kubenswrapper[31456]: I0312 21:26:47.357947 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/6d868331-ae79-4015-8f7b-c0aed1d33312-logs\") pod \"ironic-b47877c79-c5fvh\" (UID: \"6d868331-ae79-4015-8f7b-c0aed1d33312\") " pod="openstack/ironic-b47877c79-c5fvh" Mar 12 21:26:47.358213 master-0 kubenswrapper[31456]: I0312 21:26:47.358188 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/6d868331-ae79-4015-8f7b-c0aed1d33312-config-data-merged\") pod \"ironic-b47877c79-c5fvh\" (UID: \"6d868331-ae79-4015-8f7b-c0aed1d33312\") " pod="openstack/ironic-b47877c79-c5fvh" Mar 12 21:26:47.358344 master-0 kubenswrapper[31456]: I0312 21:26:47.358324 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/6d868331-ae79-4015-8f7b-c0aed1d33312-etc-podinfo\") pod \"ironic-b47877c79-c5fvh\" (UID: \"6d868331-ae79-4015-8f7b-c0aed1d33312\") " pod="openstack/ironic-b47877c79-c5fvh" Mar 12 21:26:47.358454 master-0 kubenswrapper[31456]: I0312 21:26:47.358425 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d868331-ae79-4015-8f7b-c0aed1d33312-combined-ca-bundle\") pod \"ironic-b47877c79-c5fvh\" (UID: \"6d868331-ae79-4015-8f7b-c0aed1d33312\") " pod="openstack/ironic-b47877c79-c5fvh" Mar 12 21:26:47.358454 master-0 kubenswrapper[31456]: I0312 21:26:47.358451 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d868331-ae79-4015-8f7b-c0aed1d33312-scripts\") pod \"ironic-b47877c79-c5fvh\" (UID: \"6d868331-ae79-4015-8f7b-c0aed1d33312\") " pod="openstack/ironic-b47877c79-c5fvh" Mar 12 21:26:47.358528 master-0 kubenswrapper[31456]: I0312 21:26:47.358466 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6d868331-ae79-4015-8f7b-c0aed1d33312-config-data\") pod \"ironic-b47877c79-c5fvh\" (UID: \"6d868331-ae79-4015-8f7b-c0aed1d33312\") " pod="openstack/ironic-b47877c79-c5fvh" Mar 12 21:26:47.358583 master-0 kubenswrapper[31456]: I0312 21:26:47.358564 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wvkf\" (UniqueName: \"kubernetes.io/projected/6d868331-ae79-4015-8f7b-c0aed1d33312-kube-api-access-2wvkf\") pod \"ironic-b47877c79-c5fvh\" (UID: \"6d868331-ae79-4015-8f7b-c0aed1d33312\") " pod="openstack/ironic-b47877c79-c5fvh" Mar 12 21:26:47.358623 master-0 kubenswrapper[31456]: I0312 21:26:47.358615 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d868331-ae79-4015-8f7b-c0aed1d33312-public-tls-certs\") pod \"ironic-b47877c79-c5fvh\" (UID: \"6d868331-ae79-4015-8f7b-c0aed1d33312\") " pod="openstack/ironic-b47877c79-c5fvh" Mar 12 21:26:47.358655 master-0 kubenswrapper[31456]: I0312 21:26:47.358646 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d868331-ae79-4015-8f7b-c0aed1d33312-internal-tls-certs\") pod \"ironic-b47877c79-c5fvh\" (UID: \"6d868331-ae79-4015-8f7b-c0aed1d33312\") " pod="openstack/ironic-b47877c79-c5fvh" Mar 12 21:26:47.358906 master-0 kubenswrapper[31456]: I0312 21:26:47.358841 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d868331-ae79-4015-8f7b-c0aed1d33312-logs\") pod \"ironic-b47877c79-c5fvh\" (UID: \"6d868331-ae79-4015-8f7b-c0aed1d33312\") " pod="openstack/ironic-b47877c79-c5fvh" Mar 12 21:26:47.359209 master-0 kubenswrapper[31456]: I0312 21:26:47.359183 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/6d868331-ae79-4015-8f7b-c0aed1d33312-config-data-merged\") pod \"ironic-b47877c79-c5fvh\" (UID: \"6d868331-ae79-4015-8f7b-c0aed1d33312\") " pod="openstack/ironic-b47877c79-c5fvh" Mar 12 21:26:47.362024 master-0 kubenswrapper[31456]: I0312 21:26:47.361988 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d868331-ae79-4015-8f7b-c0aed1d33312-combined-ca-bundle\") pod \"ironic-b47877c79-c5fvh\" (UID: \"6d868331-ae79-4015-8f7b-c0aed1d33312\") " pod="openstack/ironic-b47877c79-c5fvh" Mar 12 21:26:47.362414 master-0 kubenswrapper[31456]: I0312 21:26:47.362365 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/6d868331-ae79-4015-8f7b-c0aed1d33312-etc-podinfo\") pod \"ironic-b47877c79-c5fvh\" (UID: \"6d868331-ae79-4015-8f7b-c0aed1d33312\") " pod="openstack/ironic-b47877c79-c5fvh" Mar 12 21:26:47.363388 master-0 kubenswrapper[31456]: I0312 21:26:47.363347 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d868331-ae79-4015-8f7b-c0aed1d33312-internal-tls-certs\") pod \"ironic-b47877c79-c5fvh\" (UID: \"6d868331-ae79-4015-8f7b-c0aed1d33312\") " pod="openstack/ironic-b47877c79-c5fvh" Mar 12 21:26:47.363707 master-0 kubenswrapper[31456]: I0312 21:26:47.363685 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d868331-ae79-4015-8f7b-c0aed1d33312-config-data\") pod \"ironic-b47877c79-c5fvh\" (UID: \"6d868331-ae79-4015-8f7b-c0aed1d33312\") " pod="openstack/ironic-b47877c79-c5fvh" Mar 12 21:26:47.363855 master-0 kubenswrapper[31456]: I0312 21:26:47.363788 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d868331-ae79-4015-8f7b-c0aed1d33312-scripts\") pod 
\"ironic-b47877c79-c5fvh\" (UID: \"6d868331-ae79-4015-8f7b-c0aed1d33312\") " pod="openstack/ironic-b47877c79-c5fvh" Mar 12 21:26:47.365356 master-0 kubenswrapper[31456]: I0312 21:26:47.365321 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6d868331-ae79-4015-8f7b-c0aed1d33312-config-data-custom\") pod \"ironic-b47877c79-c5fvh\" (UID: \"6d868331-ae79-4015-8f7b-c0aed1d33312\") " pod="openstack/ironic-b47877c79-c5fvh" Mar 12 21:26:47.367427 master-0 kubenswrapper[31456]: I0312 21:26:47.367379 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d868331-ae79-4015-8f7b-c0aed1d33312-public-tls-certs\") pod \"ironic-b47877c79-c5fvh\" (UID: \"6d868331-ae79-4015-8f7b-c0aed1d33312\") " pod="openstack/ironic-b47877c79-c5fvh" Mar 12 21:26:47.378131 master-0 kubenswrapper[31456]: I0312 21:26:47.378074 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wvkf\" (UniqueName: \"kubernetes.io/projected/6d868331-ae79-4015-8f7b-c0aed1d33312-kube-api-access-2wvkf\") pod \"ironic-b47877c79-c5fvh\" (UID: \"6d868331-ae79-4015-8f7b-c0aed1d33312\") " pod="openstack/ironic-b47877c79-c5fvh" Mar 12 21:26:47.412438 master-0 kubenswrapper[31456]: I0312 21:26:47.412110 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-conductor-0" Mar 12 21:26:47.459895 master-0 kubenswrapper[31456]: I0312 21:26:47.459821 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-b47877c79-c5fvh" Mar 12 21:26:47.563793 master-0 kubenswrapper[31456]: W0312 21:26:47.563730 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04f5ee7a_2895_4ca8_b99b_c0b6c8699050.slice/crio-9e8535663f7daa348a3727ca772e161a7b12aea8c4d156e2de3eb62f167313a5 WatchSource:0}: Error finding container 9e8535663f7daa348a3727ca772e161a7b12aea8c4d156e2de3eb62f167313a5: Status 404 returned error can't find the container with id 9e8535663f7daa348a3727ca772e161a7b12aea8c4d156e2de3eb62f167313a5 Mar 12 21:26:47.738481 master-0 kubenswrapper[31456]: I0312 21:26:47.738428 31456 generic.go:334] "Generic (PLEG): container finished" podID="8a2f5eb4-3eff-4449-829b-2701ab9b6965" containerID="44889ab62c63c88e5177207d476d928aaaaa3af9df39f77d51329fcbe6d62289" exitCode=0 Mar 12 21:26:47.739013 master-0 kubenswrapper[31456]: I0312 21:26:47.738497 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-scheduler-0" event={"ID":"8a2f5eb4-3eff-4449-829b-2701ab9b6965","Type":"ContainerDied","Data":"44889ab62c63c88e5177207d476d928aaaaa3af9df39f77d51329fcbe6d62289"} Mar 12 21:26:47.739889 master-0 kubenswrapper[31456]: I0312 21:26:47.739858 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c46756b57-z2p86" event={"ID":"d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae","Type":"ContainerStarted","Data":"9789fac5fae792aebde470636b7a48ac38828a65457cc09dab808d0326628d9a"} Mar 12 21:26:47.740281 master-0 kubenswrapper[31456]: I0312 21:26:47.740247 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c46756b57-z2p86" Mar 12 21:26:47.744685 master-0 kubenswrapper[31456]: I0312 21:26:47.744635 31456 generic.go:334] "Generic (PLEG): container finished" podID="30465684-0661-4306-8903-d8aa99f95fd7" containerID="99e1a7f7eb742af34c9dc5d5601c8e98d7b3792e2ab3e49ce401e0f211575ebe" 
exitCode=0 Mar 12 21:26:47.744685 master-0 kubenswrapper[31456]: I0312 21:26:47.744672 31456 generic.go:334] "Generic (PLEG): container finished" podID="30465684-0661-4306-8903-d8aa99f95fd7" containerID="c57024656546ec8e36c2613e9b153874dade0ea43e1d084b92484464205d1a1b" exitCode=0 Mar 12 21:26:47.744872 master-0 kubenswrapper[31456]: I0312 21:26:47.744714 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-backup-0" event={"ID":"30465684-0661-4306-8903-d8aa99f95fd7","Type":"ContainerDied","Data":"99e1a7f7eb742af34c9dc5d5601c8e98d7b3792e2ab3e49ce401e0f211575ebe"} Mar 12 21:26:47.744872 master-0 kubenswrapper[31456]: I0312 21:26:47.744739 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-backup-0" event={"ID":"30465684-0661-4306-8903-d8aa99f95fd7","Type":"ContainerDied","Data":"c57024656546ec8e36c2613e9b153874dade0ea43e1d084b92484464205d1a1b"} Mar 12 21:26:47.746424 master-0 kubenswrapper[31456]: I0312 21:26:47.746385 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7df6b6dd9d-tfn65" event={"ID":"04f5ee7a-2895-4ca8-b99b-c0b6c8699050","Type":"ContainerStarted","Data":"9e8535663f7daa348a3727ca772e161a7b12aea8c4d156e2de3eb62f167313a5"} Mar 12 21:26:47.748380 master-0 kubenswrapper[31456]: I0312 21:26:47.748344 31456 generic.go:334] "Generic (PLEG): container finished" podID="87e93241-daea-4fbc-b947-8edb8b8ea521" containerID="787e2b678b94d4b263056b4730a580a0edfede9ddde73ec39dd298914e699c9d" exitCode=0 Mar 12 21:26:47.748380 master-0 kubenswrapper[31456]: I0312 21:26:47.748367 31456 generic.go:334] "Generic (PLEG): container finished" podID="87e93241-daea-4fbc-b947-8edb8b8ea521" containerID="3e73dd87325fd97b92f555444b1fbf4163313351b2fc93de5220677674539714" exitCode=0 Mar 12 21:26:47.751984 master-0 kubenswrapper[31456]: I0312 21:26:47.751699 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" 
event={"ID":"87e93241-daea-4fbc-b947-8edb8b8ea521","Type":"ContainerDied","Data":"787e2b678b94d4b263056b4730a580a0edfede9ddde73ec39dd298914e699c9d"} Mar 12 21:26:47.751984 master-0 kubenswrapper[31456]: I0312 21:26:47.751750 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" event={"ID":"87e93241-daea-4fbc-b947-8edb8b8ea521","Type":"ContainerDied","Data":"3e73dd87325fd97b92f555444b1fbf4163313351b2fc93de5220677674539714"} Mar 12 21:26:47.771631 master-0 kubenswrapper[31456]: I0312 21:26:47.771553 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c46756b57-z2p86" podStartSLOduration=4.771527 podStartE2EDuration="4.771527s" podCreationTimestamp="2026-03-12 21:26:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:26:47.759058688 +0000 UTC m=+1068.833664036" watchObservedRunningTime="2026-03-12 21:26:47.771527 +0000 UTC m=+1068.846132328" Mar 12 21:26:48.762024 master-0 kubenswrapper[31456]: I0312 21:26:48.761922 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-jsnft" event={"ID":"cc0e046b-34a2-4a0f-a4e6-87aad153b7a1","Type":"ContainerDied","Data":"01ce2d9b8ce2e894f6a1b5bd7a28133b9eca397878990b027639852ec3595aec"} Mar 12 21:26:48.762024 master-0 kubenswrapper[31456]: I0312 21:26:48.761974 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01ce2d9b8ce2e894f6a1b5bd7a28133b9eca397878990b027639852ec3595aec" Mar 12 21:26:48.764302 master-0 kubenswrapper[31456]: I0312 21:26:48.764241 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-e3e1-account-create-update-d66hf" event={"ID":"c6ae05fd-97f9-4b9b-8067-70ef070e1de7","Type":"ContainerDied","Data":"e4813c1a9708213c794bffa5f7e6b4f97097234920fc4ef545cf724c16e6d2a4"} Mar 12 21:26:48.764302 master-0 
kubenswrapper[31456]: I0312 21:26:48.764262 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4813c1a9708213c794bffa5f7e6b4f97097234920fc4ef545cf724c16e6d2a4" Mar 12 21:26:48.771748 master-0 kubenswrapper[31456]: I0312 21:26:48.770316 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-backup-0" event={"ID":"30465684-0661-4306-8903-d8aa99f95fd7","Type":"ContainerDied","Data":"3e15259a2dd1c79bed7d4844947ed4242703f3ed22a6415878015704b0e3287d"} Mar 12 21:26:48.771748 master-0 kubenswrapper[31456]: I0312 21:26:48.770352 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e15259a2dd1c79bed7d4844947ed4242703f3ed22a6415878015704b0e3287d" Mar 12 21:26:48.980593 master-0 kubenswrapper[31456]: I0312 21:26:48.980555 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:49.066769 master-0 kubenswrapper[31456]: I0312 21:26:49.062785 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-e3e1-account-create-update-d66hf" Mar 12 21:26:49.080491 master-0 kubenswrapper[31456]: I0312 21:26:49.080413 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-jsnft" Mar 12 21:26:49.095885 master-0 kubenswrapper[31456]: I0312 21:26:49.095779 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:49.109400 master-0 kubenswrapper[31456]: I0312 21:26:49.108913 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-etc-machine-id\") pod \"30465684-0661-4306-8903-d8aa99f95fd7\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " Mar 12 21:26:49.109400 master-0 kubenswrapper[31456]: I0312 21:26:49.109027 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-etc-nvme\") pod \"30465684-0661-4306-8903-d8aa99f95fd7\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " Mar 12 21:26:49.109400 master-0 kubenswrapper[31456]: I0312 21:26:49.109078 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30465684-0661-4306-8903-d8aa99f95fd7-scripts\") pod \"30465684-0661-4306-8903-d8aa99f95fd7\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " Mar 12 21:26:49.109400 master-0 kubenswrapper[31456]: I0312 21:26:49.109110 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6ae05fd-97f9-4b9b-8067-70ef070e1de7-operator-scripts\") pod \"c6ae05fd-97f9-4b9b-8067-70ef070e1de7\" (UID: \"c6ae05fd-97f9-4b9b-8067-70ef070e1de7\") " Mar 12 21:26:49.109400 master-0 kubenswrapper[31456]: I0312 21:26:49.109136 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-dev\") pod \"30465684-0661-4306-8903-d8aa99f95fd7\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " Mar 12 21:26:49.109400 master-0 kubenswrapper[31456]: I0312 21:26:49.109159 31456 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-var-locks-brick\") pod \"30465684-0661-4306-8903-d8aa99f95fd7\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " Mar 12 21:26:49.109400 master-0 kubenswrapper[31456]: I0312 21:26:49.109199 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30465684-0661-4306-8903-d8aa99f95fd7-config-data\") pod \"30465684-0661-4306-8903-d8aa99f95fd7\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " Mar 12 21:26:49.109400 master-0 kubenswrapper[31456]: I0312 21:26:49.109281 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlvvx\" (UniqueName: \"kubernetes.io/projected/c6ae05fd-97f9-4b9b-8067-70ef070e1de7-kube-api-access-xlvvx\") pod \"c6ae05fd-97f9-4b9b-8067-70ef070e1de7\" (UID: \"c6ae05fd-97f9-4b9b-8067-70ef070e1de7\") " Mar 12 21:26:49.109400 master-0 kubenswrapper[31456]: I0312 21:26:49.109310 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-run\") pod \"30465684-0661-4306-8903-d8aa99f95fd7\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " Mar 12 21:26:49.109400 master-0 kubenswrapper[31456]: I0312 21:26:49.109326 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-var-locks-cinder\") pod \"30465684-0661-4306-8903-d8aa99f95fd7\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " Mar 12 21:26:49.109400 master-0 kubenswrapper[31456]: I0312 21:26:49.109371 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-etc-iscsi\") pod 
\"30465684-0661-4306-8903-d8aa99f95fd7\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " Mar 12 21:26:49.109400 master-0 kubenswrapper[31456]: I0312 21:26:49.109406 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30465684-0661-4306-8903-d8aa99f95fd7-combined-ca-bundle\") pod \"30465684-0661-4306-8903-d8aa99f95fd7\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " Mar 12 21:26:49.110384 master-0 kubenswrapper[31456]: I0312 21:26:49.109438 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-lib-modules\") pod \"30465684-0661-4306-8903-d8aa99f95fd7\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " Mar 12 21:26:49.110384 master-0 kubenswrapper[31456]: I0312 21:26:49.109463 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-sys\") pod \"30465684-0661-4306-8903-d8aa99f95fd7\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " Mar 12 21:26:49.110384 master-0 kubenswrapper[31456]: I0312 21:26:49.109497 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68vtz\" (UniqueName: \"kubernetes.io/projected/30465684-0661-4306-8903-d8aa99f95fd7-kube-api-access-68vtz\") pod \"30465684-0661-4306-8903-d8aa99f95fd7\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " Mar 12 21:26:49.110384 master-0 kubenswrapper[31456]: I0312 21:26:49.109513 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-var-lib-cinder\") pod \"30465684-0661-4306-8903-d8aa99f95fd7\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " Mar 12 21:26:49.110384 master-0 kubenswrapper[31456]: I0312 21:26:49.109538 
31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30465684-0661-4306-8903-d8aa99f95fd7-config-data-custom\") pod \"30465684-0661-4306-8903-d8aa99f95fd7\" (UID: \"30465684-0661-4306-8903-d8aa99f95fd7\") " Mar 12 21:26:49.117726 master-0 kubenswrapper[31456]: I0312 21:26:49.115684 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "30465684-0661-4306-8903-d8aa99f95fd7" (UID: "30465684-0661-4306-8903-d8aa99f95fd7"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:26:49.117726 master-0 kubenswrapper[31456]: I0312 21:26:49.115747 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "30465684-0661-4306-8903-d8aa99f95fd7" (UID: "30465684-0661-4306-8903-d8aa99f95fd7"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:26:49.117726 master-0 kubenswrapper[31456]: I0312 21:26:49.115776 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "30465684-0661-4306-8903-d8aa99f95fd7" (UID: "30465684-0661-4306-8903-d8aa99f95fd7"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:26:49.117726 master-0 kubenswrapper[31456]: I0312 21:26:49.115796 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-dev" (OuterVolumeSpecName: "dev") pod "30465684-0661-4306-8903-d8aa99f95fd7" (UID: "30465684-0661-4306-8903-d8aa99f95fd7"). 
InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:26:49.117726 master-0 kubenswrapper[31456]: I0312 21:26:49.116038 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "30465684-0661-4306-8903-d8aa99f95fd7" (UID: "30465684-0661-4306-8903-d8aa99f95fd7"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:26:49.117726 master-0 kubenswrapper[31456]: I0312 21:26:49.116127 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-run" (OuterVolumeSpecName: "run") pod "30465684-0661-4306-8903-d8aa99f95fd7" (UID: "30465684-0661-4306-8903-d8aa99f95fd7"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:26:49.117726 master-0 kubenswrapper[31456]: I0312 21:26:49.116188 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-sys" (OuterVolumeSpecName: "sys") pod "30465684-0661-4306-8903-d8aa99f95fd7" (UID: "30465684-0661-4306-8903-d8aa99f95fd7"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:26:49.117726 master-0 kubenswrapper[31456]: I0312 21:26:49.116227 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "30465684-0661-4306-8903-d8aa99f95fd7" (UID: "30465684-0661-4306-8903-d8aa99f95fd7"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:26:49.117726 master-0 kubenswrapper[31456]: I0312 21:26:49.116654 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6ae05fd-97f9-4b9b-8067-70ef070e1de7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c6ae05fd-97f9-4b9b-8067-70ef070e1de7" (UID: "c6ae05fd-97f9-4b9b-8067-70ef070e1de7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:26:49.117726 master-0 kubenswrapper[31456]: I0312 21:26:49.116742 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "30465684-0661-4306-8903-d8aa99f95fd7" (UID: "30465684-0661-4306-8903-d8aa99f95fd7"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:26:49.117726 master-0 kubenswrapper[31456]: I0312 21:26:49.116978 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "30465684-0661-4306-8903-d8aa99f95fd7" (UID: "30465684-0661-4306-8903-d8aa99f95fd7"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:26:49.117726 master-0 kubenswrapper[31456]: I0312 21:26:49.117252 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6ae05fd-97f9-4b9b-8067-70ef070e1de7-kube-api-access-xlvvx" (OuterVolumeSpecName: "kube-api-access-xlvvx") pod "c6ae05fd-97f9-4b9b-8067-70ef070e1de7" (UID: "c6ae05fd-97f9-4b9b-8067-70ef070e1de7"). InnerVolumeSpecName "kube-api-access-xlvvx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:26:49.125489 master-0 kubenswrapper[31456]: I0312 21:26:49.125443 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30465684-0661-4306-8903-d8aa99f95fd7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "30465684-0661-4306-8903-d8aa99f95fd7" (UID: "30465684-0661-4306-8903-d8aa99f95fd7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:26:49.127332 master-0 kubenswrapper[31456]: I0312 21:26:49.127234 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30465684-0661-4306-8903-d8aa99f95fd7-kube-api-access-68vtz" (OuterVolumeSpecName: "kube-api-access-68vtz") pod "30465684-0661-4306-8903-d8aa99f95fd7" (UID: "30465684-0661-4306-8903-d8aa99f95fd7"). InnerVolumeSpecName "kube-api-access-68vtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:26:49.132620 master-0 kubenswrapper[31456]: I0312 21:26:49.132521 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30465684-0661-4306-8903-d8aa99f95fd7-scripts" (OuterVolumeSpecName: "scripts") pod "30465684-0661-4306-8903-d8aa99f95fd7" (UID: "30465684-0661-4306-8903-d8aa99f95fd7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:26:49.218325 master-0 kubenswrapper[31456]: I0312 21:26:49.218281 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87e93241-daea-4fbc-b947-8edb8b8ea521-config-data\") pod \"87e93241-daea-4fbc-b947-8edb8b8ea521\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " Mar 12 21:26:49.218590 master-0 kubenswrapper[31456]: I0312 21:26:49.218573 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87e93241-daea-4fbc-b947-8edb8b8ea521-config-data-custom\") pod \"87e93241-daea-4fbc-b947-8edb8b8ea521\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " Mar 12 21:26:49.218682 master-0 kubenswrapper[31456]: I0312 21:26:49.218670 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-var-lib-cinder\") pod \"87e93241-daea-4fbc-b947-8edb8b8ea521\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " Mar 12 21:26:49.219139 master-0 kubenswrapper[31456]: I0312 21:26:49.219125 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-run\") pod \"87e93241-daea-4fbc-b947-8edb8b8ea521\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " Mar 12 21:26:49.219379 master-0 kubenswrapper[31456]: I0312 21:26:49.219236 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-lib-modules\") pod \"87e93241-daea-4fbc-b947-8edb8b8ea521\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " Mar 12 21:26:49.220559 master-0 kubenswrapper[31456]: I0312 21:26:49.220520 31456 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-etc-nvme\") pod \"87e93241-daea-4fbc-b947-8edb8b8ea521\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " Mar 12 21:26:49.220736 master-0 kubenswrapper[31456]: I0312 21:26:49.220699 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94hrv\" (UniqueName: \"kubernetes.io/projected/cc0e046b-34a2-4a0f-a4e6-87aad153b7a1-kube-api-access-94hrv\") pod \"cc0e046b-34a2-4a0f-a4e6-87aad153b7a1\" (UID: \"cc0e046b-34a2-4a0f-a4e6-87aad153b7a1\") " Mar 12 21:26:49.220904 master-0 kubenswrapper[31456]: I0312 21:26:49.220864 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-etc-machine-id\") pod \"87e93241-daea-4fbc-b947-8edb8b8ea521\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " Mar 12 21:26:49.221114 master-0 kubenswrapper[31456]: I0312 21:26:49.221094 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-var-locks-brick\") pod \"87e93241-daea-4fbc-b947-8edb8b8ea521\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " Mar 12 21:26:49.221391 master-0 kubenswrapper[31456]: I0312 21:26:49.221354 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-sys\") pod \"87e93241-daea-4fbc-b947-8edb8b8ea521\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " Mar 12 21:26:49.221792 master-0 kubenswrapper[31456]: I0312 21:26:49.221763 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc0e046b-34a2-4a0f-a4e6-87aad153b7a1-operator-scripts\") pod 
\"cc0e046b-34a2-4a0f-a4e6-87aad153b7a1\" (UID: \"cc0e046b-34a2-4a0f-a4e6-87aad153b7a1\") " Mar 12 21:26:49.221988 master-0 kubenswrapper[31456]: I0312 21:26:49.221945 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "87e93241-daea-4fbc-b947-8edb8b8ea521" (UID: "87e93241-daea-4fbc-b947-8edb8b8ea521"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:26:49.222070 master-0 kubenswrapper[31456]: I0312 21:26:49.222007 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "87e93241-daea-4fbc-b947-8edb8b8ea521" (UID: "87e93241-daea-4fbc-b947-8edb8b8ea521"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:26:49.222070 master-0 kubenswrapper[31456]: I0312 21:26:49.222032 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-run" (OuterVolumeSpecName: "run") pod "87e93241-daea-4fbc-b947-8edb8b8ea521" (UID: "87e93241-daea-4fbc-b947-8edb8b8ea521"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:26:49.222070 master-0 kubenswrapper[31456]: I0312 21:26:49.222062 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "87e93241-daea-4fbc-b947-8edb8b8ea521" (UID: "87e93241-daea-4fbc-b947-8edb8b8ea521"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:26:49.222219 master-0 kubenswrapper[31456]: I0312 21:26:49.222083 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "87e93241-daea-4fbc-b947-8edb8b8ea521" (UID: "87e93241-daea-4fbc-b947-8edb8b8ea521"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 21:26:49.222386 master-0 kubenswrapper[31456]: I0312 21:26:49.222269 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-dev\") pod \"87e93241-daea-4fbc-b947-8edb8b8ea521\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " Mar 12 21:26:49.222505 master-0 kubenswrapper[31456]: I0312 21:26:49.222491 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87e93241-daea-4fbc-b947-8edb8b8ea521-combined-ca-bundle\") pod \"87e93241-daea-4fbc-b947-8edb8b8ea521\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " Mar 12 21:26:49.222760 master-0 kubenswrapper[31456]: I0312 21:26:49.222741 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6blsv\" (UniqueName: \"kubernetes.io/projected/87e93241-daea-4fbc-b947-8edb8b8ea521-kube-api-access-6blsv\") pod \"87e93241-daea-4fbc-b947-8edb8b8ea521\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " Mar 12 21:26:49.223023 master-0 kubenswrapper[31456]: I0312 21:26:49.223004 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-etc-iscsi\") pod \"87e93241-daea-4fbc-b947-8edb8b8ea521\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") " Mar 12 21:26:49.223567 
master-0 kubenswrapper[31456]: I0312 21:26:49.223546 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-var-locks-cinder\") pod \"87e93241-daea-4fbc-b947-8edb8b8ea521\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") "
Mar 12 21:26:49.224734 master-0 kubenswrapper[31456]: I0312 21:26:49.224715 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87e93241-daea-4fbc-b947-8edb8b8ea521-scripts\") pod \"87e93241-daea-4fbc-b947-8edb8b8ea521\" (UID: \"87e93241-daea-4fbc-b947-8edb8b8ea521\") "
Mar 12 21:26:49.231586 master-0 kubenswrapper[31456]: I0312 21:26:49.231536 31456 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-etc-machine-id\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.231849 master-0 kubenswrapper[31456]: I0312 21:26:49.231836 31456 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-var-locks-brick\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.231926 master-0 kubenswrapper[31456]: I0312 21:26:49.231915 31456 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-etc-nvme\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.231990 master-0 kubenswrapper[31456]: I0312 21:26:49.231980 31456 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30465684-0661-4306-8903-d8aa99f95fd7-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.232107 master-0 kubenswrapper[31456]: I0312 21:26:49.232047 31456 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6ae05fd-97f9-4b9b-8067-70ef070e1de7-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.232260 master-0 kubenswrapper[31456]: I0312 21:26:49.232244 31456 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-dev\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.232348 master-0 kubenswrapper[31456]: I0312 21:26:49.232334 31456 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-var-locks-brick\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.232434 master-0 kubenswrapper[31456]: I0312 21:26:49.232418 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlvvx\" (UniqueName: \"kubernetes.io/projected/c6ae05fd-97f9-4b9b-8067-70ef070e1de7-kube-api-access-xlvvx\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.232542 master-0 kubenswrapper[31456]: I0312 21:26:49.232527 31456 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-run\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.232621 master-0 kubenswrapper[31456]: I0312 21:26:49.232607 31456 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-var-locks-cinder\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.232739 master-0 kubenswrapper[31456]: I0312 21:26:49.232686 31456 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-etc-iscsi\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.232849 master-0 kubenswrapper[31456]: I0312 21:26:49.232835 31456 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-lib-modules\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.232939 master-0 kubenswrapper[31456]: I0312 21:26:49.232927 31456 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-sys\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.233010 master-0 kubenswrapper[31456]: I0312 21:26:49.232998 31456 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-var-lib-cinder\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.233091 master-0 kubenswrapper[31456]: I0312 21:26:49.233079 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68vtz\" (UniqueName: \"kubernetes.io/projected/30465684-0661-4306-8903-d8aa99f95fd7-kube-api-access-68vtz\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.233311 master-0 kubenswrapper[31456]: I0312 21:26:49.223204 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-dev" (OuterVolumeSpecName: "dev") pod "87e93241-daea-4fbc-b947-8edb8b8ea521" (UID: "87e93241-daea-4fbc-b947-8edb8b8ea521"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 21:26:49.233393 master-0 kubenswrapper[31456]: I0312 21:26:49.223241 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-sys" (OuterVolumeSpecName: "sys") pod "87e93241-daea-4fbc-b947-8edb8b8ea521" (UID: "87e93241-daea-4fbc-b947-8edb8b8ea521"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 21:26:49.233393 master-0 kubenswrapper[31456]: I0312 21:26:49.230027 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "87e93241-daea-4fbc-b947-8edb8b8ea521" (UID: "87e93241-daea-4fbc-b947-8edb8b8ea521"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 21:26:49.233393 master-0 kubenswrapper[31456]: I0312 21:26:49.230089 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "87e93241-daea-4fbc-b947-8edb8b8ea521" (UID: "87e93241-daea-4fbc-b947-8edb8b8ea521"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 21:26:49.233505 master-0 kubenswrapper[31456]: I0312 21:26:49.233428 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "87e93241-daea-4fbc-b947-8edb8b8ea521" (UID: "87e93241-daea-4fbc-b947-8edb8b8ea521"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 21:26:49.234246 master-0 kubenswrapper[31456]: I0312 21:26:49.234121 31456 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/30465684-0661-4306-8903-d8aa99f95fd7-var-lib-cinder\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.234246 master-0 kubenswrapper[31456]: I0312 21:26:49.234243 31456 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-run\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.234391 master-0 kubenswrapper[31456]: I0312 21:26:49.234274 31456 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-etc-nvme\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.234391 master-0 kubenswrapper[31456]: I0312 21:26:49.234287 31456 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-lib-modules\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.234391 master-0 kubenswrapper[31456]: I0312 21:26:49.234322 31456 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30465684-0661-4306-8903-d8aa99f95fd7-config-data-custom\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.256181 master-0 kubenswrapper[31456]: I0312 21:26:49.256102 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc0e046b-34a2-4a0f-a4e6-87aad153b7a1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cc0e046b-34a2-4a0f-a4e6-87aad153b7a1" (UID: "cc0e046b-34a2-4a0f-a4e6-87aad153b7a1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:26:49.298290 master-0 kubenswrapper[31456]: I0312 21:26:49.298242 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87e93241-daea-4fbc-b947-8edb8b8ea521-kube-api-access-6blsv" (OuterVolumeSpecName: "kube-api-access-6blsv") pod "87e93241-daea-4fbc-b947-8edb8b8ea521" (UID: "87e93241-daea-4fbc-b947-8edb8b8ea521"). InnerVolumeSpecName "kube-api-access-6blsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 21:26:49.298417 master-0 kubenswrapper[31456]: I0312 21:26:49.298342 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87e93241-daea-4fbc-b947-8edb8b8ea521-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "87e93241-daea-4fbc-b947-8edb8b8ea521" (UID: "87e93241-daea-4fbc-b947-8edb8b8ea521"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:26:49.298462 master-0 kubenswrapper[31456]: I0312 21:26:49.298250 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87e93241-daea-4fbc-b947-8edb8b8ea521-scripts" (OuterVolumeSpecName: "scripts") pod "87e93241-daea-4fbc-b947-8edb8b8ea521" (UID: "87e93241-daea-4fbc-b947-8edb8b8ea521"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:26:49.298599 master-0 kubenswrapper[31456]: I0312 21:26:49.298523 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc0e046b-34a2-4a0f-a4e6-87aad153b7a1-kube-api-access-94hrv" (OuterVolumeSpecName: "kube-api-access-94hrv") pod "cc0e046b-34a2-4a0f-a4e6-87aad153b7a1" (UID: "cc0e046b-34a2-4a0f-a4e6-87aad153b7a1"). InnerVolumeSpecName "kube-api-access-94hrv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 21:26:49.353253 master-0 kubenswrapper[31456]: I0312 21:26:49.353190 31456 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-var-locks-cinder\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.353253 master-0 kubenswrapper[31456]: I0312 21:26:49.353233 31456 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87e93241-daea-4fbc-b947-8edb8b8ea521-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.353253 master-0 kubenswrapper[31456]: I0312 21:26:49.353243 31456 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87e93241-daea-4fbc-b947-8edb8b8ea521-config-data-custom\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.353253 master-0 kubenswrapper[31456]: I0312 21:26:49.353254 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94hrv\" (UniqueName: \"kubernetes.io/projected/cc0e046b-34a2-4a0f-a4e6-87aad153b7a1-kube-api-access-94hrv\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.353253 master-0 kubenswrapper[31456]: I0312 21:26:49.353265 31456 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-etc-machine-id\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.353253 master-0 kubenswrapper[31456]: I0312 21:26:49.353275 31456 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-sys\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.353680 master-0 kubenswrapper[31456]: I0312 21:26:49.353285 31456 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc0e046b-34a2-4a0f-a4e6-87aad153b7a1-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.353680 master-0 kubenswrapper[31456]: I0312 21:26:49.353296 31456 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-dev\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.353680 master-0 kubenswrapper[31456]: I0312 21:26:49.353304 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6blsv\" (UniqueName: \"kubernetes.io/projected/87e93241-daea-4fbc-b947-8edb8b8ea521-kube-api-access-6blsv\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.353680 master-0 kubenswrapper[31456]: I0312 21:26:49.353313 31456 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/87e93241-daea-4fbc-b947-8edb8b8ea521-etc-iscsi\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.406841 master-0 kubenswrapper[31456]: I0312 21:26:49.406775 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87e93241-daea-4fbc-b947-8edb8b8ea521-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87e93241-daea-4fbc-b947-8edb8b8ea521" (UID: "87e93241-daea-4fbc-b947-8edb8b8ea521"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:26:49.437149 master-0 kubenswrapper[31456]: I0312 21:26:49.437088 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30465684-0661-4306-8903-d8aa99f95fd7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30465684-0661-4306-8903-d8aa99f95fd7" (UID: "30465684-0661-4306-8903-d8aa99f95fd7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:26:49.443411 master-0 kubenswrapper[31456]: I0312 21:26:49.443377 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7fa7f-scheduler-0"
Mar 12 21:26:49.469371 master-0 kubenswrapper[31456]: I0312 21:26:49.456859 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30465684-0661-4306-8903-d8aa99f95fd7-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.469371 master-0 kubenswrapper[31456]: I0312 21:26:49.456897 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87e93241-daea-4fbc-b947-8edb8b8ea521-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.480504 master-0 kubenswrapper[31456]: I0312 21:26:49.477173 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30465684-0661-4306-8903-d8aa99f95fd7-config-data" (OuterVolumeSpecName: "config-data") pod "30465684-0661-4306-8903-d8aa99f95fd7" (UID: "30465684-0661-4306-8903-d8aa99f95fd7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:26:49.528447 master-0 kubenswrapper[31456]: I0312 21:26:49.528408 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-b47877c79-c5fvh"]
Mar 12 21:26:49.557988 master-0 kubenswrapper[31456]: I0312 21:26:49.557897 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a2f5eb4-3eff-4449-829b-2701ab9b6965-combined-ca-bundle\") pod \"8a2f5eb4-3eff-4449-829b-2701ab9b6965\" (UID: \"8a2f5eb4-3eff-4449-829b-2701ab9b6965\") "
Mar 12 21:26:49.557988 master-0 kubenswrapper[31456]: I0312 21:26:49.557941 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a2f5eb4-3eff-4449-829b-2701ab9b6965-config-data-custom\") pod \"8a2f5eb4-3eff-4449-829b-2701ab9b6965\" (UID: \"8a2f5eb4-3eff-4449-829b-2701ab9b6965\") "
Mar 12 21:26:49.558141 master-0 kubenswrapper[31456]: I0312 21:26:49.558056 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a2f5eb4-3eff-4449-829b-2701ab9b6965-scripts\") pod \"8a2f5eb4-3eff-4449-829b-2701ab9b6965\" (UID: \"8a2f5eb4-3eff-4449-829b-2701ab9b6965\") "
Mar 12 21:26:49.558141 master-0 kubenswrapper[31456]: I0312 21:26:49.558104 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcm2w\" (UniqueName: \"kubernetes.io/projected/8a2f5eb4-3eff-4449-829b-2701ab9b6965-kube-api-access-qcm2w\") pod \"8a2f5eb4-3eff-4449-829b-2701ab9b6965\" (UID: \"8a2f5eb4-3eff-4449-829b-2701ab9b6965\") "
Mar 12 21:26:49.558212 master-0 kubenswrapper[31456]: I0312 21:26:49.558155 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a2f5eb4-3eff-4449-829b-2701ab9b6965-config-data\") pod \"8a2f5eb4-3eff-4449-829b-2701ab9b6965\" (UID: \"8a2f5eb4-3eff-4449-829b-2701ab9b6965\") "
Mar 12 21:26:49.558212 master-0 kubenswrapper[31456]: I0312 21:26:49.558173 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8a2f5eb4-3eff-4449-829b-2701ab9b6965-etc-machine-id\") pod \"8a2f5eb4-3eff-4449-829b-2701ab9b6965\" (UID: \"8a2f5eb4-3eff-4449-829b-2701ab9b6965\") "
Mar 12 21:26:49.558701 master-0 kubenswrapper[31456]: I0312 21:26:49.558677 31456 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30465684-0661-4306-8903-d8aa99f95fd7-config-data\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.558763 master-0 kubenswrapper[31456]: I0312 21:26:49.558719 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a2f5eb4-3eff-4449-829b-2701ab9b6965-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8a2f5eb4-3eff-4449-829b-2701ab9b6965" (UID: "8a2f5eb4-3eff-4449-829b-2701ab9b6965"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 21:26:49.558848 master-0 kubenswrapper[31456]: I0312 21:26:49.558687 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"]
Mar 12 21:26:49.571614 master-0 kubenswrapper[31456]: I0312 21:26:49.570391 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a2f5eb4-3eff-4449-829b-2701ab9b6965-kube-api-access-qcm2w" (OuterVolumeSpecName: "kube-api-access-qcm2w") pod "8a2f5eb4-3eff-4449-829b-2701ab9b6965" (UID: "8a2f5eb4-3eff-4449-829b-2701ab9b6965"). InnerVolumeSpecName "kube-api-access-qcm2w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 21:26:49.571614 master-0 kubenswrapper[31456]: I0312 21:26:49.570507 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a2f5eb4-3eff-4449-829b-2701ab9b6965-scripts" (OuterVolumeSpecName: "scripts") pod "8a2f5eb4-3eff-4449-829b-2701ab9b6965" (UID: "8a2f5eb4-3eff-4449-829b-2701ab9b6965"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:26:49.571614 master-0 kubenswrapper[31456]: I0312 21:26:49.570761 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a2f5eb4-3eff-4449-829b-2701ab9b6965-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8a2f5eb4-3eff-4449-829b-2701ab9b6965" (UID: "8a2f5eb4-3eff-4449-829b-2701ab9b6965"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:26:49.626364 master-0 kubenswrapper[31456]: I0312 21:26:49.626300 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87e93241-daea-4fbc-b947-8edb8b8ea521-config-data" (OuterVolumeSpecName: "config-data") pod "87e93241-daea-4fbc-b947-8edb8b8ea521" (UID: "87e93241-daea-4fbc-b947-8edb8b8ea521"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:26:49.659507 master-0 kubenswrapper[31456]: I0312 21:26:49.659406 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a2f5eb4-3eff-4449-829b-2701ab9b6965-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a2f5eb4-3eff-4449-829b-2701ab9b6965" (UID: "8a2f5eb4-3eff-4449-829b-2701ab9b6965"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:26:49.660583 master-0 kubenswrapper[31456]: I0312 21:26:49.660466 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a2f5eb4-3eff-4449-829b-2701ab9b6965-combined-ca-bundle\") pod \"8a2f5eb4-3eff-4449-829b-2701ab9b6965\" (UID: \"8a2f5eb4-3eff-4449-829b-2701ab9b6965\") "
Mar 12 21:26:49.661497 master-0 kubenswrapper[31456]: W0312 21:26:49.660765 31456 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/8a2f5eb4-3eff-4449-829b-2701ab9b6965/volumes/kubernetes.io~secret/combined-ca-bundle
Mar 12 21:26:49.661497 master-0 kubenswrapper[31456]: I0312 21:26:49.660799 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a2f5eb4-3eff-4449-829b-2701ab9b6965-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a2f5eb4-3eff-4449-829b-2701ab9b6965" (UID: "8a2f5eb4-3eff-4449-829b-2701ab9b6965"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:26:49.662611 master-0 kubenswrapper[31456]: I0312 21:26:49.662537 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcm2w\" (UniqueName: \"kubernetes.io/projected/8a2f5eb4-3eff-4449-829b-2701ab9b6965-kube-api-access-qcm2w\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.662611 master-0 kubenswrapper[31456]: I0312 21:26:49.662559 31456 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8a2f5eb4-3eff-4449-829b-2701ab9b6965-etc-machine-id\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.662611 master-0 kubenswrapper[31456]: I0312 21:26:49.662568 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a2f5eb4-3eff-4449-829b-2701ab9b6965-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.662611 master-0 kubenswrapper[31456]: I0312 21:26:49.662578 31456 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a2f5eb4-3eff-4449-829b-2701ab9b6965-config-data-custom\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.662611 master-0 kubenswrapper[31456]: I0312 21:26:49.662587 31456 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87e93241-daea-4fbc-b947-8edb8b8ea521-config-data\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.662611 master-0 kubenswrapper[31456]: I0312 21:26:49.662595 31456 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a2f5eb4-3eff-4449-829b-2701ab9b6965-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.755854 master-0 kubenswrapper[31456]: I0312 21:26:49.750991 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a2f5eb4-3eff-4449-829b-2701ab9b6965-config-data" (OuterVolumeSpecName: "config-data") pod "8a2f5eb4-3eff-4449-829b-2701ab9b6965" (UID: "8a2f5eb4-3eff-4449-829b-2701ab9b6965"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:26:49.770171 master-0 kubenswrapper[31456]: I0312 21:26:49.770118 31456 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a2f5eb4-3eff-4449-829b-2701ab9b6965-config-data\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:49.788164 master-0 kubenswrapper[31456]: I0312 21:26:49.788115 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6fd7f8b47c-vnhs9" event={"ID":"da04713b-ad0b-4167-8fd7-59bbf482eff1","Type":"ContainerStarted","Data":"15be59d41f6408fc4417554c9bcfb6301fdf3380a2adc46875649717696d63ae"}
Mar 12 21:26:49.790726 master-0 kubenswrapper[31456]: I0312 21:26:49.790690 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-b47877c79-c5fvh" event={"ID":"6d868331-ae79-4015-8f7b-c0aed1d33312","Type":"ContainerStarted","Data":"46a19d1e2f70927fd9a99ec691151657a8c7d9f247b922229d0b2d4580ad2ce9"}
Mar 12 21:26:49.800954 master-0 kubenswrapper[31456]: I0312 21:26:49.800048 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"93110548-5710-4149-bd72-8e42693c948e","Type":"ContainerStarted","Data":"f7e954db91aaf89417fa1c97733bcc9f7c12ed17e3983be01d9233277d43787c"}
Mar 12 21:26:49.804266 master-0 kubenswrapper[31456]: I0312 21:26:49.804225 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7df6b6dd9d-tfn65" event={"ID":"04f5ee7a-2895-4ca8-b99b-c0b6c8699050","Type":"ContainerStarted","Data":"4f6886cf3c238e53bd07dca375693413b12bd7926e702098522323a4a78620be"}
Mar 12 21:26:49.804355 master-0 kubenswrapper[31456]: I0312 21:26:49.804269 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7df6b6dd9d-tfn65" event={"ID":"04f5ee7a-2895-4ca8-b99b-c0b6c8699050","Type":"ContainerStarted","Data":"c919ee033af24d75e125ec0578c0f73de9b059cabbe4872294d2aad2a22bc19a"}
Mar 12 21:26:49.804560 master-0 kubenswrapper[31456]: I0312 21:26:49.804510 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7df6b6dd9d-tfn65"
Mar 12 21:26:49.804674 master-0 kubenswrapper[31456]: I0312 21:26:49.804654 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7df6b6dd9d-tfn65"
Mar 12 21:26:49.807497 master-0 kubenswrapper[31456]: I0312 21:26:49.807457 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" event={"ID":"87e93241-daea-4fbc-b947-8edb8b8ea521","Type":"ContainerDied","Data":"c1ca6c5970ca499d66e05145b24948a61f36ce64b26eeccd04f0e09c94ffe0ad"}
Mar 12 21:26:49.807584 master-0 kubenswrapper[31456]: I0312 21:26:49.807517 31456 scope.go:117] "RemoveContainer" containerID="787e2b678b94d4b263056b4730a580a0edfede9ddde73ec39dd298914e699c9d"
Mar 12 21:26:49.808034 master-0 kubenswrapper[31456]: I0312 21:26:49.807657 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0"
Mar 12 21:26:49.817097 master-0 kubenswrapper[31456]: I0312 21:26:49.817044 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-68659c9b47-m44wq" event={"ID":"33f0319b-6d84-4282-bbb5-9636e1b62647","Type":"ContainerStarted","Data":"a57efbd9fcc283ffbfbfd22a2e39e643722f7875a04d9b1ef627b20a4b992093"}
Mar 12 21:26:49.818087 master-0 kubenswrapper[31456]: I0312 21:26:49.818066 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-68659c9b47-m44wq"
Mar 12 21:26:49.828460 master-0 kubenswrapper[31456]: I0312 21:26:49.828399 31456 generic.go:334] "Generic (PLEG): container finished" podID="8a2f5eb4-3eff-4449-829b-2701ab9b6965" containerID="75983916a5cc174bede5f0b3476a439a89ccff686eea72bfa84975a55e6386d7" exitCode=0
Mar 12 21:26:49.829052 master-0 kubenswrapper[31456]: I0312 21:26:49.828518 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-jsnft"
Mar 12 21:26:49.829410 master-0 kubenswrapper[31456]: I0312 21:26:49.829374 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7fa7f-scheduler-0"
Mar 12 21:26:49.838951 master-0 kubenswrapper[31456]: I0312 21:26:49.838208 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7fa7f-backup-0"
Mar 12 21:26:49.846602 master-0 kubenswrapper[31456]: I0312 21:26:49.843647 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-e3e1-account-create-update-d66hf"
Mar 12 21:26:49.846602 master-0 kubenswrapper[31456]: I0312 21:26:49.843643 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-scheduler-0" event={"ID":"8a2f5eb4-3eff-4449-829b-2701ab9b6965","Type":"ContainerDied","Data":"75983916a5cc174bede5f0b3476a439a89ccff686eea72bfa84975a55e6386d7"}
Mar 12 21:26:49.846602 master-0 kubenswrapper[31456]: I0312 21:26:49.844219 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-scheduler-0" event={"ID":"8a2f5eb4-3eff-4449-829b-2701ab9b6965","Type":"ContainerDied","Data":"1fb8f1924050d6fac66adf075d11044e428802cf9d6f8f8f393b6b1d908d1fa7"}
Mar 12 21:26:49.879526 master-0 kubenswrapper[31456]: I0312 21:26:49.879423 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7df6b6dd9d-tfn65" podStartSLOduration=4.879398536 podStartE2EDuration="4.879398536s" podCreationTimestamp="2026-03-12 21:26:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:26:49.843690423 +0000 UTC m=+1070.918295751" watchObservedRunningTime="2026-03-12 21:26:49.879398536 +0000 UTC m=+1070.954003864"
Mar 12 21:26:49.897869 master-0 kubenswrapper[31456]: I0312 21:26:49.897707 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-neutron-agent-68659c9b47-m44wq" podStartSLOduration=3.381500667 podStartE2EDuration="6.897689099s" podCreationTimestamp="2026-03-12 21:26:43 +0000 UTC" firstStartedPulling="2026-03-12 21:26:45.113668852 +0000 UTC m=+1066.188274180" lastFinishedPulling="2026-03-12 21:26:48.629857264 +0000 UTC m=+1069.704462612" observedRunningTime="2026-03-12 21:26:49.888018415 +0000 UTC m=+1070.962623743" watchObservedRunningTime="2026-03-12 21:26:49.897689099 +0000 UTC m=+1070.972294427"
Mar 12 21:26:49.991715 master-0 kubenswrapper[31456]: I0312 21:26:49.990998 31456 scope.go:117] "RemoveContainer" containerID="3e73dd87325fd97b92f555444b1fbf4163313351b2fc93de5220677674539714"
Mar 12 21:26:50.056033 master-0 kubenswrapper[31456]: I0312 21:26:50.047880 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-7fa7f-volume-lvm-iscsi-0"]
Mar 12 21:26:50.073698 master-0 kubenswrapper[31456]: I0312 21:26:50.061480 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-7fa7f-volume-lvm-iscsi-0"]
Mar 12 21:26:50.098855 master-0 kubenswrapper[31456]: I0312 21:26:50.090887 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-7fa7f-backup-0"]
Mar 12 21:26:50.115849 master-0 kubenswrapper[31456]: I0312 21:26:50.106877 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-7fa7f-volume-lvm-iscsi-0"]
Mar 12 21:26:50.115849 master-0 kubenswrapper[31456]: E0312 21:26:50.107469 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30465684-0661-4306-8903-d8aa99f95fd7" containerName="probe"
Mar 12 21:26:50.115849 master-0 kubenswrapper[31456]: I0312 21:26:50.107484 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="30465684-0661-4306-8903-d8aa99f95fd7" containerName="probe"
Mar 12 21:26:50.115849 master-0 kubenswrapper[31456]: E0312 21:26:50.107508 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87e93241-daea-4fbc-b947-8edb8b8ea521" containerName="cinder-volume"
Mar 12 21:26:50.115849 master-0 kubenswrapper[31456]: I0312 21:26:50.107515 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="87e93241-daea-4fbc-b947-8edb8b8ea521" containerName="cinder-volume"
Mar 12 21:26:50.115849 master-0 kubenswrapper[31456]: E0312 21:26:50.107533 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87e93241-daea-4fbc-b947-8edb8b8ea521" containerName="probe"
Mar 12 21:26:50.115849 master-0 kubenswrapper[31456]: I0312 21:26:50.107539 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="87e93241-daea-4fbc-b947-8edb8b8ea521" containerName="probe"
Mar 12 21:26:50.115849 master-0 kubenswrapper[31456]: E0312 21:26:50.107567 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a2f5eb4-3eff-4449-829b-2701ab9b6965" containerName="probe"
Mar 12 21:26:50.115849 master-0 kubenswrapper[31456]: I0312 21:26:50.107573 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a2f5eb4-3eff-4449-829b-2701ab9b6965" containerName="probe"
Mar 12 21:26:50.115849 master-0 kubenswrapper[31456]: E0312 21:26:50.107580 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc0e046b-34a2-4a0f-a4e6-87aad153b7a1" containerName="mariadb-database-create"
Mar 12 21:26:50.115849 master-0 kubenswrapper[31456]: I0312 21:26:50.107589 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc0e046b-34a2-4a0f-a4e6-87aad153b7a1" containerName="mariadb-database-create"
Mar 12 21:26:50.115849 master-0 kubenswrapper[31456]: E0312 21:26:50.107597 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30465684-0661-4306-8903-d8aa99f95fd7" containerName="cinder-backup"
Mar 12 21:26:50.115849 master-0 kubenswrapper[31456]: I0312 21:26:50.107603 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="30465684-0661-4306-8903-d8aa99f95fd7" containerName="cinder-backup"
Mar 12 21:26:50.115849 master-0 kubenswrapper[31456]: E0312 21:26:50.107615 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a2f5eb4-3eff-4449-829b-2701ab9b6965" containerName="cinder-scheduler"
Mar 12 21:26:50.115849 master-0 kubenswrapper[31456]: I0312 21:26:50.107621 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a2f5eb4-3eff-4449-829b-2701ab9b6965" containerName="cinder-scheduler"
Mar 12 21:26:50.115849 master-0 kubenswrapper[31456]: E0312 21:26:50.107642 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6ae05fd-97f9-4b9b-8067-70ef070e1de7" containerName="mariadb-account-create-update"
Mar 12 21:26:50.115849 master-0 kubenswrapper[31456]: I0312 21:26:50.107648 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6ae05fd-97f9-4b9b-8067-70ef070e1de7" containerName="mariadb-account-create-update"
Mar 12 21:26:50.115849 master-0 kubenswrapper[31456]: I0312 21:26:50.107864 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc0e046b-34a2-4a0f-a4e6-87aad153b7a1" containerName="mariadb-database-create"
Mar 12 21:26:50.115849 master-0 kubenswrapper[31456]: I0312 21:26:50.107895 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="87e93241-daea-4fbc-b947-8edb8b8ea521" containerName="cinder-volume"
Mar 12 21:26:50.115849 master-0 kubenswrapper[31456]: I0312 21:26:50.107903 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a2f5eb4-3eff-4449-829b-2701ab9b6965" containerName="probe"
Mar 12 21:26:50.115849 master-0 kubenswrapper[31456]: I0312 21:26:50.107919 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="30465684-0661-4306-8903-d8aa99f95fd7" containerName="probe"
Mar 12 21:26:50.115849 master-0 kubenswrapper[31456]: I0312 21:26:50.107930 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="30465684-0661-4306-8903-d8aa99f95fd7" containerName="cinder-backup"
Mar 12 21:26:50.115849 master-0 kubenswrapper[31456]: I0312 21:26:50.107944 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6ae05fd-97f9-4b9b-8067-70ef070e1de7" containerName="mariadb-account-create-update"
Mar 12 21:26:50.115849 master-0 kubenswrapper[31456]: I0312 21:26:50.107958 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="87e93241-daea-4fbc-b947-8edb8b8ea521" containerName="probe"
Mar 12 21:26:50.115849 master-0 kubenswrapper[31456]: I0312 21:26:50.107972 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a2f5eb4-3eff-4449-829b-2701ab9b6965" containerName="cinder-scheduler"
Mar 12 21:26:50.132373 master-0 kubenswrapper[31456]: I0312 21:26:50.132315 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0"
Mar 12 21:26:50.135339 master-0 kubenswrapper[31456]: I0312 21:26:50.135287 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-7fa7f-volume-lvm-iscsi-config-data"
Mar 12 21:26:50.170533 master-0 kubenswrapper[31456]: I0312 21:26:50.162762 31456 scope.go:117] "RemoveContainer" containerID="44889ab62c63c88e5177207d476d928aaaaa3af9df39f77d51329fcbe6d62289"
Mar 12 21:26:50.170533 master-0 kubenswrapper[31456]: I0312 21:26:50.165086 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-7fa7f-backup-0"]
Mar 12 21:26:50.204068 master-0 kubenswrapper[31456]: I0312 21:26:50.200000 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-etc-iscsi\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0"
Mar 12 21:26:50.204068 master-0 kubenswrapper[31456]: I0312 21:26:50.200063 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-config-data-custom\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0"
Mar 12 21:26:50.204068 master-0 kubenswrapper[31456]: I0312 21:26:50.200126 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-lib-modules\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0"
Mar 12 21:26:50.204068 master-0 kubenswrapper[31456]: I0312 21:26:50.200163 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vqtd\" (UniqueName: \"kubernetes.io/projected/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-kube-api-access-6vqtd\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0"
Mar 12 21:26:50.204068 master-0 kubenswrapper[31456]: I0312 21:26:50.200181 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-var-locks-brick\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0"
Mar 12 21:26:50.204068 master-0 kubenswrapper[31456]: I0312 21:26:50.200210 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-config-data\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0"
Mar 12 21:26:50.204068 master-0 kubenswrapper[31456]: I0312 21:26:50.200228 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-var-locks-cinder\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0"
Mar 12 21:26:50.204068 master-0 kubenswrapper[31456]: I0312 21:26:50.200252 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName:
\"kubernetes.io/host-path/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-etc-nvme\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.204068 master-0 kubenswrapper[31456]: I0312 21:26:50.200269 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-var-lib-cinder\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.204068 master-0 kubenswrapper[31456]: I0312 21:26:50.200285 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-combined-ca-bundle\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.204068 master-0 kubenswrapper[31456]: I0312 21:26:50.200318 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-run\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.204068 master-0 kubenswrapper[31456]: I0312 21:26:50.200342 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-etc-machine-id\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.204068 master-0 kubenswrapper[31456]: I0312 21:26:50.200360 31456 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-scripts\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.204068 master-0 kubenswrapper[31456]: I0312 21:26:50.200389 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-dev\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.204068 master-0 kubenswrapper[31456]: I0312 21:26:50.200406 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-sys\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.219893 master-0 kubenswrapper[31456]: I0312 21:26:50.219830 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7fa7f-volume-lvm-iscsi-0"] Mar 12 21:26:50.269452 master-0 kubenswrapper[31456]: I0312 21:26:50.269371 31456 scope.go:117] "RemoveContainer" containerID="75983916a5cc174bede5f0b3476a439a89ccff686eea72bfa84975a55e6386d7" Mar 12 21:26:50.287441 master-0 kubenswrapper[31456]: I0312 21:26:50.287366 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-7fa7f-scheduler-0"] Mar 12 21:26:50.303603 master-0 kubenswrapper[31456]: I0312 21:26:50.302994 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-lib-modules\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: 
\"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.303603 master-0 kubenswrapper[31456]: I0312 21:26:50.303075 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vqtd\" (UniqueName: \"kubernetes.io/projected/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-kube-api-access-6vqtd\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.303603 master-0 kubenswrapper[31456]: I0312 21:26:50.303156 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-lib-modules\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.303603 master-0 kubenswrapper[31456]: I0312 21:26:50.303234 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-var-locks-brick\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.303603 master-0 kubenswrapper[31456]: I0312 21:26:50.303331 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-config-data\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.303603 master-0 kubenswrapper[31456]: I0312 21:26:50.303356 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-var-locks-cinder\") pod 
\"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.303603 master-0 kubenswrapper[31456]: I0312 21:26:50.303409 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-etc-nvme\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.303603 master-0 kubenswrapper[31456]: I0312 21:26:50.303440 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-var-lib-cinder\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.303603 master-0 kubenswrapper[31456]: I0312 21:26:50.303457 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-combined-ca-bundle\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.303603 master-0 kubenswrapper[31456]: I0312 21:26:50.303559 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-run\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.304086 master-0 kubenswrapper[31456]: I0312 21:26:50.303621 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-etc-machine-id\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.304086 master-0 kubenswrapper[31456]: I0312 21:26:50.303660 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-scripts\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.304086 master-0 kubenswrapper[31456]: I0312 21:26:50.303739 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-dev\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.304086 master-0 kubenswrapper[31456]: I0312 21:26:50.303757 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-sys\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.304086 master-0 kubenswrapper[31456]: I0312 21:26:50.303921 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-etc-iscsi\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.304086 master-0 kubenswrapper[31456]: I0312 21:26:50.303994 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-config-data-custom\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.306932 master-0 kubenswrapper[31456]: I0312 21:26:50.305116 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-etc-machine-id\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.306932 master-0 kubenswrapper[31456]: I0312 21:26:50.305419 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-sys\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.306932 master-0 kubenswrapper[31456]: I0312 21:26:50.305451 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-dev\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.306932 master-0 kubenswrapper[31456]: I0312 21:26:50.305474 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-etc-iscsi\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.309535 master-0 kubenswrapper[31456]: I0312 21:26:50.309489 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-var-locks-brick\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.309891 master-0 kubenswrapper[31456]: I0312 21:26:50.309857 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-etc-nvme\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.309947 master-0 kubenswrapper[31456]: I0312 21:26:50.309912 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-var-locks-cinder\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.309980 master-0 kubenswrapper[31456]: I0312 21:26:50.309951 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-var-lib-cinder\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.310249 master-0 kubenswrapper[31456]: I0312 21:26:50.310189 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-run\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.312852 master-0 kubenswrapper[31456]: I0312 21:26:50.312769 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-scripts\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.312956 master-0 kubenswrapper[31456]: I0312 21:26:50.312937 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-config-data-custom\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.313509 master-0 kubenswrapper[31456]: I0312 21:26:50.313474 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-config-data\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.326056 master-0 kubenswrapper[31456]: I0312 21:26:50.318607 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-combined-ca-bundle\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.326940 master-0 kubenswrapper[31456]: I0312 21:26:50.326833 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-7fa7f-scheduler-0"] Mar 12 21:26:50.326940 master-0 kubenswrapper[31456]: I0312 21:26:50.326896 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vqtd\" (UniqueName: \"kubernetes.io/projected/ca43b40c-f120-4ca2-bf2f-5b72af2082ce-kube-api-access-6vqtd\") pod \"cinder-7fa7f-volume-lvm-iscsi-0\" (UID: \"ca43b40c-f120-4ca2-bf2f-5b72af2082ce\") " 
pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.339493 master-0 kubenswrapper[31456]: I0312 21:26:50.339452 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-7fa7f-backup-0"] Mar 12 21:26:50.341663 master-0 kubenswrapper[31456]: I0312 21:26:50.341642 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.346717 master-0 kubenswrapper[31456]: I0312 21:26:50.344689 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-7fa7f-backup-config-data" Mar 12 21:26:50.417396 master-0 kubenswrapper[31456]: I0312 21:26:50.417305 31456 scope.go:117] "RemoveContainer" containerID="44889ab62c63c88e5177207d476d928aaaaa3af9df39f77d51329fcbe6d62289" Mar 12 21:26:50.418532 master-0 kubenswrapper[31456]: E0312 21:26:50.418481 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44889ab62c63c88e5177207d476d928aaaaa3af9df39f77d51329fcbe6d62289\": container with ID starting with 44889ab62c63c88e5177207d476d928aaaaa3af9df39f77d51329fcbe6d62289 not found: ID does not exist" containerID="44889ab62c63c88e5177207d476d928aaaaa3af9df39f77d51329fcbe6d62289" Mar 12 21:26:50.418586 master-0 kubenswrapper[31456]: I0312 21:26:50.418545 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44889ab62c63c88e5177207d476d928aaaaa3af9df39f77d51329fcbe6d62289"} err="failed to get container status \"44889ab62c63c88e5177207d476d928aaaaa3af9df39f77d51329fcbe6d62289\": rpc error: code = NotFound desc = could not find container \"44889ab62c63c88e5177207d476d928aaaaa3af9df39f77d51329fcbe6d62289\": container with ID starting with 44889ab62c63c88e5177207d476d928aaaaa3af9df39f77d51329fcbe6d62289 not found: ID does not exist" Mar 12 21:26:50.418586 master-0 kubenswrapper[31456]: I0312 21:26:50.418571 31456 scope.go:117] "RemoveContainer" 
containerID="75983916a5cc174bede5f0b3476a439a89ccff686eea72bfa84975a55e6386d7" Mar 12 21:26:50.419558 master-0 kubenswrapper[31456]: I0312 21:26:50.419535 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ff463ad5-abdf-4a29-8b11-3871edca3bd0-etc-nvme\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.419610 master-0 kubenswrapper[31456]: I0312 21:26:50.419576 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ff463ad5-abdf-4a29-8b11-3871edca3bd0-var-locks-brick\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.419651 master-0 kubenswrapper[31456]: I0312 21:26:50.419606 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff463ad5-abdf-4a29-8b11-3871edca3bd0-lib-modules\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.419685 master-0 kubenswrapper[31456]: I0312 21:26:50.419668 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ff463ad5-abdf-4a29-8b11-3871edca3bd0-var-lib-cinder\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.419826 master-0 kubenswrapper[31456]: I0312 21:26:50.419810 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff463ad5-abdf-4a29-8b11-3871edca3bd0-config-data\") pod 
\"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.419887 master-0 kubenswrapper[31456]: I0312 21:26:50.419869 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ff463ad5-abdf-4a29-8b11-3871edca3bd0-sys\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.419971 master-0 kubenswrapper[31456]: I0312 21:26:50.419956 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff463ad5-abdf-4a29-8b11-3871edca3bd0-config-data-custom\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.420012 master-0 kubenswrapper[31456]: I0312 21:26:50.420004 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ff463ad5-abdf-4a29-8b11-3871edca3bd0-run\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.420072 master-0 kubenswrapper[31456]: I0312 21:26:50.420057 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff463ad5-abdf-4a29-8b11-3871edca3bd0-scripts\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.420107 master-0 kubenswrapper[31456]: I0312 21:26:50.420082 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ff463ad5-abdf-4a29-8b11-3871edca3bd0-etc-iscsi\") pod \"cinder-7fa7f-backup-0\" 
(UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.420155 master-0 kubenswrapper[31456]: I0312 21:26:50.420137 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ff463ad5-abdf-4a29-8b11-3871edca3bd0-dev\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.420199 master-0 kubenswrapper[31456]: I0312 21:26:50.420167 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff463ad5-abdf-4a29-8b11-3871edca3bd0-combined-ca-bundle\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.420199 master-0 kubenswrapper[31456]: I0312 21:26:50.420189 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vs2d\" (UniqueName: \"kubernetes.io/projected/ff463ad5-abdf-4a29-8b11-3871edca3bd0-kube-api-access-8vs2d\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.420262 master-0 kubenswrapper[31456]: I0312 21:26:50.420208 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7fa7f-backup-0"] Mar 12 21:26:50.421126 master-0 kubenswrapper[31456]: E0312 21:26:50.421083 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75983916a5cc174bede5f0b3476a439a89ccff686eea72bfa84975a55e6386d7\": container with ID starting with 75983916a5cc174bede5f0b3476a439a89ccff686eea72bfa84975a55e6386d7 not found: ID does not exist" containerID="75983916a5cc174bede5f0b3476a439a89ccff686eea72bfa84975a55e6386d7" Mar 12 21:26:50.421193 master-0 
kubenswrapper[31456]: I0312 21:26:50.421136 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75983916a5cc174bede5f0b3476a439a89ccff686eea72bfa84975a55e6386d7"} err="failed to get container status \"75983916a5cc174bede5f0b3476a439a89ccff686eea72bfa84975a55e6386d7\": rpc error: code = NotFound desc = could not find container \"75983916a5cc174bede5f0b3476a439a89ccff686eea72bfa84975a55e6386d7\": container with ID starting with 75983916a5cc174bede5f0b3476a439a89ccff686eea72bfa84975a55e6386d7 not found: ID does not exist" Mar 12 21:26:50.422656 master-0 kubenswrapper[31456]: I0312 21:26:50.421416 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff463ad5-abdf-4a29-8b11-3871edca3bd0-etc-machine-id\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.422656 master-0 kubenswrapper[31456]: I0312 21:26:50.421501 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ff463ad5-abdf-4a29-8b11-3871edca3bd0-var-locks-cinder\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.437433 master-0 kubenswrapper[31456]: I0312 21:26:50.437373 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-7fa7f-scheduler-0"] Mar 12 21:26:50.439669 master-0 kubenswrapper[31456]: I0312 21:26:50.439649 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-7fa7f-scheduler-0" Mar 12 21:26:50.441860 master-0 kubenswrapper[31456]: I0312 21:26:50.441821 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-7fa7f-scheduler-config-data" Mar 12 21:26:50.471226 master-0 kubenswrapper[31456]: I0312 21:26:50.471190 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7fa7f-scheduler-0"] Mar 12 21:26:50.524881 master-0 kubenswrapper[31456]: I0312 21:26:50.524085 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/df2b019b-707a-46f9-b604-0a23165f07a6-config-data-custom\") pod \"cinder-7fa7f-scheduler-0\" (UID: \"df2b019b-707a-46f9-b604-0a23165f07a6\") " pod="openstack/cinder-7fa7f-scheduler-0" Mar 12 21:26:50.524881 master-0 kubenswrapper[31456]: I0312 21:26:50.524258 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/df2b019b-707a-46f9-b604-0a23165f07a6-etc-machine-id\") pod \"cinder-7fa7f-scheduler-0\" (UID: \"df2b019b-707a-46f9-b604-0a23165f07a6\") " pod="openstack/cinder-7fa7f-scheduler-0" Mar 12 21:26:50.524881 master-0 kubenswrapper[31456]: I0312 21:26:50.524417 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff463ad5-abdf-4a29-8b11-3871edca3bd0-etc-machine-id\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.524881 master-0 kubenswrapper[31456]: I0312 21:26:50.524479 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df2b019b-707a-46f9-b604-0a23165f07a6-scripts\") pod \"cinder-7fa7f-scheduler-0\" (UID: 
\"df2b019b-707a-46f9-b604-0a23165f07a6\") " pod="openstack/cinder-7fa7f-scheduler-0" Mar 12 21:26:50.524881 master-0 kubenswrapper[31456]: I0312 21:26:50.524519 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ff463ad5-abdf-4a29-8b11-3871edca3bd0-var-locks-cinder\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.524881 master-0 kubenswrapper[31456]: I0312 21:26:50.524521 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff463ad5-abdf-4a29-8b11-3871edca3bd0-etc-machine-id\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.524881 master-0 kubenswrapper[31456]: I0312 21:26:50.524540 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df2b019b-707a-46f9-b604-0a23165f07a6-config-data\") pod \"cinder-7fa7f-scheduler-0\" (UID: \"df2b019b-707a-46f9-b604-0a23165f07a6\") " pod="openstack/cinder-7fa7f-scheduler-0" Mar 12 21:26:50.524881 master-0 kubenswrapper[31456]: I0312 21:26:50.524622 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ff463ad5-abdf-4a29-8b11-3871edca3bd0-var-locks-cinder\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.524881 master-0 kubenswrapper[31456]: I0312 21:26:50.524791 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ff463ad5-abdf-4a29-8b11-3871edca3bd0-etc-nvme\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " 
pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.524881 master-0 kubenswrapper[31456]: I0312 21:26:50.524878 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ff463ad5-abdf-4a29-8b11-3871edca3bd0-var-locks-brick\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.525431 master-0 kubenswrapper[31456]: I0312 21:26:50.524922 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff463ad5-abdf-4a29-8b11-3871edca3bd0-lib-modules\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.525431 master-0 kubenswrapper[31456]: I0312 21:26:50.524993 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ff463ad5-abdf-4a29-8b11-3871edca3bd0-etc-nvme\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.525431 master-0 kubenswrapper[31456]: I0312 21:26:50.524997 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ff463ad5-abdf-4a29-8b11-3871edca3bd0-var-lib-cinder\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.525431 master-0 kubenswrapper[31456]: I0312 21:26:50.525066 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df2b019b-707a-46f9-b604-0a23165f07a6-combined-ca-bundle\") pod \"cinder-7fa7f-scheduler-0\" (UID: \"df2b019b-707a-46f9-b604-0a23165f07a6\") " pod="openstack/cinder-7fa7f-scheduler-0" Mar 12 
21:26:50.525431 master-0 kubenswrapper[31456]: I0312 21:26:50.525081 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ff463ad5-abdf-4a29-8b11-3871edca3bd0-var-lib-cinder\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.525431 master-0 kubenswrapper[31456]: I0312 21:26:50.525112 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff463ad5-abdf-4a29-8b11-3871edca3bd0-config-data\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.525431 master-0 kubenswrapper[31456]: I0312 21:26:50.525132 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ff463ad5-abdf-4a29-8b11-3871edca3bd0-var-locks-brick\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.525431 master-0 kubenswrapper[31456]: I0312 21:26:50.525142 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ff463ad5-abdf-4a29-8b11-3871edca3bd0-sys\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.525431 master-0 kubenswrapper[31456]: I0312 21:26:50.525178 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ff463ad5-abdf-4a29-8b11-3871edca3bd0-sys\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.525431 master-0 kubenswrapper[31456]: I0312 21:26:50.525198 31456 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff463ad5-abdf-4a29-8b11-3871edca3bd0-lib-modules\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.525431 master-0 kubenswrapper[31456]: I0312 21:26:50.525232 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2cq2\" (UniqueName: \"kubernetes.io/projected/df2b019b-707a-46f9-b604-0a23165f07a6-kube-api-access-h2cq2\") pod \"cinder-7fa7f-scheduler-0\" (UID: \"df2b019b-707a-46f9-b604-0a23165f07a6\") " pod="openstack/cinder-7fa7f-scheduler-0" Mar 12 21:26:50.525431 master-0 kubenswrapper[31456]: I0312 21:26:50.525276 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff463ad5-abdf-4a29-8b11-3871edca3bd0-config-data-custom\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.525431 master-0 kubenswrapper[31456]: I0312 21:26:50.525327 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ff463ad5-abdf-4a29-8b11-3871edca3bd0-run\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.525431 master-0 kubenswrapper[31456]: I0312 21:26:50.525368 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff463ad5-abdf-4a29-8b11-3871edca3bd0-scripts\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.525431 master-0 kubenswrapper[31456]: I0312 21:26:50.525390 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/ff463ad5-abdf-4a29-8b11-3871edca3bd0-etc-iscsi\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.525431 master-0 kubenswrapper[31456]: I0312 21:26:50.525426 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ff463ad5-abdf-4a29-8b11-3871edca3bd0-dev\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.526062 master-0 kubenswrapper[31456]: I0312 21:26:50.525447 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vs2d\" (UniqueName: \"kubernetes.io/projected/ff463ad5-abdf-4a29-8b11-3871edca3bd0-kube-api-access-8vs2d\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.526062 master-0 kubenswrapper[31456]: I0312 21:26:50.525474 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff463ad5-abdf-4a29-8b11-3871edca3bd0-combined-ca-bundle\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.526262 master-0 kubenswrapper[31456]: I0312 21:26:50.526233 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ff463ad5-abdf-4a29-8b11-3871edca3bd0-etc-iscsi\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.526322 master-0 kubenswrapper[31456]: I0312 21:26:50.526284 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ff463ad5-abdf-4a29-8b11-3871edca3bd0-run\") pod \"cinder-7fa7f-backup-0\" 
(UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.526528 master-0 kubenswrapper[31456]: I0312 21:26:50.526498 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ff463ad5-abdf-4a29-8b11-3871edca3bd0-dev\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.528797 master-0 kubenswrapper[31456]: I0312 21:26:50.528769 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff463ad5-abdf-4a29-8b11-3871edca3bd0-config-data\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.528910 master-0 kubenswrapper[31456]: I0312 21:26:50.528884 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff463ad5-abdf-4a29-8b11-3871edca3bd0-combined-ca-bundle\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.529328 master-0 kubenswrapper[31456]: I0312 21:26:50.529309 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff463ad5-abdf-4a29-8b11-3871edca3bd0-scripts\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.530043 master-0 kubenswrapper[31456]: I0312 21:26:50.530009 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff463ad5-abdf-4a29-8b11-3871edca3bd0-config-data-custom\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.548679 master-0 
kubenswrapper[31456]: I0312 21:26:50.548643 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vs2d\" (UniqueName: \"kubernetes.io/projected/ff463ad5-abdf-4a29-8b11-3871edca3bd0-kube-api-access-8vs2d\") pod \"cinder-7fa7f-backup-0\" (UID: \"ff463ad5-abdf-4a29-8b11-3871edca3bd0\") " pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.550931 master-0 kubenswrapper[31456]: I0312 21:26:50.550891 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:26:50.635823 master-0 kubenswrapper[31456]: I0312 21:26:50.630755 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/df2b019b-707a-46f9-b604-0a23165f07a6-config-data-custom\") pod \"cinder-7fa7f-scheduler-0\" (UID: \"df2b019b-707a-46f9-b604-0a23165f07a6\") " pod="openstack/cinder-7fa7f-scheduler-0" Mar 12 21:26:50.635823 master-0 kubenswrapper[31456]: I0312 21:26:50.630836 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/df2b019b-707a-46f9-b604-0a23165f07a6-etc-machine-id\") pod \"cinder-7fa7f-scheduler-0\" (UID: \"df2b019b-707a-46f9-b604-0a23165f07a6\") " pod="openstack/cinder-7fa7f-scheduler-0" Mar 12 21:26:50.635823 master-0 kubenswrapper[31456]: I0312 21:26:50.630881 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df2b019b-707a-46f9-b604-0a23165f07a6-scripts\") pod \"cinder-7fa7f-scheduler-0\" (UID: \"df2b019b-707a-46f9-b604-0a23165f07a6\") " pod="openstack/cinder-7fa7f-scheduler-0" Mar 12 21:26:50.635823 master-0 kubenswrapper[31456]: I0312 21:26:50.630901 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df2b019b-707a-46f9-b604-0a23165f07a6-config-data\") 
pod \"cinder-7fa7f-scheduler-0\" (UID: \"df2b019b-707a-46f9-b604-0a23165f07a6\") " pod="openstack/cinder-7fa7f-scheduler-0" Mar 12 21:26:50.635823 master-0 kubenswrapper[31456]: I0312 21:26:50.630965 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df2b019b-707a-46f9-b604-0a23165f07a6-combined-ca-bundle\") pod \"cinder-7fa7f-scheduler-0\" (UID: \"df2b019b-707a-46f9-b604-0a23165f07a6\") " pod="openstack/cinder-7fa7f-scheduler-0" Mar 12 21:26:50.635823 master-0 kubenswrapper[31456]: I0312 21:26:50.631000 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2cq2\" (UniqueName: \"kubernetes.io/projected/df2b019b-707a-46f9-b604-0a23165f07a6-kube-api-access-h2cq2\") pod \"cinder-7fa7f-scheduler-0\" (UID: \"df2b019b-707a-46f9-b604-0a23165f07a6\") " pod="openstack/cinder-7fa7f-scheduler-0" Mar 12 21:26:50.635823 master-0 kubenswrapper[31456]: I0312 21:26:50.631820 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/df2b019b-707a-46f9-b604-0a23165f07a6-etc-machine-id\") pod \"cinder-7fa7f-scheduler-0\" (UID: \"df2b019b-707a-46f9-b604-0a23165f07a6\") " pod="openstack/cinder-7fa7f-scheduler-0" Mar 12 21:26:50.636227 master-0 kubenswrapper[31456]: I0312 21:26:50.635989 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df2b019b-707a-46f9-b604-0a23165f07a6-combined-ca-bundle\") pod \"cinder-7fa7f-scheduler-0\" (UID: \"df2b019b-707a-46f9-b604-0a23165f07a6\") " pod="openstack/cinder-7fa7f-scheduler-0" Mar 12 21:26:50.637388 master-0 kubenswrapper[31456]: I0312 21:26:50.636657 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/df2b019b-707a-46f9-b604-0a23165f07a6-config-data-custom\") pod 
\"cinder-7fa7f-scheduler-0\" (UID: \"df2b019b-707a-46f9-b604-0a23165f07a6\") " pod="openstack/cinder-7fa7f-scheduler-0" Mar 12 21:26:50.637388 master-0 kubenswrapper[31456]: I0312 21:26:50.636668 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df2b019b-707a-46f9-b604-0a23165f07a6-scripts\") pod \"cinder-7fa7f-scheduler-0\" (UID: \"df2b019b-707a-46f9-b604-0a23165f07a6\") " pod="openstack/cinder-7fa7f-scheduler-0" Mar 12 21:26:50.640968 master-0 kubenswrapper[31456]: I0312 21:26:50.640677 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df2b019b-707a-46f9-b604-0a23165f07a6-config-data\") pod \"cinder-7fa7f-scheduler-0\" (UID: \"df2b019b-707a-46f9-b604-0a23165f07a6\") " pod="openstack/cinder-7fa7f-scheduler-0" Mar 12 21:26:50.650892 master-0 kubenswrapper[31456]: I0312 21:26:50.650819 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2cq2\" (UniqueName: \"kubernetes.io/projected/df2b019b-707a-46f9-b604-0a23165f07a6-kube-api-access-h2cq2\") pod \"cinder-7fa7f-scheduler-0\" (UID: \"df2b019b-707a-46f9-b604-0a23165f07a6\") " pod="openstack/cinder-7fa7f-scheduler-0" Mar 12 21:26:50.751639 master-0 kubenswrapper[31456]: I0312 21:26:50.751557 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:26:50.772035 master-0 kubenswrapper[31456]: I0312 21:26:50.771968 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-7fa7f-scheduler-0" Mar 12 21:26:50.917038 master-0 kubenswrapper[31456]: I0312 21:26:50.916337 31456 generic.go:334] "Generic (PLEG): container finished" podID="da04713b-ad0b-4167-8fd7-59bbf482eff1" containerID="15be59d41f6408fc4417554c9bcfb6301fdf3380a2adc46875649717696d63ae" exitCode=0 Mar 12 21:26:50.917038 master-0 kubenswrapper[31456]: I0312 21:26:50.916436 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6fd7f8b47c-vnhs9" event={"ID":"da04713b-ad0b-4167-8fd7-59bbf482eff1","Type":"ContainerDied","Data":"15be59d41f6408fc4417554c9bcfb6301fdf3380a2adc46875649717696d63ae"} Mar 12 21:26:50.922623 master-0 kubenswrapper[31456]: I0312 21:26:50.922573 31456 generic.go:334] "Generic (PLEG): container finished" podID="6d868331-ae79-4015-8f7b-c0aed1d33312" containerID="885dd1a3547de6a8e0fd3aaf650e04ef91bcb325f7a3bbd1e88a65dd6cac2743" exitCode=0 Mar 12 21:26:50.922733 master-0 kubenswrapper[31456]: I0312 21:26:50.922662 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-b47877c79-c5fvh" event={"ID":"6d868331-ae79-4015-8f7b-c0aed1d33312","Type":"ContainerDied","Data":"885dd1a3547de6a8e0fd3aaf650e04ef91bcb325f7a3bbd1e88a65dd6cac2743"} Mar 12 21:26:50.930737 master-0 kubenswrapper[31456]: I0312 21:26:50.930666 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"93110548-5710-4149-bd72-8e42693c948e","Type":"ContainerStarted","Data":"bc46bea3c2ffc8f29c3e2e2fa50d428a0f1e30d14168e9b85a9821d1fc3cc5a1"} Mar 12 21:26:51.109456 master-0 kubenswrapper[31456]: I0312 21:26:51.108660 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7fa7f-volume-lvm-iscsi-0"] Mar 12 21:26:51.191321 master-0 kubenswrapper[31456]: I0312 21:26:51.190435 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30465684-0661-4306-8903-d8aa99f95fd7" 
path="/var/lib/kubelet/pods/30465684-0661-4306-8903-d8aa99f95fd7/volumes" Mar 12 21:26:51.191321 master-0 kubenswrapper[31456]: I0312 21:26:51.191127 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87e93241-daea-4fbc-b947-8edb8b8ea521" path="/var/lib/kubelet/pods/87e93241-daea-4fbc-b947-8edb8b8ea521/volumes" Mar 12 21:26:51.193379 master-0 kubenswrapper[31456]: I0312 21:26:51.192729 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a2f5eb4-3eff-4449-829b-2701ab9b6965" path="/var/lib/kubelet/pods/8a2f5eb4-3eff-4449-829b-2701ab9b6965/volumes" Mar 12 21:26:51.428983 master-0 kubenswrapper[31456]: I0312 21:26:51.428494 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7fa7f-backup-0"] Mar 12 21:26:51.470035 master-0 kubenswrapper[31456]: I0312 21:26:51.467651 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7fa7f-scheduler-0"] Mar 12 21:26:51.965978 master-0 kubenswrapper[31456]: I0312 21:26:51.965912 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-backup-0" event={"ID":"ff463ad5-abdf-4a29-8b11-3871edca3bd0","Type":"ContainerStarted","Data":"73462012e650dbbb77115a84a3691a66d295e5dd1994586c7223f2a7c7f02c06"} Mar 12 21:26:51.965978 master-0 kubenswrapper[31456]: I0312 21:26:51.965980 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-backup-0" event={"ID":"ff463ad5-abdf-4a29-8b11-3871edca3bd0","Type":"ContainerStarted","Data":"bdc8e8f95bcfabfa16728bf1c3b6cc2fea262c4baf5237180f1049f3962cc7ba"} Mar 12 21:26:51.975212 master-0 kubenswrapper[31456]: I0312 21:26:51.975150 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" event={"ID":"ca43b40c-f120-4ca2-bf2f-5b72af2082ce","Type":"ContainerStarted","Data":"10fc509ce59884065fec2ec93860f6e9730410afed04ef87a3d0c810eb1cc828"} Mar 12 21:26:51.975212 master-0 kubenswrapper[31456]: I0312 21:26:51.975206 
31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" event={"ID":"ca43b40c-f120-4ca2-bf2f-5b72af2082ce","Type":"ContainerStarted","Data":"0dde417674f01785bc286077a87bf88d7ed661e117a23b0e1f23e6debdd499d6"} Mar 12 21:26:51.975212 master-0 kubenswrapper[31456]: I0312 21:26:51.975216 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" event={"ID":"ca43b40c-f120-4ca2-bf2f-5b72af2082ce","Type":"ContainerStarted","Data":"068c7717bcc531df4f85e1cd11acca453dfab4ab680c510242080244f7d78898"} Mar 12 21:26:51.981609 master-0 kubenswrapper[31456]: I0312 21:26:51.981546 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-scheduler-0" event={"ID":"df2b019b-707a-46f9-b604-0a23165f07a6","Type":"ContainerStarted","Data":"e8f4e815465791bd423307fe7b06a71817bfca365b0bdf3fd7c25075a1a5d3c4"} Mar 12 21:26:52.005800 master-0 kubenswrapper[31456]: I0312 21:26:52.002452 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-b47877c79-c5fvh" event={"ID":"6d868331-ae79-4015-8f7b-c0aed1d33312","Type":"ContainerStarted","Data":"d3e2acb9ffa4b93608121a9d0e73ddb590b02c36e3c94ecbc52717a926507f41"} Mar 12 21:26:52.005800 master-0 kubenswrapper[31456]: I0312 21:26:52.002505 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-b47877c79-c5fvh" event={"ID":"6d868331-ae79-4015-8f7b-c0aed1d33312","Type":"ContainerStarted","Data":"337cc03772d0d924ec94d1344b12a0bbde94eb05b48a1b03040357554d9f4340"} Mar 12 21:26:52.005800 master-0 kubenswrapper[31456]: I0312 21:26:52.003933 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-b47877c79-c5fvh" Mar 12 21:26:52.008755 master-0 kubenswrapper[31456]: I0312 21:26:52.008646 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" podStartSLOduration=2.00863496 
podStartE2EDuration="2.00863496s" podCreationTimestamp="2026-03-12 21:26:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:26:52.002259976 +0000 UTC m=+1073.076865304" watchObservedRunningTime="2026-03-12 21:26:52.00863496 +0000 UTC m=+1073.083240288" Mar 12 21:26:52.020750 master-0 kubenswrapper[31456]: I0312 21:26:52.019289 31456 generic.go:334] "Generic (PLEG): container finished" podID="da04713b-ad0b-4167-8fd7-59bbf482eff1" containerID="dd7904507dc97baee8a29f70f54bc90c52e5da6a42d8ac35617b6b2ca915d4b2" exitCode=1 Mar 12 21:26:52.020750 master-0 kubenswrapper[31456]: I0312 21:26:52.020538 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6fd7f8b47c-vnhs9" event={"ID":"da04713b-ad0b-4167-8fd7-59bbf482eff1","Type":"ContainerDied","Data":"dd7904507dc97baee8a29f70f54bc90c52e5da6a42d8ac35617b6b2ca915d4b2"} Mar 12 21:26:52.020750 master-0 kubenswrapper[31456]: I0312 21:26:52.020588 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6fd7f8b47c-vnhs9" event={"ID":"da04713b-ad0b-4167-8fd7-59bbf482eff1","Type":"ContainerStarted","Data":"a4df6024d86f50b06d0907fd422ea14b0d3e1da6cee9106e936d7664f2224edf"} Mar 12 21:26:52.021582 master-0 kubenswrapper[31456]: I0312 21:26:52.021543 31456 scope.go:117] "RemoveContainer" containerID="dd7904507dc97baee8a29f70f54bc90c52e5da6a42d8ac35617b6b2ca915d4b2" Mar 12 21:26:52.067576 master-0 kubenswrapper[31456]: I0312 21:26:52.067492 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-b47877c79-c5fvh" podStartSLOduration=5.067472925 podStartE2EDuration="5.067472925s" podCreationTimestamp="2026-03-12 21:26:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:26:52.039675752 +0000 UTC m=+1073.114281090" watchObservedRunningTime="2026-03-12 
21:26:52.067472925 +0000 UTC m=+1073.142078253" Mar 12 21:26:52.089370 master-0 kubenswrapper[31456]: I0312 21:26:52.088637 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-68659c9b47-m44wq" Mar 12 21:26:53.094891 master-0 kubenswrapper[31456]: I0312 21:26:53.089697 31456 generic.go:334] "Generic (PLEG): container finished" podID="da04713b-ad0b-4167-8fd7-59bbf482eff1" containerID="4c5219543a62de24861ea62e8d792fd8108c47ed818d078b611a5e3581272cc1" exitCode=1 Mar 12 21:26:53.094891 master-0 kubenswrapper[31456]: I0312 21:26:53.090301 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6fd7f8b47c-vnhs9" event={"ID":"da04713b-ad0b-4167-8fd7-59bbf482eff1","Type":"ContainerDied","Data":"4c5219543a62de24861ea62e8d792fd8108c47ed818d078b611a5e3581272cc1"} Mar 12 21:26:53.094891 master-0 kubenswrapper[31456]: I0312 21:26:53.090361 31456 scope.go:117] "RemoveContainer" containerID="dd7904507dc97baee8a29f70f54bc90c52e5da6a42d8ac35617b6b2ca915d4b2" Mar 12 21:26:53.094891 master-0 kubenswrapper[31456]: I0312 21:26:53.090582 31456 scope.go:117] "RemoveContainer" containerID="4c5219543a62de24861ea62e8d792fd8108c47ed818d078b611a5e3581272cc1" Mar 12 21:26:53.094891 master-0 kubenswrapper[31456]: E0312 21:26:53.090838 31456 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-6fd7f8b47c-vnhs9_openstack(da04713b-ad0b-4167-8fd7-59bbf482eff1)\"" pod="openstack/ironic-6fd7f8b47c-vnhs9" podUID="da04713b-ad0b-4167-8fd7-59bbf482eff1" Mar 12 21:26:53.099007 master-0 kubenswrapper[31456]: I0312 21:26:53.097796 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-backup-0" event={"ID":"ff463ad5-abdf-4a29-8b11-3871edca3bd0","Type":"ContainerStarted","Data":"6df865fc4dcaeb69c86875d3fbee1e852a2719bdc13773d7d2d23865394c343d"} Mar 12 
21:26:53.105839 master-0 kubenswrapper[31456]: I0312 21:26:53.103428 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-scheduler-0" event={"ID":"df2b019b-707a-46f9-b604-0a23165f07a6","Type":"ContainerStarted","Data":"1d6fd68b55ec6c81bc8266cde105937b6008c16112cf9b6a2afdd59856056ab4"} Mar 12 21:26:53.105839 master-0 kubenswrapper[31456]: I0312 21:26:53.103466 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7fa7f-scheduler-0" event={"ID":"df2b019b-707a-46f9-b604-0a23165f07a6","Type":"ContainerStarted","Data":"cae7f69a1cdb0a43dfe74681bab9e8f36e84e94ca3eadf341b2fe052806c425c"} Mar 12 21:26:53.227245 master-0 kubenswrapper[31456]: I0312 21:26:53.227134 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-7fa7f-backup-0" podStartSLOduration=3.227114991 podStartE2EDuration="3.227114991s" podCreationTimestamp="2026-03-12 21:26:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:26:53.222885069 +0000 UTC m=+1074.297490397" watchObservedRunningTime="2026-03-12 21:26:53.227114991 +0000 UTC m=+1074.301720319" Mar 12 21:26:53.263322 master-0 kubenswrapper[31456]: I0312 21:26:53.263129 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-7fa7f-scheduler-0" podStartSLOduration=3.263104082 podStartE2EDuration="3.263104082s" podCreationTimestamp="2026-03-12 21:26:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:26:53.247008282 +0000 UTC m=+1074.321613610" watchObservedRunningTime="2026-03-12 21:26:53.263104082 +0000 UTC m=+1074.337709410" Mar 12 21:26:53.776825 master-0 kubenswrapper[31456]: E0312 21:26:53.776726 31456 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created 
or running: checking if PID of a57efbd9fcc283ffbfbfd22a2e39e643722f7875a04d9b1ef627b20a4b992093 is running failed: container process not found" containerID="a57efbd9fcc283ffbfbfd22a2e39e643722f7875a04d9b1ef627b20a4b992093" cmd=["/bin/true"] Mar 12 21:26:53.777063 master-0 kubenswrapper[31456]: E0312 21:26:53.777018 31456 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a57efbd9fcc283ffbfbfd22a2e39e643722f7875a04d9b1ef627b20a4b992093 is running failed: container process not found" containerID="a57efbd9fcc283ffbfbfd22a2e39e643722f7875a04d9b1ef627b20a4b992093" cmd=["/bin/true"] Mar 12 21:26:53.777104 master-0 kubenswrapper[31456]: E0312 21:26:53.777065 31456 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a57efbd9fcc283ffbfbfd22a2e39e643722f7875a04d9b1ef627b20a4b992093 is running failed: container process not found" containerID="a57efbd9fcc283ffbfbfd22a2e39e643722f7875a04d9b1ef627b20a4b992093" cmd=["/bin/true"] Mar 12 21:26:53.777366 master-0 kubenswrapper[31456]: E0312 21:26:53.777332 31456 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a57efbd9fcc283ffbfbfd22a2e39e643722f7875a04d9b1ef627b20a4b992093 is running failed: container process not found" containerID="a57efbd9fcc283ffbfbfd22a2e39e643722f7875a04d9b1ef627b20a4b992093" cmd=["/bin/true"] Mar 12 21:26:53.777431 master-0 kubenswrapper[31456]: E0312 21:26:53.777363 31456 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a57efbd9fcc283ffbfbfd22a2e39e643722f7875a04d9b1ef627b20a4b992093 is running failed: container process not found" probeType="Readiness" pod="openstack/ironic-neutron-agent-68659c9b47-m44wq" 
podUID="33f0319b-6d84-4282-bbb5-9636e1b62647" containerName="ironic-neutron-agent"
Mar 12 21:26:53.777616 master-0 kubenswrapper[31456]: E0312 21:26:53.777587 31456 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a57efbd9fcc283ffbfbfd22a2e39e643722f7875a04d9b1ef627b20a4b992093 is running failed: container process not found" containerID="a57efbd9fcc283ffbfbfd22a2e39e643722f7875a04d9b1ef627b20a4b992093" cmd=["/bin/true"]
Mar 12 21:26:53.777854 master-0 kubenswrapper[31456]: E0312 21:26:53.777803 31456 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a57efbd9fcc283ffbfbfd22a2e39e643722f7875a04d9b1ef627b20a4b992093 is running failed: container process not found" containerID="a57efbd9fcc283ffbfbfd22a2e39e643722f7875a04d9b1ef627b20a4b992093" cmd=["/bin/true"]
Mar 12 21:26:53.777925 master-0 kubenswrapper[31456]: E0312 21:26:53.777852 31456 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a57efbd9fcc283ffbfbfd22a2e39e643722f7875a04d9b1ef627b20a4b992093 is running failed: container process not found" probeType="Liveness" pod="openstack/ironic-neutron-agent-68659c9b47-m44wq" podUID="33f0319b-6d84-4282-bbb5-9636e1b62647" containerName="ironic-neutron-agent"
Mar 12 21:26:53.937909 master-0 kubenswrapper[31456]: I0312 21:26:53.937178 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-9f5c477c4-jk268"
Mar 12 21:26:54.146900 master-0 kubenswrapper[31456]: I0312 21:26:54.146856 31456 generic.go:334] "Generic (PLEG): container finished" podID="33f0319b-6d84-4282-bbb5-9636e1b62647" containerID="a57efbd9fcc283ffbfbfd22a2e39e643722f7875a04d9b1ef627b20a4b992093" exitCode=1
Mar 12 21:26:54.147575 master-0 kubenswrapper[31456]: I0312 21:26:54.147123 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-68659c9b47-m44wq" event={"ID":"33f0319b-6d84-4282-bbb5-9636e1b62647","Type":"ContainerDied","Data":"a57efbd9fcc283ffbfbfd22a2e39e643722f7875a04d9b1ef627b20a4b992093"}
Mar 12 21:26:54.148866 master-0 kubenswrapper[31456]: I0312 21:26:54.148850 31456 scope.go:117] "RemoveContainer" containerID="a57efbd9fcc283ffbfbfd22a2e39e643722f7875a04d9b1ef627b20a4b992093"
Mar 12 21:26:54.168171 master-0 kubenswrapper[31456]: I0312 21:26:54.167895 31456 scope.go:117] "RemoveContainer" containerID="4c5219543a62de24861ea62e8d792fd8108c47ed818d078b611a5e3581272cc1"
Mar 12 21:26:54.169460 master-0 kubenswrapper[31456]: E0312 21:26:54.168499 31456 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-6fd7f8b47c-vnhs9_openstack(da04713b-ad0b-4167-8fd7-59bbf482eff1)\"" pod="openstack/ironic-6fd7f8b47c-vnhs9" podUID="da04713b-ad0b-4167-8fd7-59bbf482eff1"
Mar 12 21:26:54.190089 master-0 kubenswrapper[31456]: I0312 21:26:54.188019 31456 generic.go:334] "Generic (PLEG): container finished" podID="93110548-5710-4149-bd72-8e42693c948e" containerID="bc46bea3c2ffc8f29c3e2e2fa50d428a0f1e30d14168e9b85a9821d1fc3cc5a1" exitCode=0
Mar 12 21:26:54.190436 master-0 kubenswrapper[31456]: I0312 21:26:54.188240 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"93110548-5710-4149-bd72-8e42693c948e","Type":"ContainerDied","Data":"bc46bea3c2ffc8f29c3e2e2fa50d428a0f1e30d14168e9b85a9821d1fc3cc5a1"}
Mar 12 21:26:54.277862 master-0 kubenswrapper[31456]: I0312 21:26:54.277794 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c46756b57-z2p86"
Mar 12 21:26:54.424010 master-0 kubenswrapper[31456]: I0312 21:26:54.416094 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fb965499f-tgbww"]
Mar 12 21:26:54.424010 master-0 kubenswrapper[31456]: I0312 21:26:54.416432 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fb965499f-tgbww" podUID="dfcccd02-54d3-4d3c-ab23-4a94d72774b2" containerName="dnsmasq-dns" containerID="cri-o://c7300c8c12836196dfb71682830fc8ed4cb0c0cd765ecf8242f903039c038a8e" gracePeriod=10
Mar 12 21:26:54.599977 master-0 kubenswrapper[31456]: I0312 21:26:54.586981 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-6fd7f8b47c-vnhs9"
Mar 12 21:26:54.621917 master-0 kubenswrapper[31456]: I0312 21:26:54.613435 31456 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-6fd7f8b47c-vnhs9"
Mar 12 21:26:54.744415 master-0 kubenswrapper[31456]: I0312 21:26:54.744235 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-7fa7f-api-0"
Mar 12 21:26:55.211954 master-0 kubenswrapper[31456]: I0312 21:26:55.208074 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fb965499f-tgbww"
Mar 12 21:26:55.228321 master-0 kubenswrapper[31456]: I0312 21:26:55.228260 31456 generic.go:334] "Generic (PLEG): container finished" podID="dfcccd02-54d3-4d3c-ab23-4a94d72774b2" containerID="c7300c8c12836196dfb71682830fc8ed4cb0c0cd765ecf8242f903039c038a8e" exitCode=0
Mar 12 21:26:55.228550 master-0 kubenswrapper[31456]: I0312 21:26:55.228407 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fb965499f-tgbww"
Mar 12 21:26:55.243283 master-0 kubenswrapper[31456]: I0312 21:26:55.242151 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fb965499f-tgbww" event={"ID":"dfcccd02-54d3-4d3c-ab23-4a94d72774b2","Type":"ContainerDied","Data":"c7300c8c12836196dfb71682830fc8ed4cb0c0cd765ecf8242f903039c038a8e"}
Mar 12 21:26:55.243283 master-0 kubenswrapper[31456]: I0312 21:26:55.242211 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fb965499f-tgbww" event={"ID":"dfcccd02-54d3-4d3c-ab23-4a94d72774b2","Type":"ContainerDied","Data":"2da74ba708c3679ae6eb7bd863add43ee816ac1a7530ca5d3db711be1f8d4ee8"}
Mar 12 21:26:55.243283 master-0 kubenswrapper[31456]: I0312 21:26:55.242222 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-68659c9b47-m44wq" event={"ID":"33f0319b-6d84-4282-bbb5-9636e1b62647","Type":"ContainerStarted","Data":"25d851932f76492f77ec540e5fc888b98b753c017635d0122b5840c63be8e055"}
Mar 12 21:26:55.248966 master-0 kubenswrapper[31456]: I0312 21:26:55.243599 31456 scope.go:117] "RemoveContainer" containerID="c7300c8c12836196dfb71682830fc8ed4cb0c0cd765ecf8242f903039c038a8e"
Mar 12 21:26:55.248966 master-0 kubenswrapper[31456]: I0312 21:26:55.243948 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-68659c9b47-m44wq"
Mar 12 21:26:55.248966 master-0 kubenswrapper[31456]: I0312 21:26:55.245208 31456 scope.go:117] "RemoveContainer" containerID="4c5219543a62de24861ea62e8d792fd8108c47ed818d078b611a5e3581272cc1"
Mar 12 21:26:55.248966 master-0 kubenswrapper[31456]: E0312 21:26:55.247157 31456 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-6fd7f8b47c-vnhs9_openstack(da04713b-ad0b-4167-8fd7-59bbf482eff1)\"" pod="openstack/ironic-6fd7f8b47c-vnhs9" podUID="da04713b-ad0b-4167-8fd7-59bbf482eff1"
Mar 12 21:26:55.324658 master-0 kubenswrapper[31456]: I0312 21:26:55.324608 31456 scope.go:117] "RemoveContainer" containerID="ae82d49b9f45e1fc7b9ece309ab0ff32c0dae96c709845d6108c6aed2f7f373a"
Mar 12 21:26:55.366595 master-0 kubenswrapper[31456]: I0312 21:26:55.366541 31456 scope.go:117] "RemoveContainer" containerID="c7300c8c12836196dfb71682830fc8ed4cb0c0cd765ecf8242f903039c038a8e"
Mar 12 21:26:55.367310 master-0 kubenswrapper[31456]: E0312 21:26:55.367255 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7300c8c12836196dfb71682830fc8ed4cb0c0cd765ecf8242f903039c038a8e\": container with ID starting with c7300c8c12836196dfb71682830fc8ed4cb0c0cd765ecf8242f903039c038a8e not found: ID does not exist" containerID="c7300c8c12836196dfb71682830fc8ed4cb0c0cd765ecf8242f903039c038a8e"
Mar 12 21:26:55.367381 master-0 kubenswrapper[31456]: I0312 21:26:55.367309 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7300c8c12836196dfb71682830fc8ed4cb0c0cd765ecf8242f903039c038a8e"} err="failed to get container status \"c7300c8c12836196dfb71682830fc8ed4cb0c0cd765ecf8242f903039c038a8e\": rpc error: code = NotFound desc = could not find container \"c7300c8c12836196dfb71682830fc8ed4cb0c0cd765ecf8242f903039c038a8e\": container with ID starting with c7300c8c12836196dfb71682830fc8ed4cb0c0cd765ecf8242f903039c038a8e not found: ID does not exist"
Mar 12 21:26:55.367381 master-0 kubenswrapper[31456]: I0312 21:26:55.367337 31456 scope.go:117] "RemoveContainer" containerID="ae82d49b9f45e1fc7b9ece309ab0ff32c0dae96c709845d6108c6aed2f7f373a"
Mar 12 21:26:55.368147 master-0 kubenswrapper[31456]: E0312 21:26:55.368109 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae82d49b9f45e1fc7b9ece309ab0ff32c0dae96c709845d6108c6aed2f7f373a\": container with ID starting with ae82d49b9f45e1fc7b9ece309ab0ff32c0dae96c709845d6108c6aed2f7f373a not found: ID does not exist" containerID="ae82d49b9f45e1fc7b9ece309ab0ff32c0dae96c709845d6108c6aed2f7f373a"
Mar 12 21:26:55.368215 master-0 kubenswrapper[31456]: I0312 21:26:55.368160 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae82d49b9f45e1fc7b9ece309ab0ff32c0dae96c709845d6108c6aed2f7f373a"} err="failed to get container status \"ae82d49b9f45e1fc7b9ece309ab0ff32c0dae96c709845d6108c6aed2f7f373a\": rpc error: code = NotFound desc = could not find container \"ae82d49b9f45e1fc7b9ece309ab0ff32c0dae96c709845d6108c6aed2f7f373a\": container with ID starting with ae82d49b9f45e1fc7b9ece309ab0ff32c0dae96c709845d6108c6aed2f7f373a not found: ID does not exist"
Mar 12 21:26:55.437407 master-0 kubenswrapper[31456]: I0312 21:26:55.436436 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-config\") pod \"dfcccd02-54d3-4d3c-ab23-4a94d72774b2\" (UID: \"dfcccd02-54d3-4d3c-ab23-4a94d72774b2\") "
Mar 12 21:26:55.437407 master-0 kubenswrapper[31456]: I0312 21:26:55.436734 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p52vc\" (UniqueName: \"kubernetes.io/projected/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-kube-api-access-p52vc\") pod \"dfcccd02-54d3-4d3c-ab23-4a94d72774b2\" (UID: \"dfcccd02-54d3-4d3c-ab23-4a94d72774b2\") "
Mar 12 21:26:55.437407 master-0 kubenswrapper[31456]: I0312 21:26:55.436792 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-dns-swift-storage-0\") pod \"dfcccd02-54d3-4d3c-ab23-4a94d72774b2\" (UID: \"dfcccd02-54d3-4d3c-ab23-4a94d72774b2\") "
Mar 12 21:26:55.437407 master-0 kubenswrapper[31456]: I0312 21:26:55.436951 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-ovsdbserver-nb\") pod \"dfcccd02-54d3-4d3c-ab23-4a94d72774b2\" (UID: \"dfcccd02-54d3-4d3c-ab23-4a94d72774b2\") "
Mar 12 21:26:55.437407 master-0 kubenswrapper[31456]: I0312 21:26:55.437251 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-dns-svc\") pod \"dfcccd02-54d3-4d3c-ab23-4a94d72774b2\" (UID: \"dfcccd02-54d3-4d3c-ab23-4a94d72774b2\") "
Mar 12 21:26:55.437407 master-0 kubenswrapper[31456]: I0312 21:26:55.437327 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-ovsdbserver-sb\") pod \"dfcccd02-54d3-4d3c-ab23-4a94d72774b2\" (UID: \"dfcccd02-54d3-4d3c-ab23-4a94d72774b2\") "
Mar 12 21:26:55.487959 master-0 kubenswrapper[31456]: I0312 21:26:55.487879 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-kube-api-access-p52vc" (OuterVolumeSpecName: "kube-api-access-p52vc") pod "dfcccd02-54d3-4d3c-ab23-4a94d72774b2" (UID: "dfcccd02-54d3-4d3c-ab23-4a94d72774b2"). InnerVolumeSpecName "kube-api-access-p52vc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 21:26:55.511525 master-0 kubenswrapper[31456]: I0312 21:26:55.511462 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dfcccd02-54d3-4d3c-ab23-4a94d72774b2" (UID: "dfcccd02-54d3-4d3c-ab23-4a94d72774b2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:26:55.546702 master-0 kubenswrapper[31456]: I0312 21:26:55.546459 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-config" (OuterVolumeSpecName: "config") pod "dfcccd02-54d3-4d3c-ab23-4a94d72774b2" (UID: "dfcccd02-54d3-4d3c-ab23-4a94d72774b2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:26:55.547100 master-0 kubenswrapper[31456]: I0312 21:26:55.547067 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dfcccd02-54d3-4d3c-ab23-4a94d72774b2" (UID: "dfcccd02-54d3-4d3c-ab23-4a94d72774b2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:26:55.551705 master-0 kubenswrapper[31456]: I0312 21:26:55.551644 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0"
Mar 12 21:26:55.562109 master-0 kubenswrapper[31456]: I0312 21:26:55.561991 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dfcccd02-54d3-4d3c-ab23-4a94d72774b2" (UID: "dfcccd02-54d3-4d3c-ab23-4a94d72774b2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:26:55.571152 master-0 kubenswrapper[31456]: I0312 21:26:55.562450 31456 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-config\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:55.571152 master-0 kubenswrapper[31456]: I0312 21:26:55.562475 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p52vc\" (UniqueName: \"kubernetes.io/projected/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-kube-api-access-p52vc\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:55.571152 master-0 kubenswrapper[31456]: I0312 21:26:55.562484 31456 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:55.571152 master-0 kubenswrapper[31456]: I0312 21:26:55.562494 31456 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-dns-svc\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:55.571152 master-0 kubenswrapper[31456]: I0312 21:26:55.562506 31456 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:55.666107 master-0 kubenswrapper[31456]: I0312 21:26:55.664505 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dfcccd02-54d3-4d3c-ab23-4a94d72774b2" (UID: "dfcccd02-54d3-4d3c-ab23-4a94d72774b2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:26:55.666107 master-0 kubenswrapper[31456]: I0312 21:26:55.665335 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-ovsdbserver-nb\") pod \"dfcccd02-54d3-4d3c-ab23-4a94d72774b2\" (UID: \"dfcccd02-54d3-4d3c-ab23-4a94d72774b2\") "
Mar 12 21:26:55.666107 master-0 kubenswrapper[31456]: W0312 21:26:55.665455 31456 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/dfcccd02-54d3-4d3c-ab23-4a94d72774b2/volumes/kubernetes.io~configmap/ovsdbserver-nb
Mar 12 21:26:55.666107 master-0 kubenswrapper[31456]: I0312 21:26:55.665467 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dfcccd02-54d3-4d3c-ab23-4a94d72774b2" (UID: "dfcccd02-54d3-4d3c-ab23-4a94d72774b2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:26:55.670485 master-0 kubenswrapper[31456]: I0312 21:26:55.666656 31456 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dfcccd02-54d3-4d3c-ab23-4a94d72774b2-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Mar 12 21:26:55.752048 master-0 kubenswrapper[31456]: I0312 21:26:55.751982 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-7fa7f-backup-0"
Mar 12 21:26:55.772926 master-0 kubenswrapper[31456]: I0312 21:26:55.772854 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-7fa7f-scheduler-0"
Mar 12 21:26:55.909100 master-0 kubenswrapper[31456]: I0312 21:26:55.908967 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fb965499f-tgbww"]
Mar 12 21:26:55.921305 master-0 kubenswrapper[31456]: I0312 21:26:55.921164 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fb965499f-tgbww"]
Mar 12 21:26:56.252726 master-0 kubenswrapper[31456]: I0312 21:26:56.252568 31456 scope.go:117] "RemoveContainer" containerID="4c5219543a62de24861ea62e8d792fd8108c47ed818d078b611a5e3581272cc1"
Mar 12 21:26:56.253386 master-0 kubenswrapper[31456]: E0312 21:26:56.252943 31456 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-6fd7f8b47c-vnhs9_openstack(da04713b-ad0b-4167-8fd7-59bbf482eff1)\"" pod="openstack/ironic-6fd7f8b47c-vnhs9" podUID="da04713b-ad0b-4167-8fd7-59bbf482eff1"
Mar 12 21:26:56.972913 master-0 kubenswrapper[31456]: I0312 21:26:56.972774 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-b47877c79-c5fvh"
Mar 12 21:26:57.199902 master-0 kubenswrapper[31456]: I0312 21:26:57.197866 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfcccd02-54d3-4d3c-ab23-4a94d72774b2" path="/var/lib/kubelet/pods/dfcccd02-54d3-4d3c-ab23-4a94d72774b2/volumes"
Mar 12 21:26:57.199902 master-0 kubenswrapper[31456]: I0312 21:26:57.199523 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-6fd7f8b47c-vnhs9"]
Mar 12 21:26:57.273990 master-0 kubenswrapper[31456]: I0312 21:26:57.273876 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-6fd7f8b47c-vnhs9" podUID="da04713b-ad0b-4167-8fd7-59bbf482eff1" containerName="ironic-api-log" containerID="cri-o://a4df6024d86f50b06d0907fd422ea14b0d3e1da6cee9106e936d7664f2224edf" gracePeriod=60
Mar 12 21:26:58.558022 master-0 kubenswrapper[31456]: I0312 21:26:58.557932 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Mar 12 21:26:58.558830 master-0 kubenswrapper[31456]: E0312 21:26:58.558768 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfcccd02-54d3-4d3c-ab23-4a94d72774b2" containerName="init"
Mar 12 21:26:58.558830 master-0 kubenswrapper[31456]: I0312 21:26:58.558788 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfcccd02-54d3-4d3c-ab23-4a94d72774b2" containerName="init"
Mar 12 21:26:58.558920 master-0 kubenswrapper[31456]: E0312 21:26:58.558827 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfcccd02-54d3-4d3c-ab23-4a94d72774b2" containerName="dnsmasq-dns"
Mar 12 21:26:58.558920 master-0 kubenswrapper[31456]: I0312 21:26:58.558853 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfcccd02-54d3-4d3c-ab23-4a94d72774b2" containerName="dnsmasq-dns"
Mar 12 21:26:58.562151 master-0 kubenswrapper[31456]: I0312 21:26:58.561402 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfcccd02-54d3-4d3c-ab23-4a94d72774b2" containerName="dnsmasq-dns"
Mar 12 21:26:58.564206 master-0 kubenswrapper[31456]: I0312 21:26:58.564168 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Mar 12 21:26:58.572595 master-0 kubenswrapper[31456]: I0312 21:26:58.572468 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Mar 12 21:26:58.572673 master-0 kubenswrapper[31456]: I0312 21:26:58.572593 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Mar 12 21:26:58.580017 master-0 kubenswrapper[31456]: I0312 21:26:58.578387 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Mar 12 21:26:58.590050 master-0 kubenswrapper[31456]: I0312 21:26:58.589978 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-sync-6ldcl"]
Mar 12 21:26:58.591771 master-0 kubenswrapper[31456]: I0312 21:26:58.591740 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-6ldcl"
Mar 12 21:26:58.597326 master-0 kubenswrapper[31456]: I0312 21:26:58.597267 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data"
Mar 12 21:26:58.597637 master-0 kubenswrapper[31456]: I0312 21:26:58.597609 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts"
Mar 12 21:26:58.613718 master-0 kubenswrapper[31456]: I0312 21:26:58.613650 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c8c121d-9b72-44d7-af67-27dd9476ba5e-scripts\") pod \"ironic-inspector-db-sync-6ldcl\" (UID: \"3c8c121d-9b72-44d7-af67-27dd9476ba5e\") " pod="openstack/ironic-inspector-db-sync-6ldcl"
Mar 12 21:26:58.613718 master-0 kubenswrapper[31456]: I0312 21:26:58.613705 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80ad53ea-17b7-4691-a8dc-865ebf143679-combined-ca-bundle\") pod \"openstackclient\" (UID: \"80ad53ea-17b7-4691-a8dc-865ebf143679\") " pod="openstack/openstackclient"
Mar 12 21:26:58.613939 master-0 kubenswrapper[31456]: I0312 21:26:58.613736 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55kcp\" (UniqueName: \"kubernetes.io/projected/3c8c121d-9b72-44d7-af67-27dd9476ba5e-kube-api-access-55kcp\") pod \"ironic-inspector-db-sync-6ldcl\" (UID: \"3c8c121d-9b72-44d7-af67-27dd9476ba5e\") " pod="openstack/ironic-inspector-db-sync-6ldcl"
Mar 12 21:26:58.619002 master-0 kubenswrapper[31456]: I0312 21:26:58.613800 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/80ad53ea-17b7-4691-a8dc-865ebf143679-openstack-config-secret\") pod \"openstackclient\" (UID: \"80ad53ea-17b7-4691-a8dc-865ebf143679\") " pod="openstack/openstackclient"
Mar 12 21:26:58.619002 master-0 kubenswrapper[31456]: I0312 21:26:58.617425 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfq5j\" (UniqueName: \"kubernetes.io/projected/80ad53ea-17b7-4691-a8dc-865ebf143679-kube-api-access-dfq5j\") pod \"openstackclient\" (UID: \"80ad53ea-17b7-4691-a8dc-865ebf143679\") " pod="openstack/openstackclient"
Mar 12 21:26:58.619002 master-0 kubenswrapper[31456]: I0312 21:26:58.617516 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/3c8c121d-9b72-44d7-af67-27dd9476ba5e-etc-podinfo\") pod \"ironic-inspector-db-sync-6ldcl\" (UID: \"3c8c121d-9b72-44d7-af67-27dd9476ba5e\") " pod="openstack/ironic-inspector-db-sync-6ldcl"
Mar 12 21:26:58.619002 master-0 kubenswrapper[31456]: I0312 21:26:58.617544 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/80ad53ea-17b7-4691-a8dc-865ebf143679-openstack-config\") pod \"openstackclient\" (UID: \"80ad53ea-17b7-4691-a8dc-865ebf143679\") " pod="openstack/openstackclient"
Mar 12 21:26:58.619002 master-0 kubenswrapper[31456]: I0312 21:26:58.617580 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3c8c121d-9b72-44d7-af67-27dd9476ba5e-config\") pod \"ironic-inspector-db-sync-6ldcl\" (UID: \"3c8c121d-9b72-44d7-af67-27dd9476ba5e\") " pod="openstack/ironic-inspector-db-sync-6ldcl"
Mar 12 21:26:58.619002 master-0 kubenswrapper[31456]: I0312 21:26:58.617600 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c8c121d-9b72-44d7-af67-27dd9476ba5e-combined-ca-bundle\") pod \"ironic-inspector-db-sync-6ldcl\" (UID: \"3c8c121d-9b72-44d7-af67-27dd9476ba5e\") " pod="openstack/ironic-inspector-db-sync-6ldcl"
Mar 12 21:26:58.619002 master-0 kubenswrapper[31456]: I0312 21:26:58.617696 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/3c8c121d-9b72-44d7-af67-27dd9476ba5e-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-6ldcl\" (UID: \"3c8c121d-9b72-44d7-af67-27dd9476ba5e\") " pod="openstack/ironic-inspector-db-sync-6ldcl"
Mar 12 21:26:58.619002 master-0 kubenswrapper[31456]: I0312 21:26:58.617855 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/3c8c121d-9b72-44d7-af67-27dd9476ba5e-var-lib-ironic\") pod \"ironic-inspector-db-sync-6ldcl\" (UID: \"3c8c121d-9b72-44d7-af67-27dd9476ba5e\") " pod="openstack/ironic-inspector-db-sync-6ldcl"
Mar 12 21:26:58.626555 master-0 kubenswrapper[31456]: I0312 21:26:58.626481 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7b7fc99fd8-pc4wq"
Mar 12 21:26:58.663698 master-0 kubenswrapper[31456]: I0312 21:26:58.663302 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-6ldcl"]
Mar 12 21:26:58.723337 master-0 kubenswrapper[31456]: I0312 21:26:58.721892 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c8c121d-9b72-44d7-af67-27dd9476ba5e-scripts\") pod \"ironic-inspector-db-sync-6ldcl\" (UID: \"3c8c121d-9b72-44d7-af67-27dd9476ba5e\") " pod="openstack/ironic-inspector-db-sync-6ldcl"
Mar 12 21:26:58.723337 master-0 kubenswrapper[31456]: I0312 21:26:58.721997 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80ad53ea-17b7-4691-a8dc-865ebf143679-combined-ca-bundle\") pod \"openstackclient\" (UID: \"80ad53ea-17b7-4691-a8dc-865ebf143679\") " pod="openstack/openstackclient"
Mar 12 21:26:58.723337 master-0 kubenswrapper[31456]: I0312 21:26:58.722045 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55kcp\" (UniqueName: \"kubernetes.io/projected/3c8c121d-9b72-44d7-af67-27dd9476ba5e-kube-api-access-55kcp\") pod \"ironic-inspector-db-sync-6ldcl\" (UID: \"3c8c121d-9b72-44d7-af67-27dd9476ba5e\") " pod="openstack/ironic-inspector-db-sync-6ldcl"
Mar 12 21:26:58.723337 master-0 kubenswrapper[31456]: I0312 21:26:58.722156 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/80ad53ea-17b7-4691-a8dc-865ebf143679-openstack-config-secret\") pod \"openstackclient\" (UID: \"80ad53ea-17b7-4691-a8dc-865ebf143679\") " pod="openstack/openstackclient"
Mar 12 21:26:58.723337 master-0 kubenswrapper[31456]: I0312 21:26:58.722228 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfq5j\" (UniqueName: \"kubernetes.io/projected/80ad53ea-17b7-4691-a8dc-865ebf143679-kube-api-access-dfq5j\") pod \"openstackclient\" (UID: \"80ad53ea-17b7-4691-a8dc-865ebf143679\") " pod="openstack/openstackclient"
Mar 12 21:26:58.723337 master-0 kubenswrapper[31456]: I0312 21:26:58.722284 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/3c8c121d-9b72-44d7-af67-27dd9476ba5e-etc-podinfo\") pod \"ironic-inspector-db-sync-6ldcl\" (UID: \"3c8c121d-9b72-44d7-af67-27dd9476ba5e\") " pod="openstack/ironic-inspector-db-sync-6ldcl"
Mar 12 21:26:58.723337 master-0 kubenswrapper[31456]: I0312 21:26:58.722308 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/80ad53ea-17b7-4691-a8dc-865ebf143679-openstack-config\") pod \"openstackclient\" (UID: \"80ad53ea-17b7-4691-a8dc-865ebf143679\") " pod="openstack/openstackclient"
Mar 12 21:26:58.723337 master-0 kubenswrapper[31456]: I0312 21:26:58.722336 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3c8c121d-9b72-44d7-af67-27dd9476ba5e-config\") pod \"ironic-inspector-db-sync-6ldcl\" (UID: \"3c8c121d-9b72-44d7-af67-27dd9476ba5e\") " pod="openstack/ironic-inspector-db-sync-6ldcl"
Mar 12 21:26:58.723337 master-0 kubenswrapper[31456]: I0312 21:26:58.722357 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c8c121d-9b72-44d7-af67-27dd9476ba5e-combined-ca-bundle\") pod \"ironic-inspector-db-sync-6ldcl\" (UID: \"3c8c121d-9b72-44d7-af67-27dd9476ba5e\") " pod="openstack/ironic-inspector-db-sync-6ldcl"
Mar 12 21:26:58.723337 master-0 kubenswrapper[31456]: I0312 21:26:58.722422 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/3c8c121d-9b72-44d7-af67-27dd9476ba5e-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-6ldcl\" (UID: \"3c8c121d-9b72-44d7-af67-27dd9476ba5e\") " pod="openstack/ironic-inspector-db-sync-6ldcl"
Mar 12 21:26:58.725472 master-0 kubenswrapper[31456]: I0312 21:26:58.725401 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c8c121d-9b72-44d7-af67-27dd9476ba5e-scripts\") pod \"ironic-inspector-db-sync-6ldcl\" (UID: \"3c8c121d-9b72-44d7-af67-27dd9476ba5e\") " pod="openstack/ironic-inspector-db-sync-6ldcl"
Mar 12 21:26:58.725603 master-0 kubenswrapper[31456]: I0312 21:26:58.725575 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/3c8c121d-9b72-44d7-af67-27dd9476ba5e-var-lib-ironic\") pod \"ironic-inspector-db-sync-6ldcl\" (UID: \"3c8c121d-9b72-44d7-af67-27dd9476ba5e\") " pod="openstack/ironic-inspector-db-sync-6ldcl"
Mar 12 21:26:58.726131 master-0 kubenswrapper[31456]: I0312 21:26:58.726049 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/3c8c121d-9b72-44d7-af67-27dd9476ba5e-var-lib-ironic\") pod \"ironic-inspector-db-sync-6ldcl\" (UID: \"3c8c121d-9b72-44d7-af67-27dd9476ba5e\") " pod="openstack/ironic-inspector-db-sync-6ldcl"
Mar 12 21:26:58.729121 master-0 kubenswrapper[31456]: I0312 21:26:58.729044 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/3c8c121d-9b72-44d7-af67-27dd9476ba5e-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-6ldcl\" (UID: \"3c8c121d-9b72-44d7-af67-27dd9476ba5e\") " pod="openstack/ironic-inspector-db-sync-6ldcl"
Mar 12 21:26:58.730031 master-0 kubenswrapper[31456]: I0312 21:26:58.729986 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80ad53ea-17b7-4691-a8dc-865ebf143679-combined-ca-bundle\") pod \"openstackclient\" (UID: \"80ad53ea-17b7-4691-a8dc-865ebf143679\") " pod="openstack/openstackclient"
Mar 12 21:26:58.734472 master-0 kubenswrapper[31456]: I0312 21:26:58.733270 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/80ad53ea-17b7-4691-a8dc-865ebf143679-openstack-config-secret\") pod \"openstackclient\" (UID: \"80ad53ea-17b7-4691-a8dc-865ebf143679\") " pod="openstack/openstackclient"
Mar 12 21:26:58.736025 master-0 kubenswrapper[31456]: I0312 21:26:58.735994 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/80ad53ea-17b7-4691-a8dc-865ebf143679-openstack-config\") pod \"openstackclient\" (UID: \"80ad53ea-17b7-4691-a8dc-865ebf143679\") " pod="openstack/openstackclient"
Mar 12 21:26:58.751838 master-0 kubenswrapper[31456]: I0312 21:26:58.740400 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/3c8c121d-9b72-44d7-af67-27dd9476ba5e-etc-podinfo\") pod \"ironic-inspector-db-sync-6ldcl\" (UID: \"3c8c121d-9b72-44d7-af67-27dd9476ba5e\") " pod="openstack/ironic-inspector-db-sync-6ldcl"
Mar 12 21:26:58.751838 master-0 kubenswrapper[31456]: I0312 21:26:58.742881 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfq5j\" (UniqueName: \"kubernetes.io/projected/80ad53ea-17b7-4691-a8dc-865ebf143679-kube-api-access-dfq5j\") pod \"openstackclient\" (UID: \"80ad53ea-17b7-4691-a8dc-865ebf143679\") " pod="openstack/openstackclient"
Mar 12 21:26:58.768126 master-0 kubenswrapper[31456]: I0312 21:26:58.762891 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55kcp\" (UniqueName: \"kubernetes.io/projected/3c8c121d-9b72-44d7-af67-27dd9476ba5e-kube-api-access-55kcp\") pod \"ironic-inspector-db-sync-6ldcl\" (UID: \"3c8c121d-9b72-44d7-af67-27dd9476ba5e\") " pod="openstack/ironic-inspector-db-sync-6ldcl"
Mar 12 21:26:58.768126 master-0 kubenswrapper[31456]: I0312 21:26:58.764394 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3c8c121d-9b72-44d7-af67-27dd9476ba5e-config\") pod \"ironic-inspector-db-sync-6ldcl\" (UID: \"3c8c121d-9b72-44d7-af67-27dd9476ba5e\") " pod="openstack/ironic-inspector-db-sync-6ldcl"
Mar 12 21:26:58.776598 master-0 kubenswrapper[31456]: E0312 21:26:58.776493 31456 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 25d851932f76492f77ec540e5fc888b98b753c017635d0122b5840c63be8e055 is running failed: container process not found" containerID="25d851932f76492f77ec540e5fc888b98b753c017635d0122b5840c63be8e055" cmd=["/bin/true"]
Mar 12 21:26:58.776598 master-0 kubenswrapper[31456]: E0312 21:26:58.776596 31456 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 25d851932f76492f77ec540e5fc888b98b753c017635d0122b5840c63be8e055 is running failed: container process not found" containerID="25d851932f76492f77ec540e5fc888b98b753c017635d0122b5840c63be8e055" cmd=["/bin/true"]
Mar 12 21:26:58.776854 master-0 kubenswrapper[31456]: E0312 21:26:58.776819 31456 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 25d851932f76492f77ec540e5fc888b98b753c017635d0122b5840c63be8e055 is running failed: container process not found" containerID="25d851932f76492f77ec540e5fc888b98b753c017635d0122b5840c63be8e055" cmd=["/bin/true"]
Mar 12 21:26:58.777283 master-0 kubenswrapper[31456]: E0312 21:26:58.776876 31456 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 25d851932f76492f77ec540e5fc888b98b753c017635d0122b5840c63be8e055 is running failed: container process not found" containerID="25d851932f76492f77ec540e5fc888b98b753c017635d0122b5840c63be8e055" cmd=["/bin/true"]
Mar 12 21:26:58.777388 master-0 kubenswrapper[31456]: E0312 21:26:58.777338 31456 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 25d851932f76492f77ec540e5fc888b98b753c017635d0122b5840c63be8e055 is running failed: container process not found" containerID="25d851932f76492f77ec540e5fc888b98b753c017635d0122b5840c63be8e055" cmd=["/bin/true"]
Mar 12 21:26:58.777434 master-0 kubenswrapper[31456]: E0312 21:26:58.777385 31456 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 25d851932f76492f77ec540e5fc888b98b753c017635d0122b5840c63be8e055 is running failed: container process not found" probeType="Liveness" pod="openstack/ironic-neutron-agent-68659c9b47-m44wq" podUID="33f0319b-6d84-4282-bbb5-9636e1b62647" containerName="ironic-neutron-agent"
Mar 12 21:26:58.778040 master-0 kubenswrapper[31456]: E0312 21:26:58.778001 31456 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 25d851932f76492f77ec540e5fc888b98b753c017635d0122b5840c63be8e055 is running failed: container
process not found" containerID="25d851932f76492f77ec540e5fc888b98b753c017635d0122b5840c63be8e055" cmd=["/bin/true"] Mar 12 21:26:58.778040 master-0 kubenswrapper[31456]: E0312 21:26:58.778029 31456 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 25d851932f76492f77ec540e5fc888b98b753c017635d0122b5840c63be8e055 is running failed: container process not found" probeType="Readiness" pod="openstack/ironic-neutron-agent-68659c9b47-m44wq" podUID="33f0319b-6d84-4282-bbb5-9636e1b62647" containerName="ironic-neutron-agent" Mar 12 21:26:58.801788 master-0 kubenswrapper[31456]: I0312 21:26:58.792950 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c8c121d-9b72-44d7-af67-27dd9476ba5e-combined-ca-bundle\") pod \"ironic-inspector-db-sync-6ldcl\" (UID: \"3c8c121d-9b72-44d7-af67-27dd9476ba5e\") " pod="openstack/ironic-inspector-db-sync-6ldcl" Mar 12 21:26:58.932496 master-0 kubenswrapper[31456]: I0312 21:26:58.932436 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 12 21:26:58.943384 master-0 kubenswrapper[31456]: I0312 21:26:58.943325 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-6ldcl" Mar 12 21:27:01.105707 master-0 kubenswrapper[31456]: I0312 21:27:01.105011 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-7fa7f-backup-0" Mar 12 21:27:01.105707 master-0 kubenswrapper[31456]: I0312 21:27:01.105176 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-7fa7f-volume-lvm-iscsi-0" Mar 12 21:27:01.182240 master-0 kubenswrapper[31456]: I0312 21:27:01.178959 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-6fd7f8b47c-vnhs9" Mar 12 21:27:01.217669 master-0 kubenswrapper[31456]: I0312 21:27:01.216833 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-794f5bbfcf-tg98t" Mar 12 21:27:01.278588 master-0 kubenswrapper[31456]: I0312 21:27:01.278363 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/da04713b-ad0b-4167-8fd7-59bbf482eff1-config-data-merged\") pod \"da04713b-ad0b-4167-8fd7-59bbf482eff1\" (UID: \"da04713b-ad0b-4167-8fd7-59bbf482eff1\") " Mar 12 21:27:01.278588 master-0 kubenswrapper[31456]: I0312 21:27:01.278593 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da04713b-ad0b-4167-8fd7-59bbf482eff1-config-data\") pod \"da04713b-ad0b-4167-8fd7-59bbf482eff1\" (UID: \"da04713b-ad0b-4167-8fd7-59bbf482eff1\") " Mar 12 21:27:01.279187 master-0 kubenswrapper[31456]: I0312 21:27:01.278626 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbkb4\" (UniqueName: \"kubernetes.io/projected/da04713b-ad0b-4167-8fd7-59bbf482eff1-kube-api-access-dbkb4\") pod \"da04713b-ad0b-4167-8fd7-59bbf482eff1\" (UID: \"da04713b-ad0b-4167-8fd7-59bbf482eff1\") " Mar 12 21:27:01.279187 master-0 kubenswrapper[31456]: I0312 21:27:01.278706 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da04713b-ad0b-4167-8fd7-59bbf482eff1-combined-ca-bundle\") pod \"da04713b-ad0b-4167-8fd7-59bbf482eff1\" (UID: \"da04713b-ad0b-4167-8fd7-59bbf482eff1\") " Mar 12 21:27:01.279187 master-0 kubenswrapper[31456]: I0312 21:27:01.278734 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: 
\"kubernetes.io/downward-api/da04713b-ad0b-4167-8fd7-59bbf482eff1-etc-podinfo\") pod \"da04713b-ad0b-4167-8fd7-59bbf482eff1\" (UID: \"da04713b-ad0b-4167-8fd7-59bbf482eff1\") " Mar 12 21:27:01.279187 master-0 kubenswrapper[31456]: I0312 21:27:01.278786 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da04713b-ad0b-4167-8fd7-59bbf482eff1-config-data-custom\") pod \"da04713b-ad0b-4167-8fd7-59bbf482eff1\" (UID: \"da04713b-ad0b-4167-8fd7-59bbf482eff1\") " Mar 12 21:27:01.279187 master-0 kubenswrapper[31456]: I0312 21:27:01.278817 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da04713b-ad0b-4167-8fd7-59bbf482eff1-logs\") pod \"da04713b-ad0b-4167-8fd7-59bbf482eff1\" (UID: \"da04713b-ad0b-4167-8fd7-59bbf482eff1\") " Mar 12 21:27:01.279187 master-0 kubenswrapper[31456]: I0312 21:27:01.278843 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da04713b-ad0b-4167-8fd7-59bbf482eff1-scripts\") pod \"da04713b-ad0b-4167-8fd7-59bbf482eff1\" (UID: \"da04713b-ad0b-4167-8fd7-59bbf482eff1\") " Mar 12 21:27:01.279578 master-0 kubenswrapper[31456]: I0312 21:27:01.279231 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da04713b-ad0b-4167-8fd7-59bbf482eff1-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "da04713b-ad0b-4167-8fd7-59bbf482eff1" (UID: "da04713b-ad0b-4167-8fd7-59bbf482eff1"). InnerVolumeSpecName "config-data-merged". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 21:27:01.292570 master-0 kubenswrapper[31456]: I0312 21:27:01.280615 31456 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/da04713b-ad0b-4167-8fd7-59bbf482eff1-config-data-merged\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:01.292570 master-0 kubenswrapper[31456]: I0312 21:27:01.281162 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da04713b-ad0b-4167-8fd7-59bbf482eff1-logs" (OuterVolumeSpecName: "logs") pod "da04713b-ad0b-4167-8fd7-59bbf482eff1" (UID: "da04713b-ad0b-4167-8fd7-59bbf482eff1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 21:27:01.300360 master-0 kubenswrapper[31456]: I0312 21:27:01.300299 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da04713b-ad0b-4167-8fd7-59bbf482eff1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "da04713b-ad0b-4167-8fd7-59bbf482eff1" (UID: "da04713b-ad0b-4167-8fd7-59bbf482eff1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:27:01.301949 master-0 kubenswrapper[31456]: I0312 21:27:01.301654 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da04713b-ad0b-4167-8fd7-59bbf482eff1-scripts" (OuterVolumeSpecName: "scripts") pod "da04713b-ad0b-4167-8fd7-59bbf482eff1" (UID: "da04713b-ad0b-4167-8fd7-59bbf482eff1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:27:01.313830 master-0 kubenswrapper[31456]: I0312 21:27:01.313598 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da04713b-ad0b-4167-8fd7-59bbf482eff1-kube-api-access-dbkb4" (OuterVolumeSpecName: "kube-api-access-dbkb4") pod "da04713b-ad0b-4167-8fd7-59bbf482eff1" (UID: "da04713b-ad0b-4167-8fd7-59bbf482eff1"). InnerVolumeSpecName "kube-api-access-dbkb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:27:01.365857 master-0 kubenswrapper[31456]: I0312 21:27:01.363966 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/da04713b-ad0b-4167-8fd7-59bbf482eff1-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "da04713b-ad0b-4167-8fd7-59bbf482eff1" (UID: "da04713b-ad0b-4167-8fd7-59bbf482eff1"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 12 21:27:01.389839 master-0 kubenswrapper[31456]: I0312 21:27:01.384913 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbkb4\" (UniqueName: \"kubernetes.io/projected/da04713b-ad0b-4167-8fd7-59bbf482eff1-kube-api-access-dbkb4\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:01.389839 master-0 kubenswrapper[31456]: I0312 21:27:01.384966 31456 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/da04713b-ad0b-4167-8fd7-59bbf482eff1-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:01.389839 master-0 kubenswrapper[31456]: I0312 21:27:01.384975 31456 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da04713b-ad0b-4167-8fd7-59bbf482eff1-config-data-custom\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:01.389839 master-0 kubenswrapper[31456]: I0312 21:27:01.384984 31456 reconciler_common.go:293] "Volume detached for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/da04713b-ad0b-4167-8fd7-59bbf482eff1-logs\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:01.389839 master-0 kubenswrapper[31456]: I0312 21:27:01.384993 31456 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da04713b-ad0b-4167-8fd7-59bbf482eff1-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:01.413533 master-0 kubenswrapper[31456]: I0312 21:27:01.411584 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-7fa7f-scheduler-0" Mar 12 21:27:01.425094 master-0 kubenswrapper[31456]: I0312 21:27:01.424766 31456 generic.go:334] "Generic (PLEG): container finished" podID="33f0319b-6d84-4282-bbb5-9636e1b62647" containerID="25d851932f76492f77ec540e5fc888b98b753c017635d0122b5840c63be8e055" exitCode=1 Mar 12 21:27:01.425094 master-0 kubenswrapper[31456]: I0312 21:27:01.424938 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-68659c9b47-m44wq" event={"ID":"33f0319b-6d84-4282-bbb5-9636e1b62647","Type":"ContainerDied","Data":"25d851932f76492f77ec540e5fc888b98b753c017635d0122b5840c63be8e055"} Mar 12 21:27:01.425094 master-0 kubenswrapper[31456]: I0312 21:27:01.424981 31456 scope.go:117] "RemoveContainer" containerID="a57efbd9fcc283ffbfbfd22a2e39e643722f7875a04d9b1ef627b20a4b992093" Mar 12 21:27:01.425838 master-0 kubenswrapper[31456]: I0312 21:27:01.425799 31456 scope.go:117] "RemoveContainer" containerID="25d851932f76492f77ec540e5fc888b98b753c017635d0122b5840c63be8e055" Mar 12 21:27:01.426153 master-0 kubenswrapper[31456]: E0312 21:27:01.426071 31456 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-68659c9b47-m44wq_openstack(33f0319b-6d84-4282-bbb5-9636e1b62647)\"" 
pod="openstack/ironic-neutron-agent-68659c9b47-m44wq" podUID="33f0319b-6d84-4282-bbb5-9636e1b62647" Mar 12 21:27:01.461425 master-0 kubenswrapper[31456]: I0312 21:27:01.449667 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da04713b-ad0b-4167-8fd7-59bbf482eff1-config-data" (OuterVolumeSpecName: "config-data") pod "da04713b-ad0b-4167-8fd7-59bbf482eff1" (UID: "da04713b-ad0b-4167-8fd7-59bbf482eff1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:27:01.461632 master-0 kubenswrapper[31456]: I0312 21:27:01.461528 31456 generic.go:334] "Generic (PLEG): container finished" podID="da04713b-ad0b-4167-8fd7-59bbf482eff1" containerID="a4df6024d86f50b06d0907fd422ea14b0d3e1da6cee9106e936d7664f2224edf" exitCode=143 Mar 12 21:27:01.461632 master-0 kubenswrapper[31456]: I0312 21:27:01.461586 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6fd7f8b47c-vnhs9" event={"ID":"da04713b-ad0b-4167-8fd7-59bbf482eff1","Type":"ContainerDied","Data":"a4df6024d86f50b06d0907fd422ea14b0d3e1da6cee9106e936d7664f2224edf"} Mar 12 21:27:01.461632 master-0 kubenswrapper[31456]: I0312 21:27:01.461615 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6fd7f8b47c-vnhs9" event={"ID":"da04713b-ad0b-4167-8fd7-59bbf482eff1","Type":"ContainerDied","Data":"32d84c0efd3ee96984444b6c7ec0c6dc3cc3e498eb21ba4dcc512c83b39d1d14"} Mar 12 21:27:01.481207 master-0 kubenswrapper[31456]: I0312 21:27:01.461695 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-6fd7f8b47c-vnhs9" Mar 12 21:27:01.481207 master-0 kubenswrapper[31456]: I0312 21:27:01.469226 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da04713b-ad0b-4167-8fd7-59bbf482eff1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da04713b-ad0b-4167-8fd7-59bbf482eff1" (UID: "da04713b-ad0b-4167-8fd7-59bbf482eff1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:27:01.486987 master-0 kubenswrapper[31456]: I0312 21:27:01.486932 31456 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da04713b-ad0b-4167-8fd7-59bbf482eff1-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:01.486987 master-0 kubenswrapper[31456]: I0312 21:27:01.486976 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da04713b-ad0b-4167-8fd7-59bbf482eff1-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:01.677831 master-0 kubenswrapper[31456]: I0312 21:27:01.669587 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-6ldcl"] Mar 12 21:27:01.686004 master-0 kubenswrapper[31456]: I0312 21:27:01.684981 31456 scope.go:117] "RemoveContainer" containerID="4c5219543a62de24861ea62e8d792fd8108c47ed818d078b611a5e3581272cc1" Mar 12 21:27:01.777148 master-0 kubenswrapper[31456]: I0312 21:27:01.777008 31456 scope.go:117] "RemoveContainer" containerID="a4df6024d86f50b06d0907fd422ea14b0d3e1da6cee9106e936d7664f2224edf" Mar 12 21:27:01.804136 master-0 kubenswrapper[31456]: I0312 21:27:01.804076 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7b7fc99fd8-pc4wq"] Mar 12 21:27:01.804373 master-0 kubenswrapper[31456]: I0312 21:27:01.804318 31456 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/neutron-7b7fc99fd8-pc4wq" podUID="2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0" containerName="neutron-api" containerID="cri-o://f4a0172384f033272e2a0a23a455d0f73b3a58630e7b76c5147f00a0b1cb6fe8" gracePeriod=30 Mar 12 21:27:01.819836 master-0 kubenswrapper[31456]: I0312 21:27:01.804769 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7b7fc99fd8-pc4wq" podUID="2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0" containerName="neutron-httpd" containerID="cri-o://f4238bf455a2a08c5c82da0b82cba6320522be3626dfeecaf288204b852636a7" gracePeriod=30 Mar 12 21:27:01.877507 master-0 kubenswrapper[31456]: I0312 21:27:01.870501 31456 scope.go:117] "RemoveContainer" containerID="15be59d41f6408fc4417554c9bcfb6301fdf3380a2adc46875649717696d63ae" Mar 12 21:27:01.934786 master-0 kubenswrapper[31456]: I0312 21:27:01.931567 31456 scope.go:117] "RemoveContainer" containerID="4c5219543a62de24861ea62e8d792fd8108c47ed818d078b611a5e3581272cc1" Mar 12 21:27:01.937227 master-0 kubenswrapper[31456]: I0312 21:27:01.937135 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 12 21:27:01.937440 master-0 kubenswrapper[31456]: E0312 21:27:01.937356 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c5219543a62de24861ea62e8d792fd8108c47ed818d078b611a5e3581272cc1\": container with ID starting with 4c5219543a62de24861ea62e8d792fd8108c47ed818d078b611a5e3581272cc1 not found: ID does not exist" containerID="4c5219543a62de24861ea62e8d792fd8108c47ed818d078b611a5e3581272cc1" Mar 12 21:27:01.937497 master-0 kubenswrapper[31456]: I0312 21:27:01.937442 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c5219543a62de24861ea62e8d792fd8108c47ed818d078b611a5e3581272cc1"} err="failed to get container status \"4c5219543a62de24861ea62e8d792fd8108c47ed818d078b611a5e3581272cc1\": rpc error: code = NotFound 
desc = could not find container \"4c5219543a62de24861ea62e8d792fd8108c47ed818d078b611a5e3581272cc1\": container with ID starting with 4c5219543a62de24861ea62e8d792fd8108c47ed818d078b611a5e3581272cc1 not found: ID does not exist" Mar 12 21:27:01.937497 master-0 kubenswrapper[31456]: I0312 21:27:01.937476 31456 scope.go:117] "RemoveContainer" containerID="a4df6024d86f50b06d0907fd422ea14b0d3e1da6cee9106e936d7664f2224edf" Mar 12 21:27:01.938526 master-0 kubenswrapper[31456]: E0312 21:27:01.938460 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4df6024d86f50b06d0907fd422ea14b0d3e1da6cee9106e936d7664f2224edf\": container with ID starting with a4df6024d86f50b06d0907fd422ea14b0d3e1da6cee9106e936d7664f2224edf not found: ID does not exist" containerID="a4df6024d86f50b06d0907fd422ea14b0d3e1da6cee9106e936d7664f2224edf" Mar 12 21:27:01.938596 master-0 kubenswrapper[31456]: I0312 21:27:01.938541 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4df6024d86f50b06d0907fd422ea14b0d3e1da6cee9106e936d7664f2224edf"} err="failed to get container status \"a4df6024d86f50b06d0907fd422ea14b0d3e1da6cee9106e936d7664f2224edf\": rpc error: code = NotFound desc = could not find container \"a4df6024d86f50b06d0907fd422ea14b0d3e1da6cee9106e936d7664f2224edf\": container with ID starting with a4df6024d86f50b06d0907fd422ea14b0d3e1da6cee9106e936d7664f2224edf not found: ID does not exist" Mar 12 21:27:01.938596 master-0 kubenswrapper[31456]: I0312 21:27:01.938592 31456 scope.go:117] "RemoveContainer" containerID="15be59d41f6408fc4417554c9bcfb6301fdf3380a2adc46875649717696d63ae" Mar 12 21:27:01.939086 master-0 kubenswrapper[31456]: E0312 21:27:01.939018 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15be59d41f6408fc4417554c9bcfb6301fdf3380a2adc46875649717696d63ae\": container with ID 
starting with 15be59d41f6408fc4417554c9bcfb6301fdf3380a2adc46875649717696d63ae not found: ID does not exist" containerID="15be59d41f6408fc4417554c9bcfb6301fdf3380a2adc46875649717696d63ae" Mar 12 21:27:01.939164 master-0 kubenswrapper[31456]: I0312 21:27:01.939070 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15be59d41f6408fc4417554c9bcfb6301fdf3380a2adc46875649717696d63ae"} err="failed to get container status \"15be59d41f6408fc4417554c9bcfb6301fdf3380a2adc46875649717696d63ae\": rpc error: code = NotFound desc = could not find container \"15be59d41f6408fc4417554c9bcfb6301fdf3380a2adc46875649717696d63ae\": container with ID starting with 15be59d41f6408fc4417554c9bcfb6301fdf3380a2adc46875649717696d63ae not found: ID does not exist" Mar 12 21:27:02.037173 master-0 kubenswrapper[31456]: I0312 21:27:02.036141 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-6fd7f8b47c-vnhs9"] Mar 12 21:27:02.047235 master-0 kubenswrapper[31456]: I0312 21:27:02.047152 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-6fd7f8b47c-vnhs9"] Mar 12 21:27:02.354537 master-0 kubenswrapper[31456]: I0312 21:27:02.354380 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-86c6bb594-knx75"] Mar 12 21:27:02.357873 master-0 kubenswrapper[31456]: E0312 21:27:02.355133 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da04713b-ad0b-4167-8fd7-59bbf482eff1" containerName="ironic-api-log" Mar 12 21:27:02.357873 master-0 kubenswrapper[31456]: I0312 21:27:02.355157 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="da04713b-ad0b-4167-8fd7-59bbf482eff1" containerName="ironic-api-log" Mar 12 21:27:02.357873 master-0 kubenswrapper[31456]: E0312 21:27:02.355177 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da04713b-ad0b-4167-8fd7-59bbf482eff1" containerName="init" Mar 12 21:27:02.357873 master-0 kubenswrapper[31456]: 
I0312 21:27:02.355185 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="da04713b-ad0b-4167-8fd7-59bbf482eff1" containerName="init" Mar 12 21:27:02.357873 master-0 kubenswrapper[31456]: E0312 21:27:02.355218 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da04713b-ad0b-4167-8fd7-59bbf482eff1" containerName="ironic-api" Mar 12 21:27:02.357873 master-0 kubenswrapper[31456]: I0312 21:27:02.355226 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="da04713b-ad0b-4167-8fd7-59bbf482eff1" containerName="ironic-api" Mar 12 21:27:02.357873 master-0 kubenswrapper[31456]: I0312 21:27:02.355457 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="da04713b-ad0b-4167-8fd7-59bbf482eff1" containerName="ironic-api" Mar 12 21:27:02.357873 master-0 kubenswrapper[31456]: I0312 21:27:02.355481 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="da04713b-ad0b-4167-8fd7-59bbf482eff1" containerName="ironic-api" Mar 12 21:27:02.357873 master-0 kubenswrapper[31456]: I0312 21:27:02.355506 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="da04713b-ad0b-4167-8fd7-59bbf482eff1" containerName="ironic-api-log" Mar 12 21:27:02.357873 master-0 kubenswrapper[31456]: E0312 21:27:02.355708 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da04713b-ad0b-4167-8fd7-59bbf482eff1" containerName="ironic-api" Mar 12 21:27:02.357873 master-0 kubenswrapper[31456]: I0312 21:27:02.355717 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="da04713b-ad0b-4167-8fd7-59bbf482eff1" containerName="ironic-api" Mar 12 21:27:02.357873 master-0 kubenswrapper[31456]: I0312 21:27:02.356618 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-86c6bb594-knx75" Mar 12 21:27:02.363508 master-0 kubenswrapper[31456]: I0312 21:27:02.363399 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 12 21:27:02.363662 master-0 kubenswrapper[31456]: I0312 21:27:02.363532 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Mar 12 21:27:02.364152 master-0 kubenswrapper[31456]: I0312 21:27:02.364116 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Mar 12 21:27:02.415088 master-0 kubenswrapper[31456]: I0312 21:27:02.415020 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-86c6bb594-knx75"] Mar 12 21:27:02.439825 master-0 kubenswrapper[31456]: I0312 21:27:02.439166 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddd2501c-2ca8-4845-970a-90983db6ae0b-combined-ca-bundle\") pod \"swift-proxy-86c6bb594-knx75\" (UID: \"ddd2501c-2ca8-4845-970a-90983db6ae0b\") " pod="openstack/swift-proxy-86c6bb594-knx75" Mar 12 21:27:02.439825 master-0 kubenswrapper[31456]: I0312 21:27:02.439232 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ddd2501c-2ca8-4845-970a-90983db6ae0b-log-httpd\") pod \"swift-proxy-86c6bb594-knx75\" (UID: \"ddd2501c-2ca8-4845-970a-90983db6ae0b\") " pod="openstack/swift-proxy-86c6bb594-knx75" Mar 12 21:27:02.439825 master-0 kubenswrapper[31456]: I0312 21:27:02.439259 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ddd2501c-2ca8-4845-970a-90983db6ae0b-etc-swift\") pod \"swift-proxy-86c6bb594-knx75\" (UID: \"ddd2501c-2ca8-4845-970a-90983db6ae0b\") " 
pod="openstack/swift-proxy-86c6bb594-knx75" Mar 12 21:27:02.439825 master-0 kubenswrapper[31456]: I0312 21:27:02.439296 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddd2501c-2ca8-4845-970a-90983db6ae0b-public-tls-certs\") pod \"swift-proxy-86c6bb594-knx75\" (UID: \"ddd2501c-2ca8-4845-970a-90983db6ae0b\") " pod="openstack/swift-proxy-86c6bb594-knx75" Mar 12 21:27:02.439825 master-0 kubenswrapper[31456]: I0312 21:27:02.439350 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr4wp\" (UniqueName: \"kubernetes.io/projected/ddd2501c-2ca8-4845-970a-90983db6ae0b-kube-api-access-lr4wp\") pod \"swift-proxy-86c6bb594-knx75\" (UID: \"ddd2501c-2ca8-4845-970a-90983db6ae0b\") " pod="openstack/swift-proxy-86c6bb594-knx75" Mar 12 21:27:02.439825 master-0 kubenswrapper[31456]: I0312 21:27:02.439370 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ddd2501c-2ca8-4845-970a-90983db6ae0b-run-httpd\") pod \"swift-proxy-86c6bb594-knx75\" (UID: \"ddd2501c-2ca8-4845-970a-90983db6ae0b\") " pod="openstack/swift-proxy-86c6bb594-knx75" Mar 12 21:27:02.439825 master-0 kubenswrapper[31456]: I0312 21:27:02.439397 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddd2501c-2ca8-4845-970a-90983db6ae0b-config-data\") pod \"swift-proxy-86c6bb594-knx75\" (UID: \"ddd2501c-2ca8-4845-970a-90983db6ae0b\") " pod="openstack/swift-proxy-86c6bb594-knx75" Mar 12 21:27:02.439825 master-0 kubenswrapper[31456]: I0312 21:27:02.439501 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ddd2501c-2ca8-4845-970a-90983db6ae0b-internal-tls-certs\") pod \"swift-proxy-86c6bb594-knx75\" (UID: \"ddd2501c-2ca8-4845-970a-90983db6ae0b\") " pod="openstack/swift-proxy-86c6bb594-knx75"
Mar 12 21:27:02.504834 master-0 kubenswrapper[31456]: I0312 21:27:02.490023 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"80ad53ea-17b7-4691-a8dc-865ebf143679","Type":"ContainerStarted","Data":"8f4a7f07368d4ac9b0c258a2a852b4bebfb8dfef047058505a557db00383761e"}
Mar 12 21:27:02.523823 master-0 kubenswrapper[31456]: I0312 21:27:02.521530 31456 generic.go:334] "Generic (PLEG): container finished" podID="2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0" containerID="f4238bf455a2a08c5c82da0b82cba6320522be3626dfeecaf288204b852636a7" exitCode=0
Mar 12 21:27:02.523823 master-0 kubenswrapper[31456]: I0312 21:27:02.521901 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b7fc99fd8-pc4wq" event={"ID":"2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0","Type":"ContainerDied","Data":"f4238bf455a2a08c5c82da0b82cba6320522be3626dfeecaf288204b852636a7"}
Mar 12 21:27:02.539823 master-0 kubenswrapper[31456]: I0312 21:27:02.533962 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-6ldcl" event={"ID":"3c8c121d-9b72-44d7-af67-27dd9476ba5e","Type":"ContainerStarted","Data":"9117748ddfab8fe9d7f7bd15e5409a14b72d296e554a2eece1d90d2b4b2e2444"}
Mar 12 21:27:02.551836 master-0 kubenswrapper[31456]: I0312 21:27:02.548251 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr4wp\" (UniqueName: \"kubernetes.io/projected/ddd2501c-2ca8-4845-970a-90983db6ae0b-kube-api-access-lr4wp\") pod \"swift-proxy-86c6bb594-knx75\" (UID: \"ddd2501c-2ca8-4845-970a-90983db6ae0b\") " pod="openstack/swift-proxy-86c6bb594-knx75"
Mar 12 21:27:02.551836 master-0 kubenswrapper[31456]: I0312 21:27:02.548339 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ddd2501c-2ca8-4845-970a-90983db6ae0b-run-httpd\") pod \"swift-proxy-86c6bb594-knx75\" (UID: \"ddd2501c-2ca8-4845-970a-90983db6ae0b\") " pod="openstack/swift-proxy-86c6bb594-knx75"
Mar 12 21:27:02.551836 master-0 kubenswrapper[31456]: I0312 21:27:02.548396 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddd2501c-2ca8-4845-970a-90983db6ae0b-config-data\") pod \"swift-proxy-86c6bb594-knx75\" (UID: \"ddd2501c-2ca8-4845-970a-90983db6ae0b\") " pod="openstack/swift-proxy-86c6bb594-knx75"
Mar 12 21:27:02.551836 master-0 kubenswrapper[31456]: I0312 21:27:02.548595 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddd2501c-2ca8-4845-970a-90983db6ae0b-internal-tls-certs\") pod \"swift-proxy-86c6bb594-knx75\" (UID: \"ddd2501c-2ca8-4845-970a-90983db6ae0b\") " pod="openstack/swift-proxy-86c6bb594-knx75"
Mar 12 21:27:02.551836 master-0 kubenswrapper[31456]: I0312 21:27:02.548663 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddd2501c-2ca8-4845-970a-90983db6ae0b-combined-ca-bundle\") pod \"swift-proxy-86c6bb594-knx75\" (UID: \"ddd2501c-2ca8-4845-970a-90983db6ae0b\") " pod="openstack/swift-proxy-86c6bb594-knx75"
Mar 12 21:27:02.551836 master-0 kubenswrapper[31456]: I0312 21:27:02.548693 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ddd2501c-2ca8-4845-970a-90983db6ae0b-log-httpd\") pod \"swift-proxy-86c6bb594-knx75\" (UID: \"ddd2501c-2ca8-4845-970a-90983db6ae0b\") " pod="openstack/swift-proxy-86c6bb594-knx75"
Mar 12 21:27:02.551836 master-0 kubenswrapper[31456]: I0312 21:27:02.548727 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ddd2501c-2ca8-4845-970a-90983db6ae0b-etc-swift\") pod \"swift-proxy-86c6bb594-knx75\" (UID: \"ddd2501c-2ca8-4845-970a-90983db6ae0b\") " pod="openstack/swift-proxy-86c6bb594-knx75"
Mar 12 21:27:02.551836 master-0 kubenswrapper[31456]: I0312 21:27:02.548768 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddd2501c-2ca8-4845-970a-90983db6ae0b-public-tls-certs\") pod \"swift-proxy-86c6bb594-knx75\" (UID: \"ddd2501c-2ca8-4845-970a-90983db6ae0b\") " pod="openstack/swift-proxy-86c6bb594-knx75"
Mar 12 21:27:02.551836 master-0 kubenswrapper[31456]: I0312 21:27:02.548865 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ddd2501c-2ca8-4845-970a-90983db6ae0b-run-httpd\") pod \"swift-proxy-86c6bb594-knx75\" (UID: \"ddd2501c-2ca8-4845-970a-90983db6ae0b\") " pod="openstack/swift-proxy-86c6bb594-knx75"
Mar 12 21:27:02.551836 master-0 kubenswrapper[31456]: I0312 21:27:02.549152 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ddd2501c-2ca8-4845-970a-90983db6ae0b-log-httpd\") pod \"swift-proxy-86c6bb594-knx75\" (UID: \"ddd2501c-2ca8-4845-970a-90983db6ae0b\") " pod="openstack/swift-proxy-86c6bb594-knx75"
Mar 12 21:27:02.565820 master-0 kubenswrapper[31456]: I0312 21:27:02.558709 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddd2501c-2ca8-4845-970a-90983db6ae0b-public-tls-certs\") pod \"swift-proxy-86c6bb594-knx75\" (UID: \"ddd2501c-2ca8-4845-970a-90983db6ae0b\") " pod="openstack/swift-proxy-86c6bb594-knx75"
Mar 12 21:27:02.565820 master-0 kubenswrapper[31456]: I0312 21:27:02.564566 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddd2501c-2ca8-4845-970a-90983db6ae0b-internal-tls-certs\") pod \"swift-proxy-86c6bb594-knx75\" (UID: \"ddd2501c-2ca8-4845-970a-90983db6ae0b\") " pod="openstack/swift-proxy-86c6bb594-knx75"
Mar 12 21:27:02.571823 master-0 kubenswrapper[31456]: I0312 21:27:02.570237 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddd2501c-2ca8-4845-970a-90983db6ae0b-combined-ca-bundle\") pod \"swift-proxy-86c6bb594-knx75\" (UID: \"ddd2501c-2ca8-4845-970a-90983db6ae0b\") " pod="openstack/swift-proxy-86c6bb594-knx75"
Mar 12 21:27:02.578827 master-0 kubenswrapper[31456]: I0312 21:27:02.574999 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddd2501c-2ca8-4845-970a-90983db6ae0b-config-data\") pod \"swift-proxy-86c6bb594-knx75\" (UID: \"ddd2501c-2ca8-4845-970a-90983db6ae0b\") " pod="openstack/swift-proxy-86c6bb594-knx75"
Mar 12 21:27:02.578827 master-0 kubenswrapper[31456]: I0312 21:27:02.576184 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ddd2501c-2ca8-4845-970a-90983db6ae0b-etc-swift\") pod \"swift-proxy-86c6bb594-knx75\" (UID: \"ddd2501c-2ca8-4845-970a-90983db6ae0b\") " pod="openstack/swift-proxy-86c6bb594-knx75"
Mar 12 21:27:02.610926 master-0 kubenswrapper[31456]: I0312 21:27:02.610713 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr4wp\" (UniqueName: \"kubernetes.io/projected/ddd2501c-2ca8-4845-970a-90983db6ae0b-kube-api-access-lr4wp\") pod \"swift-proxy-86c6bb594-knx75\" (UID: \"ddd2501c-2ca8-4845-970a-90983db6ae0b\") " pod="openstack/swift-proxy-86c6bb594-knx75"
Mar 12 21:27:02.688871 master-0 kubenswrapper[31456]: I0312 21:27:02.688792 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-86c6bb594-knx75"
Mar 12 21:27:03.209836 master-0 kubenswrapper[31456]: I0312 21:27:03.209696 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da04713b-ad0b-4167-8fd7-59bbf482eff1" path="/var/lib/kubelet/pods/da04713b-ad0b-4167-8fd7-59bbf482eff1/volumes"
Mar 12 21:27:03.278462 master-0 kubenswrapper[31456]: W0312 21:27:03.278413 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddd2501c_2ca8_4845_970a_90983db6ae0b.slice/crio-97113963680ef1f32460a7efac3fd6a4ce5330d59209ac5ff314c1a3fe559d9c WatchSource:0}: Error finding container 97113963680ef1f32460a7efac3fd6a4ce5330d59209ac5ff314c1a3fe559d9c: Status 404 returned error can't find the container with id 97113963680ef1f32460a7efac3fd6a4ce5330d59209ac5ff314c1a3fe559d9c
Mar 12 21:27:03.312632 master-0 kubenswrapper[31456]: I0312 21:27:03.312546 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-86c6bb594-knx75"]
Mar 12 21:27:03.564138 master-0 kubenswrapper[31456]: I0312 21:27:03.564047 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-86c6bb594-knx75" event={"ID":"ddd2501c-2ca8-4845-970a-90983db6ae0b","Type":"ContainerStarted","Data":"97113963680ef1f32460a7efac3fd6a4ce5330d59209ac5ff314c1a3fe559d9c"}
Mar 12 21:27:03.775266 master-0 kubenswrapper[31456]: I0312 21:27:03.775217 31456 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-neutron-agent-68659c9b47-m44wq"
Mar 12 21:27:03.775950 master-0 kubenswrapper[31456]: I0312 21:27:03.775920 31456 scope.go:117] "RemoveContainer" containerID="25d851932f76492f77ec540e5fc888b98b753c017635d0122b5840c63be8e055"
Mar 12 21:27:03.776227 master-0 kubenswrapper[31456]: E0312 21:27:03.776182 31456 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-68659c9b47-m44wq_openstack(33f0319b-6d84-4282-bbb5-9636e1b62647)\"" pod="openstack/ironic-neutron-agent-68659c9b47-m44wq" podUID="33f0319b-6d84-4282-bbb5-9636e1b62647"
Mar 12 21:27:04.579108 master-0 kubenswrapper[31456]: I0312 21:27:04.579011 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-86c6bb594-knx75" event={"ID":"ddd2501c-2ca8-4845-970a-90983db6ae0b","Type":"ContainerStarted","Data":"7eb8c15f0310fdcdd554ae39da0f329a6529aced2df1abde5646e8c96d97c03d"}
Mar 12 21:27:04.579108 master-0 kubenswrapper[31456]: I0312 21:27:04.579104 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-86c6bb594-knx75" event={"ID":"ddd2501c-2ca8-4845-970a-90983db6ae0b","Type":"ContainerStarted","Data":"9effd8f9a5734cf8ae5af6435a948d7b392e3df5f383373da3972ca41e3a37da"}
Mar 12 21:27:04.580141 master-0 kubenswrapper[31456]: I0312 21:27:04.580064 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-86c6bb594-knx75"
Mar 12 21:27:04.632218 master-0 kubenswrapper[31456]: I0312 21:27:04.632139 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-86c6bb594-knx75" podStartSLOduration=2.6321171960000003 podStartE2EDuration="2.632117196s" podCreationTimestamp="2026-03-12 21:27:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:27:04.628703873 +0000 UTC m=+1085.703309201" watchObservedRunningTime="2026-03-12 21:27:04.632117196 +0000 UTC m=+1085.706722524"
Mar 12 21:27:05.601105 master-0 kubenswrapper[31456]: I0312 21:27:05.601034 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-86c6bb594-knx75"
Mar 12 21:27:06.616499 master-0 kubenswrapper[31456]: I0312 21:27:06.616283 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-6ldcl" event={"ID":"3c8c121d-9b72-44d7-af67-27dd9476ba5e","Type":"ContainerStarted","Data":"4bf13a87501a4e74ea3717dd1bc1cd49dba74e2ca4d5f4858a8ce7b81e02d5ad"}
Mar 12 21:27:06.641981 master-0 kubenswrapper[31456]: I0312 21:27:06.641873 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-db-sync-6ldcl" podStartSLOduration=4.285337947 podStartE2EDuration="8.641846397s" podCreationTimestamp="2026-03-12 21:26:58 +0000 UTC" firstStartedPulling="2026-03-12 21:27:01.369628944 +0000 UTC m=+1082.444234272" lastFinishedPulling="2026-03-12 21:27:05.726137394 +0000 UTC m=+1086.800742722" observedRunningTime="2026-03-12 21:27:06.635564516 +0000 UTC m=+1087.710169844" watchObservedRunningTime="2026-03-12 21:27:06.641846397 +0000 UTC m=+1087.716451725"
Mar 12 21:27:09.672841 master-0 kubenswrapper[31456]: I0312 21:27:09.672704 31456 generic.go:334] "Generic (PLEG): container finished" podID="3c8c121d-9b72-44d7-af67-27dd9476ba5e" containerID="4bf13a87501a4e74ea3717dd1bc1cd49dba74e2ca4d5f4858a8ce7b81e02d5ad" exitCode=0
Mar 12 21:27:09.672841 master-0 kubenswrapper[31456]: I0312 21:27:09.672790 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-6ldcl" event={"ID":"3c8c121d-9b72-44d7-af67-27dd9476ba5e","Type":"ContainerDied","Data":"4bf13a87501a4e74ea3717dd1bc1cd49dba74e2ca4d5f4858a8ce7b81e02d5ad"}
Mar 12 21:27:12.701681 master-0 kubenswrapper[31456]: I0312 21:27:12.701473 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-86c6bb594-knx75"
Mar 12 21:27:12.702965 master-0 kubenswrapper[31456]: I0312 21:27:12.702906 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-86c6bb594-knx75"
Mar 12 21:27:15.905847 master-0 kubenswrapper[31456]: I0312 21:27:15.903991 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-wc97w"]
Mar 12 21:27:15.905847 master-0 kubenswrapper[31456]: I0312 21:27:15.905800 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wc97w"
Mar 12 21:27:15.920128 master-0 kubenswrapper[31456]: I0312 21:27:15.920030 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-wc97w"]
Mar 12 21:27:16.018957 master-0 kubenswrapper[31456]: I0312 21:27:16.018869 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-rhn2f"]
Mar 12 21:27:16.020497 master-0 kubenswrapper[31456]: I0312 21:27:16.020460 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rhn2f"
Mar 12 21:27:16.029543 master-0 kubenswrapper[31456]: I0312 21:27:16.029504 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-rhn2f"]
Mar 12 21:27:16.048014 master-0 kubenswrapper[31456]: I0312 21:27:16.047938 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b7fef8e-4472-45c9-9824-4a897ff1b1e3-operator-scripts\") pod \"nova-api-db-create-wc97w\" (UID: \"1b7fef8e-4472-45c9-9824-4a897ff1b1e3\") " pod="openstack/nova-api-db-create-wc97w"
Mar 12 21:27:16.051364 master-0 kubenswrapper[31456]: I0312 21:27:16.048074 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf58t\" (UniqueName: \"kubernetes.io/projected/1b7fef8e-4472-45c9-9824-4a897ff1b1e3-kube-api-access-zf58t\") pod \"nova-api-db-create-wc97w\" (UID: \"1b7fef8e-4472-45c9-9824-4a897ff1b1e3\") " pod="openstack/nova-api-db-create-wc97w"
Mar 12 21:27:16.108641 master-0 kubenswrapper[31456]: I0312 21:27:16.108581 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-a75d-account-create-update-nch4k"]
Mar 12 21:27:16.111254 master-0 kubenswrapper[31456]: I0312 21:27:16.111233 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a75d-account-create-update-nch4k"
Mar 12 21:27:16.116337 master-0 kubenswrapper[31456]: I0312 21:27:16.116284 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Mar 12 21:27:16.125618 master-0 kubenswrapper[31456]: I0312 21:27:16.125554 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a75d-account-create-update-nch4k"]
Mar 12 21:27:16.157585 master-0 kubenswrapper[31456]: I0312 21:27:16.157433 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zb8g\" (UniqueName: \"kubernetes.io/projected/c91b737e-1dc0-4977-8cc3-f36cde0b3031-kube-api-access-5zb8g\") pod \"nova-cell0-db-create-rhn2f\" (UID: \"c91b737e-1dc0-4977-8cc3-f36cde0b3031\") " pod="openstack/nova-cell0-db-create-rhn2f"
Mar 12 21:27:16.157974 master-0 kubenswrapper[31456]: I0312 21:27:16.157612 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b7fef8e-4472-45c9-9824-4a897ff1b1e3-operator-scripts\") pod \"nova-api-db-create-wc97w\" (UID: \"1b7fef8e-4472-45c9-9824-4a897ff1b1e3\") " pod="openstack/nova-api-db-create-wc97w"
Mar 12 21:27:16.157974 master-0 kubenswrapper[31456]: I0312 21:27:16.157738 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf58t\" (UniqueName: \"kubernetes.io/projected/1b7fef8e-4472-45c9-9824-4a897ff1b1e3-kube-api-access-zf58t\") pod \"nova-api-db-create-wc97w\" (UID: \"1b7fef8e-4472-45c9-9824-4a897ff1b1e3\") " pod="openstack/nova-api-db-create-wc97w"
Mar 12 21:27:16.157974 master-0 kubenswrapper[31456]: I0312 21:27:16.157880 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c91b737e-1dc0-4977-8cc3-f36cde0b3031-operator-scripts\") pod \"nova-cell0-db-create-rhn2f\" (UID: \"c91b737e-1dc0-4977-8cc3-f36cde0b3031\") " pod="openstack/nova-cell0-db-create-rhn2f"
Mar 12 21:27:16.160663 master-0 kubenswrapper[31456]: I0312 21:27:16.159294 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b7fef8e-4472-45c9-9824-4a897ff1b1e3-operator-scripts\") pod \"nova-api-db-create-wc97w\" (UID: \"1b7fef8e-4472-45c9-9824-4a897ff1b1e3\") " pod="openstack/nova-api-db-create-wc97w"
Mar 12 21:27:16.179017 master-0 kubenswrapper[31456]: I0312 21:27:16.177142 31456 scope.go:117] "RemoveContainer" containerID="25d851932f76492f77ec540e5fc888b98b753c017635d0122b5840c63be8e055"
Mar 12 21:27:16.218040 master-0 kubenswrapper[31456]: I0312 21:27:16.204914 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-zgqpq"]
Mar 12 21:27:16.218040 master-0 kubenswrapper[31456]: I0312 21:27:16.207446 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-zgqpq"
Mar 12 21:27:16.219641 master-0 kubenswrapper[31456]: I0312 21:27:16.219613 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf58t\" (UniqueName: \"kubernetes.io/projected/1b7fef8e-4472-45c9-9824-4a897ff1b1e3-kube-api-access-zf58t\") pod \"nova-api-db-create-wc97w\" (UID: \"1b7fef8e-4472-45c9-9824-4a897ff1b1e3\") " pod="openstack/nova-api-db-create-wc97w"
Mar 12 21:27:16.242457 master-0 kubenswrapper[31456]: I0312 21:27:16.242371 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-zgqpq"]
Mar 12 21:27:16.262840 master-0 kubenswrapper[31456]: I0312 21:27:16.262550 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c91b737e-1dc0-4977-8cc3-f36cde0b3031-operator-scripts\") pod \"nova-cell0-db-create-rhn2f\" (UID: \"c91b737e-1dc0-4977-8cc3-f36cde0b3031\") " pod="openstack/nova-cell0-db-create-rhn2f"
Mar 12 21:27:16.262840 master-0 kubenswrapper[31456]: I0312 21:27:16.262631 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zb8g\" (UniqueName: \"kubernetes.io/projected/c91b737e-1dc0-4977-8cc3-f36cde0b3031-kube-api-access-5zb8g\") pod \"nova-cell0-db-create-rhn2f\" (UID: \"c91b737e-1dc0-4977-8cc3-f36cde0b3031\") " pod="openstack/nova-cell0-db-create-rhn2f"
Mar 12 21:27:16.262840 master-0 kubenswrapper[31456]: I0312 21:27:16.262733 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmt79\" (UniqueName: \"kubernetes.io/projected/7cd45bf4-fd4f-4229-a6b9-d0433a367ee8-kube-api-access-vmt79\") pod \"nova-api-a75d-account-create-update-nch4k\" (UID: \"7cd45bf4-fd4f-4229-a6b9-d0433a367ee8\") " pod="openstack/nova-api-a75d-account-create-update-nch4k"
Mar 12 21:27:16.262840 master-0 kubenswrapper[31456]: I0312 21:27:16.262771 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cd45bf4-fd4f-4229-a6b9-d0433a367ee8-operator-scripts\") pod \"nova-api-a75d-account-create-update-nch4k\" (UID: \"7cd45bf4-fd4f-4229-a6b9-d0433a367ee8\") " pod="openstack/nova-api-a75d-account-create-update-nch4k"
Mar 12 21:27:16.265607 master-0 kubenswrapper[31456]: I0312 21:27:16.265386 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c91b737e-1dc0-4977-8cc3-f36cde0b3031-operator-scripts\") pod \"nova-cell0-db-create-rhn2f\" (UID: \"c91b737e-1dc0-4977-8cc3-f36cde0b3031\") " pod="openstack/nova-cell0-db-create-rhn2f"
Mar 12 21:27:16.292365 master-0 kubenswrapper[31456]: I0312 21:27:16.292323 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zb8g\" (UniqueName: \"kubernetes.io/projected/c91b737e-1dc0-4977-8cc3-f36cde0b3031-kube-api-access-5zb8g\") pod \"nova-cell0-db-create-rhn2f\" (UID: \"c91b737e-1dc0-4977-8cc3-f36cde0b3031\") " pod="openstack/nova-cell0-db-create-rhn2f"
Mar 12 21:27:16.298059 master-0 kubenswrapper[31456]: I0312 21:27:16.297021 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wc97w"
Mar 12 21:27:16.306820 master-0 kubenswrapper[31456]: I0312 21:27:16.306741 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-5fda-account-create-update-jj52w"]
Mar 12 21:27:16.309240 master-0 kubenswrapper[31456]: I0312 21:27:16.309200 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5fda-account-create-update-jj52w"
Mar 12 21:27:16.315602 master-0 kubenswrapper[31456]: I0312 21:27:16.315574 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Mar 12 21:27:16.328323 master-0 kubenswrapper[31456]: I0312 21:27:16.325956 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5fda-account-create-update-jj52w"]
Mar 12 21:27:16.346760 master-0 kubenswrapper[31456]: I0312 21:27:16.346655 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rhn2f"
Mar 12 21:27:16.371533 master-0 kubenswrapper[31456]: I0312 21:27:16.371467 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56b88bd7-c930-40cf-ab94-806f32d82a96-operator-scripts\") pod \"nova-cell1-db-create-zgqpq\" (UID: \"56b88bd7-c930-40cf-ab94-806f32d82a96\") " pod="openstack/nova-cell1-db-create-zgqpq"
Mar 12 21:27:16.371695 master-0 kubenswrapper[31456]: I0312 21:27:16.371564 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmt79\" (UniqueName: \"kubernetes.io/projected/7cd45bf4-fd4f-4229-a6b9-d0433a367ee8-kube-api-access-vmt79\") pod \"nova-api-a75d-account-create-update-nch4k\" (UID: \"7cd45bf4-fd4f-4229-a6b9-d0433a367ee8\") " pod="openstack/nova-api-a75d-account-create-update-nch4k"
Mar 12 21:27:16.371695 master-0 kubenswrapper[31456]: I0312 21:27:16.371600 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cd45bf4-fd4f-4229-a6b9-d0433a367ee8-operator-scripts\") pod \"nova-api-a75d-account-create-update-nch4k\" (UID: \"7cd45bf4-fd4f-4229-a6b9-d0433a367ee8\") " pod="openstack/nova-api-a75d-account-create-update-nch4k"
Mar 12 21:27:16.371695 master-0 kubenswrapper[31456]: I0312 21:27:16.371678 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d449q\" (UniqueName: \"kubernetes.io/projected/56b88bd7-c930-40cf-ab94-806f32d82a96-kube-api-access-d449q\") pod \"nova-cell1-db-create-zgqpq\" (UID: \"56b88bd7-c930-40cf-ab94-806f32d82a96\") " pod="openstack/nova-cell1-db-create-zgqpq"
Mar 12 21:27:16.378008 master-0 kubenswrapper[31456]: I0312 21:27:16.377942 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cd45bf4-fd4f-4229-a6b9-d0433a367ee8-operator-scripts\") pod \"nova-api-a75d-account-create-update-nch4k\" (UID: \"7cd45bf4-fd4f-4229-a6b9-d0433a367ee8\") " pod="openstack/nova-api-a75d-account-create-update-nch4k"
Mar 12 21:27:16.388877 master-0 kubenswrapper[31456]: I0312 21:27:16.388826 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmt79\" (UniqueName: \"kubernetes.io/projected/7cd45bf4-fd4f-4229-a6b9-d0433a367ee8-kube-api-access-vmt79\") pod \"nova-api-a75d-account-create-update-nch4k\" (UID: \"7cd45bf4-fd4f-4229-a6b9-d0433a367ee8\") " pod="openstack/nova-api-a75d-account-create-update-nch4k"
Mar 12 21:27:16.437577 master-0 kubenswrapper[31456]: I0312 21:27:16.437171 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a75d-account-create-update-nch4k"
Mar 12 21:27:16.487831 master-0 kubenswrapper[31456]: I0312 21:27:16.476331 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56b88bd7-c930-40cf-ab94-806f32d82a96-operator-scripts\") pod \"nova-cell1-db-create-zgqpq\" (UID: \"56b88bd7-c930-40cf-ab94-806f32d82a96\") " pod="openstack/nova-cell1-db-create-zgqpq"
Mar 12 21:27:16.487831 master-0 kubenswrapper[31456]: I0312 21:27:16.476496 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d449q\" (UniqueName: \"kubernetes.io/projected/56b88bd7-c930-40cf-ab94-806f32d82a96-kube-api-access-d449q\") pod \"nova-cell1-db-create-zgqpq\" (UID: \"56b88bd7-c930-40cf-ab94-806f32d82a96\") " pod="openstack/nova-cell1-db-create-zgqpq"
Mar 12 21:27:16.487831 master-0 kubenswrapper[31456]: I0312 21:27:16.476525 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94csr\" (UniqueName: \"kubernetes.io/projected/31856960-9d64-482a-b18d-3cb7ebc781d7-kube-api-access-94csr\") pod \"nova-cell0-5fda-account-create-update-jj52w\" (UID: \"31856960-9d64-482a-b18d-3cb7ebc781d7\") " pod="openstack/nova-cell0-5fda-account-create-update-jj52w"
Mar 12 21:27:16.487831 master-0 kubenswrapper[31456]: I0312 21:27:16.476612 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31856960-9d64-482a-b18d-3cb7ebc781d7-operator-scripts\") pod \"nova-cell0-5fda-account-create-update-jj52w\" (UID: \"31856960-9d64-482a-b18d-3cb7ebc781d7\") " pod="openstack/nova-cell0-5fda-account-create-update-jj52w"
Mar 12 21:27:16.487831 master-0 kubenswrapper[31456]: I0312 21:27:16.477170 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56b88bd7-c930-40cf-ab94-806f32d82a96-operator-scripts\") pod \"nova-cell1-db-create-zgqpq\" (UID: \"56b88bd7-c930-40cf-ab94-806f32d82a96\") " pod="openstack/nova-cell1-db-create-zgqpq"
Mar 12 21:27:16.498856 master-0 kubenswrapper[31456]: I0312 21:27:16.491414 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-1f7d-account-create-update-ckqfv"]
Mar 12 21:27:16.498856 master-0 kubenswrapper[31456]: I0312 21:27:16.492979 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1f7d-account-create-update-ckqfv"
Mar 12 21:27:16.498856 master-0 kubenswrapper[31456]: I0312 21:27:16.495616 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Mar 12 21:27:16.507783 master-0 kubenswrapper[31456]: I0312 21:27:16.507480 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d449q\" (UniqueName: \"kubernetes.io/projected/56b88bd7-c930-40cf-ab94-806f32d82a96-kube-api-access-d449q\") pod \"nova-cell1-db-create-zgqpq\" (UID: \"56b88bd7-c930-40cf-ab94-806f32d82a96\") " pod="openstack/nova-cell1-db-create-zgqpq"
Mar 12 21:27:16.511612 master-0 kubenswrapper[31456]: I0312 21:27:16.511554 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-1f7d-account-create-update-ckqfv"]
Mar 12 21:27:16.578286 master-0 kubenswrapper[31456]: I0312 21:27:16.578239 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31856960-9d64-482a-b18d-3cb7ebc781d7-operator-scripts\") pod \"nova-cell0-5fda-account-create-update-jj52w\" (UID: \"31856960-9d64-482a-b18d-3cb7ebc781d7\") " pod="openstack/nova-cell0-5fda-account-create-update-jj52w"
Mar 12 21:27:16.578377 master-0 kubenswrapper[31456]: I0312 21:27:16.578340 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7a2a5e6-44b0-4a11-bdd7-a5153a809b8b-operator-scripts\") pod \"nova-cell1-1f7d-account-create-update-ckqfv\" (UID: \"c7a2a5e6-44b0-4a11-bdd7-a5153a809b8b\") " pod="openstack/nova-cell1-1f7d-account-create-update-ckqfv"
Mar 12 21:27:16.578415 master-0 kubenswrapper[31456]: I0312 21:27:16.578374 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4fdk\" (UniqueName: \"kubernetes.io/projected/c7a2a5e6-44b0-4a11-bdd7-a5153a809b8b-kube-api-access-v4fdk\") pod \"nova-cell1-1f7d-account-create-update-ckqfv\" (UID: \"c7a2a5e6-44b0-4a11-bdd7-a5153a809b8b\") " pod="openstack/nova-cell1-1f7d-account-create-update-ckqfv"
Mar 12 21:27:16.578522 master-0 kubenswrapper[31456]: I0312 21:27:16.578494 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94csr\" (UniqueName: \"kubernetes.io/projected/31856960-9d64-482a-b18d-3cb7ebc781d7-kube-api-access-94csr\") pod \"nova-cell0-5fda-account-create-update-jj52w\" (UID: \"31856960-9d64-482a-b18d-3cb7ebc781d7\") " pod="openstack/nova-cell0-5fda-account-create-update-jj52w"
Mar 12 21:27:16.579655 master-0 kubenswrapper[31456]: I0312 21:27:16.579624 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31856960-9d64-482a-b18d-3cb7ebc781d7-operator-scripts\") pod \"nova-cell0-5fda-account-create-update-jj52w\" (UID: \"31856960-9d64-482a-b18d-3cb7ebc781d7\") " pod="openstack/nova-cell0-5fda-account-create-update-jj52w"
Mar 12 21:27:16.581391 master-0 kubenswrapper[31456]: I0312 21:27:16.581356 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-zgqpq"
Mar 12 21:27:16.595406 master-0 kubenswrapper[31456]: I0312 21:27:16.595362 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94csr\" (UniqueName: \"kubernetes.io/projected/31856960-9d64-482a-b18d-3cb7ebc781d7-kube-api-access-94csr\") pod \"nova-cell0-5fda-account-create-update-jj52w\" (UID: \"31856960-9d64-482a-b18d-3cb7ebc781d7\") " pod="openstack/nova-cell0-5fda-account-create-update-jj52w"
Mar 12 21:27:16.638233 master-0 kubenswrapper[31456]: I0312 21:27:16.638201 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-6ldcl"
Mar 12 21:27:16.680234 master-0 kubenswrapper[31456]: I0312 21:27:16.680119 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7a2a5e6-44b0-4a11-bdd7-a5153a809b8b-operator-scripts\") pod \"nova-cell1-1f7d-account-create-update-ckqfv\" (UID: \"c7a2a5e6-44b0-4a11-bdd7-a5153a809b8b\") " pod="openstack/nova-cell1-1f7d-account-create-update-ckqfv"
Mar 12 21:27:16.680234 master-0 kubenswrapper[31456]: I0312 21:27:16.680180 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4fdk\" (UniqueName: \"kubernetes.io/projected/c7a2a5e6-44b0-4a11-bdd7-a5153a809b8b-kube-api-access-v4fdk\") pod \"nova-cell1-1f7d-account-create-update-ckqfv\" (UID: \"c7a2a5e6-44b0-4a11-bdd7-a5153a809b8b\") " pod="openstack/nova-cell1-1f7d-account-create-update-ckqfv"
Mar 12 21:27:16.681184 master-0 kubenswrapper[31456]: I0312 21:27:16.681143 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7a2a5e6-44b0-4a11-bdd7-a5153a809b8b-operator-scripts\") pod \"nova-cell1-1f7d-account-create-update-ckqfv\" (UID: \"c7a2a5e6-44b0-4a11-bdd7-a5153a809b8b\") " pod="openstack/nova-cell1-1f7d-account-create-update-ckqfv"
Mar 12 21:27:16.688472 master-0 kubenswrapper[31456]: I0312 21:27:16.688200 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5fda-account-create-update-jj52w"
Mar 12 21:27:16.702177 master-0 kubenswrapper[31456]: I0312 21:27:16.702129 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4fdk\" (UniqueName: \"kubernetes.io/projected/c7a2a5e6-44b0-4a11-bdd7-a5153a809b8b-kube-api-access-v4fdk\") pod \"nova-cell1-1f7d-account-create-update-ckqfv\" (UID: \"c7a2a5e6-44b0-4a11-bdd7-a5153a809b8b\") " pod="openstack/nova-cell1-1f7d-account-create-update-ckqfv"
Mar 12 21:27:16.781834 master-0 kubenswrapper[31456]: I0312 21:27:16.781269 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c8c121d-9b72-44d7-af67-27dd9476ba5e-combined-ca-bundle\") pod \"3c8c121d-9b72-44d7-af67-27dd9476ba5e\" (UID: \"3c8c121d-9b72-44d7-af67-27dd9476ba5e\") "
Mar 12 21:27:16.781834 master-0 kubenswrapper[31456]: I0312 21:27:16.781336 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/3c8c121d-9b72-44d7-af67-27dd9476ba5e-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"3c8c121d-9b72-44d7-af67-27dd9476ba5e\" (UID: \"3c8c121d-9b72-44d7-af67-27dd9476ba5e\") "
Mar 12 21:27:16.781834 master-0 kubenswrapper[31456]: I0312 21:27:16.781426 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3c8c121d-9b72-44d7-af67-27dd9476ba5e-config\") pod \"3c8c121d-9b72-44d7-af67-27dd9476ba5e\" (UID: \"3c8c121d-9b72-44d7-af67-27dd9476ba5e\") "
Mar 12 21:27:16.781834 master-0 kubenswrapper[31456]: I0312 21:27:16.781595 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/3c8c121d-9b72-44d7-af67-27dd9476ba5e-var-lib-ironic\") pod \"3c8c121d-9b72-44d7-af67-27dd9476ba5e\" (UID: \"3c8c121d-9b72-44d7-af67-27dd9476ba5e\") "
Mar 12 21:27:16.781834 master-0 kubenswrapper[31456]: I0312 21:27:16.781654 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55kcp\" (UniqueName: \"kubernetes.io/projected/3c8c121d-9b72-44d7-af67-27dd9476ba5e-kube-api-access-55kcp\") pod \"3c8c121d-9b72-44d7-af67-27dd9476ba5e\" (UID: \"3c8c121d-9b72-44d7-af67-27dd9476ba5e\") "
Mar 12 21:27:16.781834 master-0 kubenswrapper[31456]: I0312 21:27:16.781676 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c8c121d-9b72-44d7-af67-27dd9476ba5e-scripts\") pod \"3c8c121d-9b72-44d7-af67-27dd9476ba5e\" (UID: \"3c8c121d-9b72-44d7-af67-27dd9476ba5e\") "
Mar 12 21:27:16.781834 master-0 kubenswrapper[31456]: I0312 21:27:16.781765 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/3c8c121d-9b72-44d7-af67-27dd9476ba5e-etc-podinfo\") pod \"3c8c121d-9b72-44d7-af67-27dd9476ba5e\" (UID: \"3c8c121d-9b72-44d7-af67-27dd9476ba5e\") "
Mar 12 21:27:16.786267 master-0 kubenswrapper[31456]: I0312 21:27:16.786004 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c8c121d-9b72-44d7-af67-27dd9476ba5e-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "3c8c121d-9b72-44d7-af67-27dd9476ba5e" (UID: "3c8c121d-9b72-44d7-af67-27dd9476ba5e"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 21:27:16.786947 master-0 kubenswrapper[31456]: I0312 21:27:16.786562 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c8c121d-9b72-44d7-af67-27dd9476ba5e-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "3c8c121d-9b72-44d7-af67-27dd9476ba5e" (UID: "3c8c121d-9b72-44d7-af67-27dd9476ba5e"). InnerVolumeSpecName "var-lib-ironic". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 21:27:16.786947 master-0 kubenswrapper[31456]: I0312 21:27:16.786896 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c8c121d-9b72-44d7-af67-27dd9476ba5e-kube-api-access-55kcp" (OuterVolumeSpecName: "kube-api-access-55kcp") pod "3c8c121d-9b72-44d7-af67-27dd9476ba5e" (UID: "3c8c121d-9b72-44d7-af67-27dd9476ba5e"). InnerVolumeSpecName "kube-api-access-55kcp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 21:27:16.791245 master-0 kubenswrapper[31456]: I0312 21:27:16.791001 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/3c8c121d-9b72-44d7-af67-27dd9476ba5e-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "3c8c121d-9b72-44d7-af67-27dd9476ba5e" (UID: "3c8c121d-9b72-44d7-af67-27dd9476ba5e"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Mar 12 21:27:16.803096 master-0 kubenswrapper[31456]: I0312 21:27:16.802993 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c8c121d-9b72-44d7-af67-27dd9476ba5e-scripts" (OuterVolumeSpecName: "scripts") pod "3c8c121d-9b72-44d7-af67-27dd9476ba5e" (UID: "3c8c121d-9b72-44d7-af67-27dd9476ba5e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:27:16.844837 master-0 kubenswrapper[31456]: I0312 21:27:16.836513 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c8c121d-9b72-44d7-af67-27dd9476ba5e-config" (OuterVolumeSpecName: "config") pod "3c8c121d-9b72-44d7-af67-27dd9476ba5e" (UID: "3c8c121d-9b72-44d7-af67-27dd9476ba5e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:27:16.844837 master-0 kubenswrapper[31456]: I0312 21:27:16.839351 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c8c121d-9b72-44d7-af67-27dd9476ba5e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c8c121d-9b72-44d7-af67-27dd9476ba5e" (UID: "3c8c121d-9b72-44d7-af67-27dd9476ba5e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:27:16.860127 master-0 kubenswrapper[31456]: I0312 21:27:16.853582 31456 generic.go:334] "Generic (PLEG): container finished" podID="2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0" containerID="f4a0172384f033272e2a0a23a455d0f73b3a58630e7b76c5147f00a0b1cb6fe8" exitCode=0
Mar 12 21:27:16.860127 master-0 kubenswrapper[31456]: I0312 21:27:16.853669 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b7fc99fd8-pc4wq" event={"ID":"2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0","Type":"ContainerDied","Data":"f4a0172384f033272e2a0a23a455d0f73b3a58630e7b76c5147f00a0b1cb6fe8"}
Mar 12 21:27:16.865831 master-0 kubenswrapper[31456]: I0312 21:27:16.860594 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-sync-6ldcl" Mar 12 21:27:16.865831 master-0 kubenswrapper[31456]: I0312 21:27:16.860619 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-6ldcl" event={"ID":"3c8c121d-9b72-44d7-af67-27dd9476ba5e","Type":"ContainerDied","Data":"9117748ddfab8fe9d7f7bd15e5409a14b72d296e554a2eece1d90d2b4b2e2444"} Mar 12 21:27:16.865831 master-0 kubenswrapper[31456]: I0312 21:27:16.860680 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9117748ddfab8fe9d7f7bd15e5409a14b72d296e554a2eece1d90d2b4b2e2444" Mar 12 21:27:16.882766 master-0 kubenswrapper[31456]: I0312 21:27:16.882681 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1f7d-account-create-update-ckqfv" Mar 12 21:27:16.893856 master-0 kubenswrapper[31456]: I0312 21:27:16.888121 31456 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/3c8c121d-9b72-44d7-af67-27dd9476ba5e-var-lib-ironic\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:16.893856 master-0 kubenswrapper[31456]: I0312 21:27:16.888151 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55kcp\" (UniqueName: \"kubernetes.io/projected/3c8c121d-9b72-44d7-af67-27dd9476ba5e-kube-api-access-55kcp\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:16.893856 master-0 kubenswrapper[31456]: I0312 21:27:16.888177 31456 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c8c121d-9b72-44d7-af67-27dd9476ba5e-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:16.893856 master-0 kubenswrapper[31456]: I0312 21:27:16.888194 31456 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/3c8c121d-9b72-44d7-af67-27dd9476ba5e-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Mar 12 
21:27:16.893856 master-0 kubenswrapper[31456]: I0312 21:27:16.888206 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c8c121d-9b72-44d7-af67-27dd9476ba5e-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:16.893856 master-0 kubenswrapper[31456]: I0312 21:27:16.888220 31456 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/3c8c121d-9b72-44d7-af67-27dd9476ba5e-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:16.893856 master-0 kubenswrapper[31456]: I0312 21:27:16.888240 31456 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3c8c121d-9b72-44d7-af67-27dd9476ba5e-config\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:18.004062 master-0 kubenswrapper[31456]: I0312 21:27:18.003968 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7df6b6dd9d-tfn65" Mar 12 21:27:18.010601 master-0 kubenswrapper[31456]: I0312 21:27:18.010527 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7df6b6dd9d-tfn65" Mar 12 21:27:18.194008 master-0 kubenswrapper[31456]: I0312 21:27:18.191743 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-c76b45676-rfhd9"] Mar 12 21:27:18.194008 master-0 kubenswrapper[31456]: I0312 21:27:18.192050 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-c76b45676-rfhd9" podUID="205534d7-c857-4999-8352-af039951ce48" containerName="placement-log" containerID="cri-o://3f751c249ba0054b38eacd67b6f5916bd4354af3bd74b44420200444714551c9" gracePeriod=30 Mar 12 21:27:18.200737 master-0 kubenswrapper[31456]: I0312 21:27:18.196995 31456 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/placement-c76b45676-rfhd9" podUID="205534d7-c857-4999-8352-af039951ce48" containerName="placement-api" containerID="cri-o://1f48d28ae3c63c4e8d566362287a7917c8e5cd496a34ef1f771eb022fd9c7ae7" gracePeriod=30 Mar 12 21:27:18.827470 master-0 kubenswrapper[31456]: I0312 21:27:18.827261 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56cf4b4989-2cwl5"] Mar 12 21:27:18.827944 master-0 kubenswrapper[31456]: E0312 21:27:18.827831 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c8c121d-9b72-44d7-af67-27dd9476ba5e" containerName="ironic-inspector-db-sync" Mar 12 21:27:18.827944 master-0 kubenswrapper[31456]: I0312 21:27:18.827852 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c8c121d-9b72-44d7-af67-27dd9476ba5e" containerName="ironic-inspector-db-sync" Mar 12 21:27:18.830131 master-0 kubenswrapper[31456]: I0312 21:27:18.828151 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c8c121d-9b72-44d7-af67-27dd9476ba5e" containerName="ironic-inspector-db-sync" Mar 12 21:27:18.830226 master-0 kubenswrapper[31456]: I0312 21:27:18.830154 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56cf4b4989-2cwl5" Mar 12 21:27:18.899485 master-0 kubenswrapper[31456]: I0312 21:27:18.898796 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56cf4b4989-2cwl5"] Mar 12 21:27:18.934478 master-0 kubenswrapper[31456]: I0312 21:27:18.934419 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"] Mar 12 21:27:18.947465 master-0 kubenswrapper[31456]: I0312 21:27:18.943969 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Mar 12 21:27:18.947465 master-0 kubenswrapper[31456]: I0312 21:27:18.947193 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport" Mar 12 21:27:18.947465 master-0 kubenswrapper[31456]: I0312 21:27:18.947309 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Mar 12 21:27:18.947746 master-0 kubenswrapper[31456]: I0312 21:27:18.947603 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Mar 12 21:27:18.964624 master-0 kubenswrapper[31456]: I0312 21:27:18.959609 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppr99\" (UniqueName: \"kubernetes.io/projected/b41a87ae-50a2-4490-891e-99a17d655797-kube-api-access-ppr99\") pod \"dnsmasq-dns-56cf4b4989-2cwl5\" (UID: \"b41a87ae-50a2-4490-891e-99a17d655797\") " pod="openstack/dnsmasq-dns-56cf4b4989-2cwl5" Mar 12 21:27:18.965315 master-0 kubenswrapper[31456]: I0312 21:27:18.965280 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b41a87ae-50a2-4490-891e-99a17d655797-ovsdbserver-sb\") pod \"dnsmasq-dns-56cf4b4989-2cwl5\" (UID: \"b41a87ae-50a2-4490-891e-99a17d655797\") " pod="openstack/dnsmasq-dns-56cf4b4989-2cwl5" Mar 12 21:27:18.965467 master-0 kubenswrapper[31456]: I0312 21:27:18.965451 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b41a87ae-50a2-4490-891e-99a17d655797-ovsdbserver-nb\") pod \"dnsmasq-dns-56cf4b4989-2cwl5\" (UID: \"b41a87ae-50a2-4490-891e-99a17d655797\") " pod="openstack/dnsmasq-dns-56cf4b4989-2cwl5" Mar 12 21:27:18.965628 master-0 kubenswrapper[31456]: I0312 21:27:18.965609 31456 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b41a87ae-50a2-4490-891e-99a17d655797-dns-swift-storage-0\") pod \"dnsmasq-dns-56cf4b4989-2cwl5\" (UID: \"b41a87ae-50a2-4490-891e-99a17d655797\") " pod="openstack/dnsmasq-dns-56cf4b4989-2cwl5" Mar 12 21:27:18.968173 master-0 kubenswrapper[31456]: I0312 21:27:18.968129 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b41a87ae-50a2-4490-891e-99a17d655797-dns-svc\") pod \"dnsmasq-dns-56cf4b4989-2cwl5\" (UID: \"b41a87ae-50a2-4490-891e-99a17d655797\") " pod="openstack/dnsmasq-dns-56cf4b4989-2cwl5" Mar 12 21:27:18.968790 master-0 kubenswrapper[31456]: I0312 21:27:18.968773 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b41a87ae-50a2-4490-891e-99a17d655797-config\") pod \"dnsmasq-dns-56cf4b4989-2cwl5\" (UID: \"b41a87ae-50a2-4490-891e-99a17d655797\") " pod="openstack/dnsmasq-dns-56cf4b4989-2cwl5" Mar 12 21:27:18.992861 master-0 kubenswrapper[31456]: I0312 21:27:18.992404 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Mar 12 21:27:19.071171 master-0 kubenswrapper[31456]: I0312 21:27:19.071115 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b41a87ae-50a2-4490-891e-99a17d655797-config\") pod \"dnsmasq-dns-56cf4b4989-2cwl5\" (UID: \"b41a87ae-50a2-4490-891e-99a17d655797\") " pod="openstack/dnsmasq-dns-56cf4b4989-2cwl5" Mar 12 21:27:19.071728 master-0 kubenswrapper[31456]: I0312 21:27:19.071233 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppr99\" (UniqueName: 
\"kubernetes.io/projected/b41a87ae-50a2-4490-891e-99a17d655797-kube-api-access-ppr99\") pod \"dnsmasq-dns-56cf4b4989-2cwl5\" (UID: \"b41a87ae-50a2-4490-891e-99a17d655797\") " pod="openstack/dnsmasq-dns-56cf4b4989-2cwl5" Mar 12 21:27:19.071728 master-0 kubenswrapper[31456]: I0312 21:27:19.071265 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/df574917-39ab-4063-ab80-d42902865c20-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"df574917-39ab-4063-ab80-d42902865c20\") " pod="openstack/ironic-inspector-0" Mar 12 21:27:19.071728 master-0 kubenswrapper[31456]: I0312 21:27:19.071288 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/df574917-39ab-4063-ab80-d42902865c20-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"df574917-39ab-4063-ab80-d42902865c20\") " pod="openstack/ironic-inspector-0" Mar 12 21:27:19.071728 master-0 kubenswrapper[31456]: I0312 21:27:19.071305 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df574917-39ab-4063-ab80-d42902865c20-scripts\") pod \"ironic-inspector-0\" (UID: \"df574917-39ab-4063-ab80-d42902865c20\") " pod="openstack/ironic-inspector-0" Mar 12 21:27:19.071728 master-0 kubenswrapper[31456]: I0312 21:27:19.071328 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnd6k\" (UniqueName: \"kubernetes.io/projected/df574917-39ab-4063-ab80-d42902865c20-kube-api-access-wnd6k\") pod \"ironic-inspector-0\" (UID: \"df574917-39ab-4063-ab80-d42902865c20\") " pod="openstack/ironic-inspector-0" Mar 12 21:27:19.071728 master-0 kubenswrapper[31456]: I0312 21:27:19.071536 31456 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b41a87ae-50a2-4490-891e-99a17d655797-ovsdbserver-sb\") pod \"dnsmasq-dns-56cf4b4989-2cwl5\" (UID: \"b41a87ae-50a2-4490-891e-99a17d655797\") " pod="openstack/dnsmasq-dns-56cf4b4989-2cwl5" Mar 12 21:27:19.071728 master-0 kubenswrapper[31456]: I0312 21:27:19.071590 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b41a87ae-50a2-4490-891e-99a17d655797-ovsdbserver-nb\") pod \"dnsmasq-dns-56cf4b4989-2cwl5\" (UID: \"b41a87ae-50a2-4490-891e-99a17d655797\") " pod="openstack/dnsmasq-dns-56cf4b4989-2cwl5" Mar 12 21:27:19.071728 master-0 kubenswrapper[31456]: I0312 21:27:19.071629 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b41a87ae-50a2-4490-891e-99a17d655797-dns-swift-storage-0\") pod \"dnsmasq-dns-56cf4b4989-2cwl5\" (UID: \"b41a87ae-50a2-4490-891e-99a17d655797\") " pod="openstack/dnsmasq-dns-56cf4b4989-2cwl5" Mar 12 21:27:19.071728 master-0 kubenswrapper[31456]: I0312 21:27:19.071691 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df574917-39ab-4063-ab80-d42902865c20-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"df574917-39ab-4063-ab80-d42902865c20\") " pod="openstack/ironic-inspector-0" Mar 12 21:27:19.071728 master-0 kubenswrapper[31456]: I0312 21:27:19.071722 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/df574917-39ab-4063-ab80-d42902865c20-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"df574917-39ab-4063-ab80-d42902865c20\") " pod="openstack/ironic-inspector-0" Mar 12 21:27:19.072104 master-0 kubenswrapper[31456]: I0312 21:27:19.071743 
31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b41a87ae-50a2-4490-891e-99a17d655797-dns-svc\") pod \"dnsmasq-dns-56cf4b4989-2cwl5\" (UID: \"b41a87ae-50a2-4490-891e-99a17d655797\") " pod="openstack/dnsmasq-dns-56cf4b4989-2cwl5" Mar 12 21:27:19.072104 master-0 kubenswrapper[31456]: I0312 21:27:19.071805 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/df574917-39ab-4063-ab80-d42902865c20-config\") pod \"ironic-inspector-0\" (UID: \"df574917-39ab-4063-ab80-d42902865c20\") " pod="openstack/ironic-inspector-0" Mar 12 21:27:19.079233 master-0 kubenswrapper[31456]: I0312 21:27:19.077772 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b41a87ae-50a2-4490-891e-99a17d655797-config\") pod \"dnsmasq-dns-56cf4b4989-2cwl5\" (UID: \"b41a87ae-50a2-4490-891e-99a17d655797\") " pod="openstack/dnsmasq-dns-56cf4b4989-2cwl5" Mar 12 21:27:19.079366 master-0 kubenswrapper[31456]: I0312 21:27:19.079333 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b41a87ae-50a2-4490-891e-99a17d655797-dns-swift-storage-0\") pod \"dnsmasq-dns-56cf4b4989-2cwl5\" (UID: \"b41a87ae-50a2-4490-891e-99a17d655797\") " pod="openstack/dnsmasq-dns-56cf4b4989-2cwl5" Mar 12 21:27:19.082781 master-0 kubenswrapper[31456]: I0312 21:27:19.082756 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b41a87ae-50a2-4490-891e-99a17d655797-ovsdbserver-sb\") pod \"dnsmasq-dns-56cf4b4989-2cwl5\" (UID: \"b41a87ae-50a2-4490-891e-99a17d655797\") " pod="openstack/dnsmasq-dns-56cf4b4989-2cwl5" Mar 12 21:27:19.085729 master-0 kubenswrapper[31456]: I0312 21:27:19.084340 31456 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b41a87ae-50a2-4490-891e-99a17d655797-dns-svc\") pod \"dnsmasq-dns-56cf4b4989-2cwl5\" (UID: \"b41a87ae-50a2-4490-891e-99a17d655797\") " pod="openstack/dnsmasq-dns-56cf4b4989-2cwl5" Mar 12 21:27:19.085958 master-0 kubenswrapper[31456]: I0312 21:27:19.085921 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b41a87ae-50a2-4490-891e-99a17d655797-ovsdbserver-nb\") pod \"dnsmasq-dns-56cf4b4989-2cwl5\" (UID: \"b41a87ae-50a2-4490-891e-99a17d655797\") " pod="openstack/dnsmasq-dns-56cf4b4989-2cwl5" Mar 12 21:27:19.096390 master-0 kubenswrapper[31456]: I0312 21:27:19.096347 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppr99\" (UniqueName: \"kubernetes.io/projected/b41a87ae-50a2-4490-891e-99a17d655797-kube-api-access-ppr99\") pod \"dnsmasq-dns-56cf4b4989-2cwl5\" (UID: \"b41a87ae-50a2-4490-891e-99a17d655797\") " pod="openstack/dnsmasq-dns-56cf4b4989-2cwl5" Mar 12 21:27:19.179929 master-0 kubenswrapper[31456]: I0312 21:27:19.177244 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/df574917-39ab-4063-ab80-d42902865c20-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"df574917-39ab-4063-ab80-d42902865c20\") " pod="openstack/ironic-inspector-0" Mar 12 21:27:19.179929 master-0 kubenswrapper[31456]: I0312 21:27:19.177330 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/df574917-39ab-4063-ab80-d42902865c20-config\") pod \"ironic-inspector-0\" (UID: \"df574917-39ab-4063-ab80-d42902865c20\") " pod="openstack/ironic-inspector-0" Mar 12 21:27:19.179929 master-0 kubenswrapper[31456]: I0312 21:27:19.177424 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/df574917-39ab-4063-ab80-d42902865c20-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"df574917-39ab-4063-ab80-d42902865c20\") " pod="openstack/ironic-inspector-0" Mar 12 21:27:19.179929 master-0 kubenswrapper[31456]: I0312 21:27:19.177444 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/df574917-39ab-4063-ab80-d42902865c20-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"df574917-39ab-4063-ab80-d42902865c20\") " pod="openstack/ironic-inspector-0" Mar 12 21:27:19.179929 master-0 kubenswrapper[31456]: I0312 21:27:19.177461 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df574917-39ab-4063-ab80-d42902865c20-scripts\") pod \"ironic-inspector-0\" (UID: \"df574917-39ab-4063-ab80-d42902865c20\") " pod="openstack/ironic-inspector-0" Mar 12 21:27:19.179929 master-0 kubenswrapper[31456]: I0312 21:27:19.177481 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnd6k\" (UniqueName: \"kubernetes.io/projected/df574917-39ab-4063-ab80-d42902865c20-kube-api-access-wnd6k\") pod \"ironic-inspector-0\" (UID: \"df574917-39ab-4063-ab80-d42902865c20\") " pod="openstack/ironic-inspector-0" Mar 12 21:27:19.179929 master-0 kubenswrapper[31456]: I0312 21:27:19.177585 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df574917-39ab-4063-ab80-d42902865c20-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"df574917-39ab-4063-ab80-d42902865c20\") " pod="openstack/ironic-inspector-0" Mar 12 21:27:19.179929 master-0 kubenswrapper[31456]: I0312 21:27:19.177708 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: 
\"kubernetes.io/empty-dir/df574917-39ab-4063-ab80-d42902865c20-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"df574917-39ab-4063-ab80-d42902865c20\") " pod="openstack/ironic-inspector-0" Mar 12 21:27:19.179929 master-0 kubenswrapper[31456]: I0312 21:27:19.178914 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/df574917-39ab-4063-ab80-d42902865c20-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"df574917-39ab-4063-ab80-d42902865c20\") " pod="openstack/ironic-inspector-0" Mar 12 21:27:19.180923 master-0 kubenswrapper[31456]: I0312 21:27:19.180900 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df574917-39ab-4063-ab80-d42902865c20-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"df574917-39ab-4063-ab80-d42902865c20\") " pod="openstack/ironic-inspector-0" Mar 12 21:27:19.182034 master-0 kubenswrapper[31456]: I0312 21:27:19.182011 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/df574917-39ab-4063-ab80-d42902865c20-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"df574917-39ab-4063-ab80-d42902865c20\") " pod="openstack/ironic-inspector-0" Mar 12 21:27:19.182606 master-0 kubenswrapper[31456]: I0312 21:27:19.182579 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df574917-39ab-4063-ab80-d42902865c20-scripts\") pod \"ironic-inspector-0\" (UID: \"df574917-39ab-4063-ab80-d42902865c20\") " pod="openstack/ironic-inspector-0" Mar 12 21:27:19.203014 master-0 kubenswrapper[31456]: I0312 21:27:19.202954 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/df574917-39ab-4063-ab80-d42902865c20-config\") pod \"ironic-inspector-0\" 
(UID: \"df574917-39ab-4063-ab80-d42902865c20\") " pod="openstack/ironic-inspector-0" Mar 12 21:27:19.204881 master-0 kubenswrapper[31456]: I0312 21:27:19.204480 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnd6k\" (UniqueName: \"kubernetes.io/projected/df574917-39ab-4063-ab80-d42902865c20-kube-api-access-wnd6k\") pod \"ironic-inspector-0\" (UID: \"df574917-39ab-4063-ab80-d42902865c20\") " pod="openstack/ironic-inspector-0" Mar 12 21:27:19.215367 master-0 kubenswrapper[31456]: I0312 21:27:19.215073 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56cf4b4989-2cwl5" Mar 12 21:27:19.281928 master-0 kubenswrapper[31456]: I0312 21:27:19.281840 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Mar 12 21:27:21.162208 master-0 kubenswrapper[31456]: I0312 21:27:21.162135 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-30e4b-default-external-api-0"] Mar 12 21:27:21.162894 master-0 kubenswrapper[31456]: I0312 21:27:21.162395 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-30e4b-default-external-api-0" podUID="35a5b367-8419-4864-9317-7b78c50cad2d" containerName="glance-log" containerID="cri-o://ba8530f2f78010e06d6c86db3104bf8949d730b429ac2e43b76e663f1b5dddbc" gracePeriod=30 Mar 12 21:27:21.162894 master-0 kubenswrapper[31456]: I0312 21:27:21.162524 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-30e4b-default-external-api-0" podUID="35a5b367-8419-4864-9317-7b78c50cad2d" containerName="glance-httpd" containerID="cri-o://76119fa19412e6d332c800f93cccb67214a613c3a21e08876e4c96a60312f18b" gracePeriod=30 Mar 12 21:27:21.906183 master-0 kubenswrapper[31456]: I0312 21:27:21.906099 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Mar 12 21:27:22.648896 
master-0 kubenswrapper[31456]: I0312 21:27:22.648795 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-30e4b-default-internal-api-0"] Mar 12 21:27:22.649374 master-0 kubenswrapper[31456]: I0312 21:27:22.649076 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-30e4b-default-internal-api-0" podUID="a7a5e241-7146-489b-b32b-01218601b895" containerName="glance-log" containerID="cri-o://4d3b0e96c1344df5da8bdeecb9531de6467994887ed3979c2ec39258b249f08a" gracePeriod=30 Mar 12 21:27:22.649560 master-0 kubenswrapper[31456]: I0312 21:27:22.649526 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-30e4b-default-internal-api-0" podUID="a7a5e241-7146-489b-b32b-01218601b895" containerName="glance-httpd" containerID="cri-o://10e54504f9f158d2ff034d14f847a2344e2841dae80b2aedb91058874103c1ad" gracePeriod=30 Mar 12 21:27:23.792355 master-0 kubenswrapper[31456]: I0312 21:27:23.792286 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7b7fc99fd8-pc4wq" Mar 12 21:27:23.938277 master-0 kubenswrapper[31456]: I0312 21:27:23.938020 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0-config\") pod \"2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0\" (UID: \"2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0\") " Mar 12 21:27:23.938277 master-0 kubenswrapper[31456]: I0312 21:27:23.938216 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbm25\" (UniqueName: \"kubernetes.io/projected/2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0-kube-api-access-jbm25\") pod \"2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0\" (UID: \"2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0\") " Mar 12 21:27:23.938575 master-0 kubenswrapper[31456]: I0312 21:27:23.938339 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0-ovndb-tls-certs\") pod \"2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0\" (UID: \"2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0\") " Mar 12 21:27:23.938575 master-0 kubenswrapper[31456]: I0312 21:27:23.938380 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0-httpd-config\") pod \"2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0\" (UID: \"2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0\") " Mar 12 21:27:23.939050 master-0 kubenswrapper[31456]: I0312 21:27:23.938863 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0-combined-ca-bundle\") pod \"2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0\" (UID: \"2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0\") " Mar 12 21:27:23.945551 master-0 kubenswrapper[31456]: I0312 21:27:23.945482 31456 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0" (UID: "2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:27:23.951048 master-0 kubenswrapper[31456]: I0312 21:27:23.951001 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0-kube-api-access-jbm25" (OuterVolumeSpecName: "kube-api-access-jbm25") pod "2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0" (UID: "2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0"). InnerVolumeSpecName "kube-api-access-jbm25". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:27:24.009405 master-0 kubenswrapper[31456]: I0312 21:27:24.009336 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b7fc99fd8-pc4wq" event={"ID":"2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0","Type":"ContainerDied","Data":"36ab01fed5375e4747f7129c6733f87baaa9d5c953918b12de5a57a849675155"} Mar 12 21:27:24.009405 master-0 kubenswrapper[31456]: I0312 21:27:24.009396 31456 scope.go:117] "RemoveContainer" containerID="f4238bf455a2a08c5c82da0b82cba6320522be3626dfeecaf288204b852636a7" Mar 12 21:27:24.009640 master-0 kubenswrapper[31456]: I0312 21:27:24.009527 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b7fc99fd8-pc4wq" Mar 12 21:27:24.011726 master-0 kubenswrapper[31456]: I0312 21:27:24.011540 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0-config" (OuterVolumeSpecName: "config") pod "2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0" (UID: "2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:27:24.015448 master-0 kubenswrapper[31456]: I0312 21:27:24.015403 31456 generic.go:334] "Generic (PLEG): container finished" podID="35a5b367-8419-4864-9317-7b78c50cad2d" containerID="ba8530f2f78010e06d6c86db3104bf8949d730b429ac2e43b76e663f1b5dddbc" exitCode=143 Mar 12 21:27:24.015518 master-0 kubenswrapper[31456]: I0312 21:27:24.015473 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-30e4b-default-external-api-0" event={"ID":"35a5b367-8419-4864-9317-7b78c50cad2d","Type":"ContainerDied","Data":"ba8530f2f78010e06d6c86db3104bf8949d730b429ac2e43b76e663f1b5dddbc"} Mar 12 21:27:24.020824 master-0 kubenswrapper[31456]: I0312 21:27:24.020755 31456 generic.go:334] "Generic (PLEG): container finished" podID="a7a5e241-7146-489b-b32b-01218601b895" containerID="4d3b0e96c1344df5da8bdeecb9531de6467994887ed3979c2ec39258b249f08a" exitCode=143 Mar 12 21:27:24.020967 master-0 kubenswrapper[31456]: I0312 21:27:24.020874 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-30e4b-default-internal-api-0" event={"ID":"a7a5e241-7146-489b-b32b-01218601b895","Type":"ContainerDied","Data":"4d3b0e96c1344df5da8bdeecb9531de6467994887ed3979c2ec39258b249f08a"} Mar 12 21:27:24.030865 master-0 kubenswrapper[31456]: I0312 21:27:24.028716 31456 generic.go:334] "Generic (PLEG): container finished" podID="205534d7-c857-4999-8352-af039951ce48" containerID="1f48d28ae3c63c4e8d566362287a7917c8e5cd496a34ef1f771eb022fd9c7ae7" exitCode=0 Mar 12 21:27:24.030865 master-0 kubenswrapper[31456]: I0312 21:27:24.028757 31456 generic.go:334] "Generic (PLEG): container finished" podID="205534d7-c857-4999-8352-af039951ce48" containerID="3f751c249ba0054b38eacd67b6f5916bd4354af3bd74b44420200444714551c9" exitCode=143 Mar 12 21:27:24.030865 master-0 kubenswrapper[31456]: I0312 21:27:24.028781 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c76b45676-rfhd9" 
event={"ID":"205534d7-c857-4999-8352-af039951ce48","Type":"ContainerDied","Data":"1f48d28ae3c63c4e8d566362287a7917c8e5cd496a34ef1f771eb022fd9c7ae7"} Mar 12 21:27:24.030865 master-0 kubenswrapper[31456]: I0312 21:27:24.028822 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c76b45676-rfhd9" event={"ID":"205534d7-c857-4999-8352-af039951ce48","Type":"ContainerDied","Data":"3f751c249ba0054b38eacd67b6f5916bd4354af3bd74b44420200444714551c9"} Mar 12 21:27:24.041704 master-0 kubenswrapper[31456]: I0312 21:27:24.041496 31456 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0-httpd-config\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:24.041704 master-0 kubenswrapper[31456]: I0312 21:27:24.041536 31456 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0-config\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:24.041704 master-0 kubenswrapper[31456]: I0312 21:27:24.041547 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbm25\" (UniqueName: \"kubernetes.io/projected/2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0-kube-api-access-jbm25\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:24.055939 master-0 kubenswrapper[31456]: I0312 21:27:24.055858 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0" (UID: "2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:27:24.075742 master-0 kubenswrapper[31456]: I0312 21:27:24.075641 31456 scope.go:117] "RemoveContainer" containerID="f4a0172384f033272e2a0a23a455d0f73b3a58630e7b76c5147f00a0b1cb6fe8" Mar 12 21:27:24.144374 master-0 kubenswrapper[31456]: I0312 21:27:24.143231 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:24.209629 master-0 kubenswrapper[31456]: I0312 21:27:24.209573 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0" (UID: "2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:27:24.246750 master-0 kubenswrapper[31456]: I0312 21:27:24.246119 31456 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0-ovndb-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:24.321775 master-0 kubenswrapper[31456]: I0312 21:27:24.321531 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c76b45676-rfhd9" Mar 12 21:27:24.411618 master-0 kubenswrapper[31456]: I0312 21:27:24.409172 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7b7fc99fd8-pc4wq"] Mar 12 21:27:24.423094 master-0 kubenswrapper[31456]: I0312 21:27:24.422961 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7b7fc99fd8-pc4wq"] Mar 12 21:27:24.451893 master-0 kubenswrapper[31456]: I0312 21:27:24.451445 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/205534d7-c857-4999-8352-af039951ce48-logs\") pod \"205534d7-c857-4999-8352-af039951ce48\" (UID: \"205534d7-c857-4999-8352-af039951ce48\") " Mar 12 21:27:24.451893 master-0 kubenswrapper[31456]: I0312 21:27:24.451694 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/205534d7-c857-4999-8352-af039951ce48-public-tls-certs\") pod \"205534d7-c857-4999-8352-af039951ce48\" (UID: \"205534d7-c857-4999-8352-af039951ce48\") " Mar 12 21:27:24.451893 master-0 kubenswrapper[31456]: I0312 21:27:24.451732 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/205534d7-c857-4999-8352-af039951ce48-scripts\") pod \"205534d7-c857-4999-8352-af039951ce48\" (UID: \"205534d7-c857-4999-8352-af039951ce48\") " Mar 12 21:27:24.452429 master-0 kubenswrapper[31456]: I0312 21:27:24.452365 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/205534d7-c857-4999-8352-af039951ce48-config-data\") pod \"205534d7-c857-4999-8352-af039951ce48\" (UID: \"205534d7-c857-4999-8352-af039951ce48\") " Mar 12 21:27:24.452602 master-0 kubenswrapper[31456]: I0312 21:27:24.452517 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/205534d7-c857-4999-8352-af039951ce48-combined-ca-bundle\") pod \"205534d7-c857-4999-8352-af039951ce48\" (UID: \"205534d7-c857-4999-8352-af039951ce48\") " Mar 12 21:27:24.453431 master-0 kubenswrapper[31456]: I0312 21:27:24.453326 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/205534d7-c857-4999-8352-af039951ce48-logs" (OuterVolumeSpecName: "logs") pod "205534d7-c857-4999-8352-af039951ce48" (UID: "205534d7-c857-4999-8352-af039951ce48"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 21:27:24.454747 master-0 kubenswrapper[31456]: I0312 21:27:24.453397 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/205534d7-c857-4999-8352-af039951ce48-internal-tls-certs\") pod \"205534d7-c857-4999-8352-af039951ce48\" (UID: \"205534d7-c857-4999-8352-af039951ce48\") " Mar 12 21:27:24.455052 master-0 kubenswrapper[31456]: I0312 21:27:24.454991 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6fxh\" (UniqueName: \"kubernetes.io/projected/205534d7-c857-4999-8352-af039951ce48-kube-api-access-d6fxh\") pod \"205534d7-c857-4999-8352-af039951ce48\" (UID: \"205534d7-c857-4999-8352-af039951ce48\") " Mar 12 21:27:24.456473 master-0 kubenswrapper[31456]: I0312 21:27:24.456425 31456 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/205534d7-c857-4999-8352-af039951ce48-logs\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:24.460183 master-0 kubenswrapper[31456]: I0312 21:27:24.459771 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/205534d7-c857-4999-8352-af039951ce48-kube-api-access-d6fxh" (OuterVolumeSpecName: "kube-api-access-d6fxh") pod "205534d7-c857-4999-8352-af039951ce48" 
(UID: "205534d7-c857-4999-8352-af039951ce48"). InnerVolumeSpecName "kube-api-access-d6fxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:27:24.466108 master-0 kubenswrapper[31456]: I0312 21:27:24.465892 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/205534d7-c857-4999-8352-af039951ce48-scripts" (OuterVolumeSpecName: "scripts") pod "205534d7-c857-4999-8352-af039951ce48" (UID: "205534d7-c857-4999-8352-af039951ce48"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:27:24.467230 master-0 kubenswrapper[31456]: I0312 21:27:24.467172 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a75d-account-create-update-nch4k"] Mar 12 21:27:24.508424 master-0 kubenswrapper[31456]: W0312 21:27:24.508363 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cd45bf4_fd4f_4229_a6b9_d0433a367ee8.slice/crio-d6d114c57c698150a888fcc48e2dd5eea822a973b9fb81b601e35320e220e51c WatchSource:0}: Error finding container d6d114c57c698150a888fcc48e2dd5eea822a973b9fb81b601e35320e220e51c: Status 404 returned error can't find the container with id d6d114c57c698150a888fcc48e2dd5eea822a973b9fb81b601e35320e220e51c Mar 12 21:27:24.549481 master-0 kubenswrapper[31456]: I0312 21:27:24.549422 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/205534d7-c857-4999-8352-af039951ce48-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "205534d7-c857-4999-8352-af039951ce48" (UID: "205534d7-c857-4999-8352-af039951ce48"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:27:24.559221 master-0 kubenswrapper[31456]: I0312 21:27:24.559067 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6fxh\" (UniqueName: \"kubernetes.io/projected/205534d7-c857-4999-8352-af039951ce48-kube-api-access-d6fxh\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:24.559221 master-0 kubenswrapper[31456]: I0312 21:27:24.559124 31456 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/205534d7-c857-4999-8352-af039951ce48-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:24.559221 master-0 kubenswrapper[31456]: I0312 21:27:24.559139 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/205534d7-c857-4999-8352-af039951ce48-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:24.564260 master-0 kubenswrapper[31456]: I0312 21:27:24.563919 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/205534d7-c857-4999-8352-af039951ce48-config-data" (OuterVolumeSpecName: "config-data") pod "205534d7-c857-4999-8352-af039951ce48" (UID: "205534d7-c857-4999-8352-af039951ce48"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:27:24.641484 master-0 kubenswrapper[31456]: I0312 21:27:24.641352 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/205534d7-c857-4999-8352-af039951ce48-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "205534d7-c857-4999-8352-af039951ce48" (UID: "205534d7-c857-4999-8352-af039951ce48"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:27:24.656858 master-0 kubenswrapper[31456]: I0312 21:27:24.656787 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/205534d7-c857-4999-8352-af039951ce48-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "205534d7-c857-4999-8352-af039951ce48" (UID: "205534d7-c857-4999-8352-af039951ce48"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:27:24.661987 master-0 kubenswrapper[31456]: I0312 21:27:24.661925 31456 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/205534d7-c857-4999-8352-af039951ce48-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:24.661987 master-0 kubenswrapper[31456]: I0312 21:27:24.661984 31456 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/205534d7-c857-4999-8352-af039951ce48-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:24.662140 master-0 kubenswrapper[31456]: I0312 21:27:24.661996 31456 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/205534d7-c857-4999-8352-af039951ce48-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:24.958744 master-0 kubenswrapper[31456]: I0312 21:27:24.958691 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-rhn2f"] Mar 12 21:27:24.978371 master-0 kubenswrapper[31456]: I0312 21:27:24.978313 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-zgqpq"] Mar 12 21:27:24.990755 master-0 kubenswrapper[31456]: I0312 21:27:24.990524 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-wc97w"] Mar 12 21:27:25.043478 master-0 kubenswrapper[31456]: I0312 21:27:25.042559 31456 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-68659c9b47-m44wq" event={"ID":"33f0319b-6d84-4282-bbb5-9636e1b62647","Type":"ContainerStarted","Data":"11f2142b82ce6ec9c663db11809b4a2cfb744b71212b07fde23cf06d7cc33203"} Mar 12 21:27:25.043478 master-0 kubenswrapper[31456]: I0312 21:27:25.042846 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-68659c9b47-m44wq" Mar 12 21:27:25.046271 master-0 kubenswrapper[31456]: I0312 21:27:25.046014 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a75d-account-create-update-nch4k" event={"ID":"7cd45bf4-fd4f-4229-a6b9-d0433a367ee8","Type":"ContainerStarted","Data":"8b1f679137a96cf60d0b2a750272ebf69d19692428d2055b689c377d1d2cfa68"} Mar 12 21:27:25.046271 master-0 kubenswrapper[31456]: I0312 21:27:25.046062 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a75d-account-create-update-nch4k" event={"ID":"7cd45bf4-fd4f-4229-a6b9-d0433a367ee8","Type":"ContainerStarted","Data":"d6d114c57c698150a888fcc48e2dd5eea822a973b9fb81b601e35320e220e51c"} Mar 12 21:27:25.059928 master-0 kubenswrapper[31456]: I0312 21:27:25.059672 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"93110548-5710-4149-bd72-8e42693c948e","Type":"ContainerStarted","Data":"50c9f3835a6f0727444b947fd7f2674bfa02781d2dad8b09e8ecf8c77c1e0daf"} Mar 12 21:27:25.062080 master-0 kubenswrapper[31456]: I0312 21:27:25.062035 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wc97w" event={"ID":"1b7fef8e-4472-45c9-9824-4a897ff1b1e3","Type":"ContainerStarted","Data":"fd605a896442dba76b0f526abe6af929f7f2bd9e6dbd5762968629d25a41036e"} Mar 12 21:27:25.073692 master-0 kubenswrapper[31456]: I0312 21:27:25.068264 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c76b45676-rfhd9" 
event={"ID":"205534d7-c857-4999-8352-af039951ce48","Type":"ContainerDied","Data":"94624f48e9d67803e576bcdc7e65a35641d2c577953fe412df0a6befc3c33816"} Mar 12 21:27:25.073692 master-0 kubenswrapper[31456]: I0312 21:27:25.068322 31456 scope.go:117] "RemoveContainer" containerID="1f48d28ae3c63c4e8d566362287a7917c8e5cd496a34ef1f771eb022fd9c7ae7" Mar 12 21:27:25.073692 master-0 kubenswrapper[31456]: I0312 21:27:25.068475 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c76b45676-rfhd9" Mar 12 21:27:25.077616 master-0 kubenswrapper[31456]: I0312 21:27:25.077500 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rhn2f" event={"ID":"c91b737e-1dc0-4977-8cc3-f36cde0b3031","Type":"ContainerStarted","Data":"a8b71217dd3a34d70eaee1de325f6d4334ecc549a9bc83ef3550c82a4ede8cd9"} Mar 12 21:27:25.091284 master-0 kubenswrapper[31456]: I0312 21:27:25.083961 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"80ad53ea-17b7-4691-a8dc-865ebf143679","Type":"ContainerStarted","Data":"a12dd702287a34af83cf6a56d8cee4649f0ca4a8ca4a3d9878ecb1bf63b91b31"} Mar 12 21:27:25.107511 master-0 kubenswrapper[31456]: I0312 21:27:25.106966 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-a75d-account-create-update-nch4k" podStartSLOduration=9.106947368 podStartE2EDuration="9.106947368s" podCreationTimestamp="2026-03-12 21:27:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:27:25.087624651 +0000 UTC m=+1106.162229979" watchObservedRunningTime="2026-03-12 21:27:25.106947368 +0000 UTC m=+1106.181552696" Mar 12 21:27:25.112453 master-0 kubenswrapper[31456]: I0312 21:27:25.112392 31456 generic.go:334] "Generic (PLEG): container finished" podID="35a5b367-8419-4864-9317-7b78c50cad2d" 
containerID="76119fa19412e6d332c800f93cccb67214a613c3a21e08876e4c96a60312f18b" exitCode=0 Mar 12 21:27:25.112558 master-0 kubenswrapper[31456]: I0312 21:27:25.112458 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-30e4b-default-external-api-0" event={"ID":"35a5b367-8419-4864-9317-7b78c50cad2d","Type":"ContainerDied","Data":"76119fa19412e6d332c800f93cccb67214a613c3a21e08876e4c96a60312f18b"} Mar 12 21:27:25.117404 master-0 kubenswrapper[31456]: I0312 21:27:25.117330 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-zgqpq" event={"ID":"56b88bd7-c930-40cf-ab94-806f32d82a96","Type":"ContainerStarted","Data":"e22bebe7af02080316d7e08fcbd1a25fbe699f5c48713bb76a253c39dd883ce3"} Mar 12 21:27:25.132441 master-0 kubenswrapper[31456]: I0312 21:27:25.123240 31456 scope.go:117] "RemoveContainer" containerID="3f751c249ba0054b38eacd67b6f5916bd4354af3bd74b44420200444714551c9" Mar 12 21:27:25.148349 master-0 kubenswrapper[31456]: I0312 21:27:25.147926 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=4.833876374 podStartE2EDuration="27.14790867s" podCreationTimestamp="2026-03-12 21:26:58 +0000 UTC" firstStartedPulling="2026-03-12 21:27:01.56201793 +0000 UTC m=+1082.636623258" lastFinishedPulling="2026-03-12 21:27:23.876050226 +0000 UTC m=+1104.950655554" observedRunningTime="2026-03-12 21:27:25.14585513 +0000 UTC m=+1106.220460458" watchObservedRunningTime="2026-03-12 21:27:25.14790867 +0000 UTC m=+1106.222513998" Mar 12 21:27:25.270760 master-0 kubenswrapper[31456]: I0312 21:27:25.269326 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0" path="/var/lib/kubelet/pods/2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0/volumes" Mar 12 21:27:25.271308 master-0 kubenswrapper[31456]: I0312 21:27:25.271224 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/placement-c76b45676-rfhd9"] Mar 12 21:27:25.271308 master-0 kubenswrapper[31456]: I0312 21:27:25.271259 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-c76b45676-rfhd9"] Mar 12 21:27:25.402921 master-0 kubenswrapper[31456]: I0312 21:27:25.402878 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:27:25.454456 master-0 kubenswrapper[31456]: I0312 21:27:25.454283 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35a5b367-8419-4864-9317-7b78c50cad2d-config-data\") pod \"35a5b367-8419-4864-9317-7b78c50cad2d\" (UID: \"35a5b367-8419-4864-9317-7b78c50cad2d\") " Mar 12 21:27:25.454456 master-0 kubenswrapper[31456]: I0312 21:27:25.454362 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35a5b367-8419-4864-9317-7b78c50cad2d-logs\") pod \"35a5b367-8419-4864-9317-7b78c50cad2d\" (UID: \"35a5b367-8419-4864-9317-7b78c50cad2d\") " Mar 12 21:27:25.454685 master-0 kubenswrapper[31456]: I0312 21:27:25.454506 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35a5b367-8419-4864-9317-7b78c50cad2d-httpd-run\") pod \"35a5b367-8419-4864-9317-7b78c50cad2d\" (UID: \"35a5b367-8419-4864-9317-7b78c50cad2d\") " Mar 12 21:27:25.454685 master-0 kubenswrapper[31456]: I0312 21:27:25.454537 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35a5b367-8419-4864-9317-7b78c50cad2d-scripts\") pod \"35a5b367-8419-4864-9317-7b78c50cad2d\" (UID: \"35a5b367-8419-4864-9317-7b78c50cad2d\") " Mar 12 21:27:25.454685 master-0 kubenswrapper[31456]: I0312 21:27:25.454611 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35a5b367-8419-4864-9317-7b78c50cad2d-combined-ca-bundle\") pod \"35a5b367-8419-4864-9317-7b78c50cad2d\" (UID: \"35a5b367-8419-4864-9317-7b78c50cad2d\") " Mar 12 21:27:25.454685 master-0 kubenswrapper[31456]: I0312 21:27:25.454639 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35a5b367-8419-4864-9317-7b78c50cad2d-public-tls-certs\") pod \"35a5b367-8419-4864-9317-7b78c50cad2d\" (UID: \"35a5b367-8419-4864-9317-7b78c50cad2d\") " Mar 12 21:27:25.454884 master-0 kubenswrapper[31456]: I0312 21:27:25.454760 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82rzv\" (UniqueName: \"kubernetes.io/projected/35a5b367-8419-4864-9317-7b78c50cad2d-kube-api-access-82rzv\") pod \"35a5b367-8419-4864-9317-7b78c50cad2d\" (UID: \"35a5b367-8419-4864-9317-7b78c50cad2d\") " Mar 12 21:27:25.455733 master-0 kubenswrapper[31456]: I0312 21:27:25.455679 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35a5b367-8419-4864-9317-7b78c50cad2d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "35a5b367-8419-4864-9317-7b78c50cad2d" (UID: "35a5b367-8419-4864-9317-7b78c50cad2d"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 21:27:25.458291 master-0 kubenswrapper[31456]: I0312 21:27:25.458068 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c4d34f1a-cd35-4884-8afc-4ae2cb2c6555\") pod \"35a5b367-8419-4864-9317-7b78c50cad2d\" (UID: \"35a5b367-8419-4864-9317-7b78c50cad2d\") " Mar 12 21:27:25.458654 master-0 kubenswrapper[31456]: I0312 21:27:25.458628 31456 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35a5b367-8419-4864-9317-7b78c50cad2d-httpd-run\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:25.459859 master-0 kubenswrapper[31456]: I0312 21:27:25.459771 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35a5b367-8419-4864-9317-7b78c50cad2d-logs" (OuterVolumeSpecName: "logs") pod "35a5b367-8419-4864-9317-7b78c50cad2d" (UID: "35a5b367-8419-4864-9317-7b78c50cad2d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 21:27:25.479166 master-0 kubenswrapper[31456]: I0312 21:27:25.479096 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35a5b367-8419-4864-9317-7b78c50cad2d-scripts" (OuterVolumeSpecName: "scripts") pod "35a5b367-8419-4864-9317-7b78c50cad2d" (UID: "35a5b367-8419-4864-9317-7b78c50cad2d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:27:25.549084 master-0 kubenswrapper[31456]: W0312 21:27:25.544368 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb41a87ae_50a2_4490_891e_99a17d655797.slice/crio-13500e59ddb6b483a91cce9c62893494821032c3cb2effcc83e0ed93798d18a1 WatchSource:0}: Error finding container 13500e59ddb6b483a91cce9c62893494821032c3cb2effcc83e0ed93798d18a1: Status 404 returned error can't find the container with id 13500e59ddb6b483a91cce9c62893494821032c3cb2effcc83e0ed93798d18a1 Mar 12 21:27:25.549084 master-0 kubenswrapper[31456]: I0312 21:27:25.545858 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35a5b367-8419-4864-9317-7b78c50cad2d-kube-api-access-82rzv" (OuterVolumeSpecName: "kube-api-access-82rzv") pod "35a5b367-8419-4864-9317-7b78c50cad2d" (UID: "35a5b367-8419-4864-9317-7b78c50cad2d"). InnerVolumeSpecName "kube-api-access-82rzv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:27:25.549084 master-0 kubenswrapper[31456]: I0312 21:27:25.549067 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-1f7d-account-create-update-ckqfv"] Mar 12 21:27:25.590622 master-0 kubenswrapper[31456]: I0312 21:27:25.589882 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56cf4b4989-2cwl5"] Mar 12 21:27:25.599386 master-0 kubenswrapper[31456]: I0312 21:27:25.599330 31456 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35a5b367-8419-4864-9317-7b78c50cad2d-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:25.599386 master-0 kubenswrapper[31456]: I0312 21:27:25.599376 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82rzv\" (UniqueName: \"kubernetes.io/projected/35a5b367-8419-4864-9317-7b78c50cad2d-kube-api-access-82rzv\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:25.599386 master-0 kubenswrapper[31456]: I0312 21:27:25.599388 31456 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35a5b367-8419-4864-9317-7b78c50cad2d-logs\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:25.630291 master-0 kubenswrapper[31456]: I0312 21:27:25.630246 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5fda-account-create-update-jj52w"] Mar 12 21:27:25.656792 master-0 kubenswrapper[31456]: I0312 21:27:25.655362 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35a5b367-8419-4864-9317-7b78c50cad2d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "35a5b367-8419-4864-9317-7b78c50cad2d" (UID: "35a5b367-8419-4864-9317-7b78c50cad2d"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:27:25.671169 master-0 kubenswrapper[31456]: I0312 21:27:25.671125 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Mar 12 21:27:25.692700 master-0 kubenswrapper[31456]: I0312 21:27:25.692648 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35a5b367-8419-4864-9317-7b78c50cad2d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35a5b367-8419-4864-9317-7b78c50cad2d" (UID: "35a5b367-8419-4864-9317-7b78c50cad2d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:27:25.701976 master-0 kubenswrapper[31456]: I0312 21:27:25.701585 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35a5b367-8419-4864-9317-7b78c50cad2d-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:25.701976 master-0 kubenswrapper[31456]: I0312 21:27:25.701623 31456 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35a5b367-8419-4864-9317-7b78c50cad2d-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:25.720595 master-0 kubenswrapper[31456]: I0312 21:27:25.720475 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35a5b367-8419-4864-9317-7b78c50cad2d-config-data" (OuterVolumeSpecName: "config-data") pod "35a5b367-8419-4864-9317-7b78c50cad2d" (UID: "35a5b367-8419-4864-9317-7b78c50cad2d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:27:25.804227 master-0 kubenswrapper[31456]: I0312 21:27:25.804177 31456 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35a5b367-8419-4864-9317-7b78c50cad2d-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:26.151979 master-0 kubenswrapper[31456]: I0312 21:27:26.151910 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"df574917-39ab-4063-ab80-d42902865c20","Type":"ContainerStarted","Data":"27e92e07ee7fdba22f3f4f1f726b6d585c26a429361db4d776557c33413d0a6b"} Mar 12 21:27:26.159724 master-0 kubenswrapper[31456]: I0312 21:27:26.159675 31456 generic.go:334] "Generic (PLEG): container finished" podID="7cd45bf4-fd4f-4229-a6b9-d0433a367ee8" containerID="8b1f679137a96cf60d0b2a750272ebf69d19692428d2055b689c377d1d2cfa68" exitCode=0 Mar 12 21:27:26.159845 master-0 kubenswrapper[31456]: I0312 21:27:26.159749 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a75d-account-create-update-nch4k" event={"ID":"7cd45bf4-fd4f-4229-a6b9-d0433a367ee8","Type":"ContainerDied","Data":"8b1f679137a96cf60d0b2a750272ebf69d19692428d2055b689c377d1d2cfa68"} Mar 12 21:27:26.164408 master-0 kubenswrapper[31456]: I0312 21:27:26.164352 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wc97w" event={"ID":"1b7fef8e-4472-45c9-9824-4a897ff1b1e3","Type":"ContainerStarted","Data":"d885db32ad07e56acb0300a0a07debb39d1a51f333ac6eb255e87b99455cda09"} Mar 12 21:27:26.169721 master-0 kubenswrapper[31456]: I0312 21:27:26.169364 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-zgqpq" event={"ID":"56b88bd7-c930-40cf-ab94-806f32d82a96","Type":"ContainerStarted","Data":"5e48cb2baa539a556bfebf2629096e7ff9a705ef46c5b822ac6883b02a8b113b"} Mar 12 21:27:26.171775 master-0 kubenswrapper[31456]: I0312 21:27:26.171548 31456 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rhn2f" event={"ID":"c91b737e-1dc0-4977-8cc3-f36cde0b3031","Type":"ContainerStarted","Data":"22f864337ca9a32383fe2a970ed98c5e27b4b369102cc1c4802a58ec89716303"} Mar 12 21:27:26.174980 master-0 kubenswrapper[31456]: I0312 21:27:26.174922 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5fda-account-create-update-jj52w" event={"ID":"31856960-9d64-482a-b18d-3cb7ebc781d7","Type":"ContainerStarted","Data":"f2ddf16eb1feaefd6527103be4c2a8201c463b443bef8451a2ea9c6ba4c0815a"} Mar 12 21:27:26.226593 master-0 kubenswrapper[31456]: I0312 21:27:26.226491 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1f7d-account-create-update-ckqfv" event={"ID":"c7a2a5e6-44b0-4a11-bdd7-a5153a809b8b","Type":"ContainerStarted","Data":"4cf4f93edc7af97b8f2cb5f7e9a8505710116804fb78a872e83c9f8235d13ce6"} Mar 12 21:27:26.237382 master-0 kubenswrapper[31456]: I0312 21:27:26.237278 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-30e4b-default-external-api-0" event={"ID":"35a5b367-8419-4864-9317-7b78c50cad2d","Type":"ContainerDied","Data":"77a709320345d7e7d74705720966cf45a4deb79fbb79f5916f2f9e376025b471"} Mar 12 21:27:26.237382 master-0 kubenswrapper[31456]: I0312 21:27:26.237338 31456 scope.go:117] "RemoveContainer" containerID="76119fa19412e6d332c800f93cccb67214a613c3a21e08876e4c96a60312f18b" Mar 12 21:27:26.237608 master-0 kubenswrapper[31456]: I0312 21:27:26.237573 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:27:26.243074 master-0 kubenswrapper[31456]: I0312 21:27:26.243022 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56cf4b4989-2cwl5" event={"ID":"b41a87ae-50a2-4490-891e-99a17d655797","Type":"ContainerStarted","Data":"13500e59ddb6b483a91cce9c62893494821032c3cb2effcc83e0ed93798d18a1"} Mar 12 21:27:26.247896 master-0 kubenswrapper[31456]: I0312 21:27:26.247548 31456 generic.go:334] "Generic (PLEG): container finished" podID="a7a5e241-7146-489b-b32b-01218601b895" containerID="10e54504f9f158d2ff034d14f847a2344e2841dae80b2aedb91058874103c1ad" exitCode=0 Mar 12 21:27:26.247896 master-0 kubenswrapper[31456]: I0312 21:27:26.247776 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-30e4b-default-internal-api-0" event={"ID":"a7a5e241-7146-489b-b32b-01218601b895","Type":"ContainerDied","Data":"10e54504f9f158d2ff034d14f847a2344e2841dae80b2aedb91058874103c1ad"} Mar 12 21:27:26.251638 master-0 kubenswrapper[31456]: I0312 21:27:26.251553 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-wc97w" podStartSLOduration=11.25152564 podStartE2EDuration="11.25152564s" podCreationTimestamp="2026-03-12 21:27:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:27:26.234261802 +0000 UTC m=+1107.308867130" watchObservedRunningTime="2026-03-12 21:27:26.25152564 +0000 UTC m=+1107.326130968" Mar 12 21:27:26.303217 master-0 kubenswrapper[31456]: I0312 21:27:26.302921 31456 scope.go:117] "RemoveContainer" containerID="ba8530f2f78010e06d6c86db3104bf8949d730b429ac2e43b76e663f1b5dddbc" Mar 12 21:27:26.307164 master-0 kubenswrapper[31456]: I0312 21:27:26.307109 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-zgqpq" 
podStartSLOduration=10.307095735 podStartE2EDuration="10.307095735s" podCreationTimestamp="2026-03-12 21:27:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:27:26.247674107 +0000 UTC m=+1107.322279435" watchObservedRunningTime="2026-03-12 21:27:26.307095735 +0000 UTC m=+1107.381701063" Mar 12 21:27:26.309929 master-0 kubenswrapper[31456]: I0312 21:27:26.308593 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-5fda-account-create-update-jj52w" podStartSLOduration=10.308585642 podStartE2EDuration="10.308585642s" podCreationTimestamp="2026-03-12 21:27:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:27:26.264586436 +0000 UTC m=+1107.339191764" watchObservedRunningTime="2026-03-12 21:27:26.308585642 +0000 UTC m=+1107.383190970" Mar 12 21:27:26.343855 master-0 kubenswrapper[31456]: I0312 21:27:26.342405 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-rhn2f" podStartSLOduration=11.34238852 podStartE2EDuration="11.34238852s" podCreationTimestamp="2026-03-12 21:27:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:27:26.28001358 +0000 UTC m=+1107.354618898" watchObservedRunningTime="2026-03-12 21:27:26.34238852 +0000 UTC m=+1107.416993848" Mar 12 21:27:26.359209 master-0 kubenswrapper[31456]: I0312 21:27:26.359135 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-1f7d-account-create-update-ckqfv" podStartSLOduration=10.359117295 podStartE2EDuration="10.359117295s" podCreationTimestamp="2026-03-12 21:27:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-03-12 21:27:26.304194184 +0000 UTC m=+1107.378799512" watchObservedRunningTime="2026-03-12 21:27:26.359117295 +0000 UTC m=+1107.433722613" Mar 12 21:27:26.422313 master-0 kubenswrapper[31456]: I0312 21:27:26.422278 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:27:26.530202 master-0 kubenswrapper[31456]: I0312 21:27:26.529959 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7a5e241-7146-489b-b32b-01218601b895-config-data\") pod \"a7a5e241-7146-489b-b32b-01218601b895\" (UID: \"a7a5e241-7146-489b-b32b-01218601b895\") " Mar 12 21:27:26.530202 master-0 kubenswrapper[31456]: I0312 21:27:26.530043 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwj5m\" (UniqueName: \"kubernetes.io/projected/a7a5e241-7146-489b-b32b-01218601b895-kube-api-access-fwj5m\") pod \"a7a5e241-7146-489b-b32b-01218601b895\" (UID: \"a7a5e241-7146-489b-b32b-01218601b895\") " Mar 12 21:27:26.530202 master-0 kubenswrapper[31456]: I0312 21:27:26.530164 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7a5e241-7146-489b-b32b-01218601b895-logs\") pod \"a7a5e241-7146-489b-b32b-01218601b895\" (UID: \"a7a5e241-7146-489b-b32b-01218601b895\") " Mar 12 21:27:26.530546 master-0 kubenswrapper[31456]: I0312 21:27:26.530331 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7a5e241-7146-489b-b32b-01218601b895-httpd-run\") pod \"a7a5e241-7146-489b-b32b-01218601b895\" (UID: \"a7a5e241-7146-489b-b32b-01218601b895\") " Mar 12 21:27:26.530546 master-0 kubenswrapper[31456]: I0312 21:27:26.530386 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7a5e241-7146-489b-b32b-01218601b895-combined-ca-bundle\") pod \"a7a5e241-7146-489b-b32b-01218601b895\" (UID: \"a7a5e241-7146-489b-b32b-01218601b895\") " Mar 12 21:27:26.533367 master-0 kubenswrapper[31456]: I0312 21:27:26.533139 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4b842188-a8b2-4def-ad0e-7cbb4053b9e9\") pod \"a7a5e241-7146-489b-b32b-01218601b895\" (UID: \"a7a5e241-7146-489b-b32b-01218601b895\") " Mar 12 21:27:26.533367 master-0 kubenswrapper[31456]: I0312 21:27:26.533299 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7a5e241-7146-489b-b32b-01218601b895-scripts\") pod \"a7a5e241-7146-489b-b32b-01218601b895\" (UID: \"a7a5e241-7146-489b-b32b-01218601b895\") " Mar 12 21:27:26.533367 master-0 kubenswrapper[31456]: I0312 21:27:26.533322 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7a5e241-7146-489b-b32b-01218601b895-internal-tls-certs\") pod \"a7a5e241-7146-489b-b32b-01218601b895\" (UID: \"a7a5e241-7146-489b-b32b-01218601b895\") " Mar 12 21:27:26.534585 master-0 kubenswrapper[31456]: I0312 21:27:26.534349 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a5e241-7146-489b-b32b-01218601b895-logs" (OuterVolumeSpecName: "logs") pod "a7a5e241-7146-489b-b32b-01218601b895" (UID: "a7a5e241-7146-489b-b32b-01218601b895"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 21:27:26.534585 master-0 kubenswrapper[31456]: I0312 21:27:26.534557 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a5e241-7146-489b-b32b-01218601b895-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a7a5e241-7146-489b-b32b-01218601b895" (UID: "a7a5e241-7146-489b-b32b-01218601b895"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 21:27:26.550863 master-0 kubenswrapper[31456]: I0312 21:27:26.549964 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a5e241-7146-489b-b32b-01218601b895-scripts" (OuterVolumeSpecName: "scripts") pod "a7a5e241-7146-489b-b32b-01218601b895" (UID: "a7a5e241-7146-489b-b32b-01218601b895"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:27:26.554057 master-0 kubenswrapper[31456]: I0312 21:27:26.552555 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a5e241-7146-489b-b32b-01218601b895-kube-api-access-fwj5m" (OuterVolumeSpecName: "kube-api-access-fwj5m") pod "a7a5e241-7146-489b-b32b-01218601b895" (UID: "a7a5e241-7146-489b-b32b-01218601b895"). InnerVolumeSpecName "kube-api-access-fwj5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:27:26.580196 master-0 kubenswrapper[31456]: I0312 21:27:26.580037 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a5e241-7146-489b-b32b-01218601b895-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7a5e241-7146-489b-b32b-01218601b895" (UID: "a7a5e241-7146-489b-b32b-01218601b895"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:27:26.615801 master-0 kubenswrapper[31456]: I0312 21:27:26.615732 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a5e241-7146-489b-b32b-01218601b895-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a7a5e241-7146-489b-b32b-01218601b895" (UID: "a7a5e241-7146-489b-b32b-01218601b895"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:27:26.619008 master-0 kubenswrapper[31456]: E0312 21:27:26.618944 31456 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf574917_39ab_4063_ab80_d42902865c20.slice/crio-53adb381adb1795edc66ca13f74de477894b5f316ade505f65ef47c5308c197a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb41a87ae_50a2_4490_891e_99a17d655797.slice/crio-1e23a059cf13a580190b2634ebebf8ccf104e1975300b1347346f6fc4a311d67.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb41a87ae_50a2_4490_891e_99a17d655797.slice/crio-conmon-1e23a059cf13a580190b2634ebebf8ccf104e1975300b1347346f6fc4a311d67.scope\": RecentStats: unable to find data in memory cache]" Mar 12 21:27:26.636234 master-0 kubenswrapper[31456]: I0312 21:27:26.636176 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwj5m\" (UniqueName: \"kubernetes.io/projected/a7a5e241-7146-489b-b32b-01218601b895-kube-api-access-fwj5m\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:26.636234 master-0 kubenswrapper[31456]: I0312 21:27:26.636217 31456 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7a5e241-7146-489b-b32b-01218601b895-logs\") on node \"master-0\" DevicePath 
\"\"" Mar 12 21:27:26.636234 master-0 kubenswrapper[31456]: I0312 21:27:26.636229 31456 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7a5e241-7146-489b-b32b-01218601b895-httpd-run\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:26.636234 master-0 kubenswrapper[31456]: I0312 21:27:26.636237 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7a5e241-7146-489b-b32b-01218601b895-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:26.636234 master-0 kubenswrapper[31456]: I0312 21:27:26.636245 31456 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7a5e241-7146-489b-b32b-01218601b895-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:26.636560 master-0 kubenswrapper[31456]: I0312 21:27:26.636253 31456 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7a5e241-7146-489b-b32b-01218601b895-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:26.662976 master-0 kubenswrapper[31456]: I0312 21:27:26.660310 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a5e241-7146-489b-b32b-01218601b895-config-data" (OuterVolumeSpecName: "config-data") pod "a7a5e241-7146-489b-b32b-01218601b895" (UID: "a7a5e241-7146-489b-b32b-01218601b895"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:27:26.738912 master-0 kubenswrapper[31456]: I0312 21:27:26.738849 31456 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7a5e241-7146-489b-b32b-01218601b895-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:27.188890 master-0 kubenswrapper[31456]: I0312 21:27:27.188798 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="205534d7-c857-4999-8352-af039951ce48" path="/var/lib/kubelet/pods/205534d7-c857-4999-8352-af039951ce48/volumes" Mar 12 21:27:27.273935 master-0 kubenswrapper[31456]: I0312 21:27:27.273884 31456 generic.go:334] "Generic (PLEG): container finished" podID="b41a87ae-50a2-4490-891e-99a17d655797" containerID="1e23a059cf13a580190b2634ebebf8ccf104e1975300b1347346f6fc4a311d67" exitCode=0 Mar 12 21:27:27.274162 master-0 kubenswrapper[31456]: I0312 21:27:27.273966 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56cf4b4989-2cwl5" event={"ID":"b41a87ae-50a2-4490-891e-99a17d655797","Type":"ContainerDied","Data":"1e23a059cf13a580190b2634ebebf8ccf104e1975300b1347346f6fc4a311d67"} Mar 12 21:27:27.290079 master-0 kubenswrapper[31456]: I0312 21:27:27.278874 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-30e4b-default-internal-api-0" event={"ID":"a7a5e241-7146-489b-b32b-01218601b895","Type":"ContainerDied","Data":"3022b89911ba17ac12ffeaeb3177cde1d07fde534d7321a8dab8ab76e7c56a59"} Mar 12 21:27:27.290079 master-0 kubenswrapper[31456]: I0312 21:27:27.278954 31456 scope.go:117] "RemoveContainer" containerID="10e54504f9f158d2ff034d14f847a2344e2841dae80b2aedb91058874103c1ad" Mar 12 21:27:27.290079 master-0 kubenswrapper[31456]: I0312 21:27:27.279111 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:27:27.290079 master-0 kubenswrapper[31456]: I0312 21:27:27.285211 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5fda-account-create-update-jj52w" event={"ID":"31856960-9d64-482a-b18d-3cb7ebc781d7","Type":"ContainerStarted","Data":"bc43cfb6b29c430500ac0433c7ea2b4e1009a6f4a158a8fcb0c14d149a55d20f"} Mar 12 21:27:27.293233 master-0 kubenswrapper[31456]: I0312 21:27:27.293180 31456 generic.go:334] "Generic (PLEG): container finished" podID="df574917-39ab-4063-ab80-d42902865c20" containerID="53adb381adb1795edc66ca13f74de477894b5f316ade505f65ef47c5308c197a" exitCode=0 Mar 12 21:27:27.293316 master-0 kubenswrapper[31456]: I0312 21:27:27.293257 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"df574917-39ab-4063-ab80-d42902865c20","Type":"ContainerDied","Data":"53adb381adb1795edc66ca13f74de477894b5f316ade505f65ef47c5308c197a"} Mar 12 21:27:27.310555 master-0 kubenswrapper[31456]: I0312 21:27:27.310083 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1f7d-account-create-update-ckqfv" event={"ID":"c7a2a5e6-44b0-4a11-bdd7-a5153a809b8b","Type":"ContainerStarted","Data":"8c299b5c5048b6f4186078b727f129704bb4bd63f98adf3e6d166328f4d4e11a"} Mar 12 21:27:27.460071 master-0 kubenswrapper[31456]: I0312 21:27:27.456190 31456 scope.go:117] "RemoveContainer" containerID="4d3b0e96c1344df5da8bdeecb9531de6467994887ed3979c2ec39258b249f08a" Mar 12 21:27:28.145748 master-0 kubenswrapper[31456]: I0312 21:27:28.145702 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a75d-account-create-update-nch4k" Mar 12 21:27:28.934303 master-0 kubenswrapper[31456]: I0312 21:27:28.931481 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmt79\" (UniqueName: \"kubernetes.io/projected/7cd45bf4-fd4f-4229-a6b9-d0433a367ee8-kube-api-access-vmt79\") pod \"7cd45bf4-fd4f-4229-a6b9-d0433a367ee8\" (UID: \"7cd45bf4-fd4f-4229-a6b9-d0433a367ee8\") " Mar 12 21:27:28.934303 master-0 kubenswrapper[31456]: I0312 21:27:28.932355 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cd45bf4-fd4f-4229-a6b9-d0433a367ee8-operator-scripts\") pod \"7cd45bf4-fd4f-4229-a6b9-d0433a367ee8\" (UID: \"7cd45bf4-fd4f-4229-a6b9-d0433a367ee8\") " Mar 12 21:27:28.934303 master-0 kubenswrapper[31456]: I0312 21:27:28.933395 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cd45bf4-fd4f-4229-a6b9-d0433a367ee8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7cd45bf4-fd4f-4229-a6b9-d0433a367ee8" (UID: "7cd45bf4-fd4f-4229-a6b9-d0433a367ee8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:27:28.954267 master-0 kubenswrapper[31456]: I0312 21:27:28.952095 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"df574917-39ab-4063-ab80-d42902865c20","Type":"ContainerDied","Data":"27e92e07ee7fdba22f3f4f1f726b6d585c26a429361db4d776557c33413d0a6b"} Mar 12 21:27:28.954267 master-0 kubenswrapper[31456]: I0312 21:27:28.952145 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27e92e07ee7fdba22f3f4f1f726b6d585c26a429361db4d776557c33413d0a6b" Mar 12 21:27:28.964027 master-0 kubenswrapper[31456]: I0312 21:27:28.963924 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56cf4b4989-2cwl5" event={"ID":"b41a87ae-50a2-4490-891e-99a17d655797","Type":"ContainerStarted","Data":"50e6ac7bcddf291caecce1ffc99f56d8309a34ee4b8164f9ec728106f5864497"} Mar 12 21:27:28.964293 master-0 kubenswrapper[31456]: I0312 21:27:28.964216 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56cf4b4989-2cwl5" Mar 12 21:27:28.968314 master-0 kubenswrapper[31456]: I0312 21:27:28.967513 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a75d-account-create-update-nch4k" event={"ID":"7cd45bf4-fd4f-4229-a6b9-d0433a367ee8","Type":"ContainerDied","Data":"d6d114c57c698150a888fcc48e2dd5eea822a973b9fb81b601e35320e220e51c"} Mar 12 21:27:28.968314 master-0 kubenswrapper[31456]: I0312 21:27:28.967569 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6d114c57c698150a888fcc48e2dd5eea822a973b9fb81b601e35320e220e51c" Mar 12 21:27:28.968314 master-0 kubenswrapper[31456]: I0312 21:27:28.967627 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a75d-account-create-update-nch4k" Mar 12 21:27:28.968314 master-0 kubenswrapper[31456]: I0312 21:27:28.967828 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cd45bf4-fd4f-4229-a6b9-d0433a367ee8-kube-api-access-vmt79" (OuterVolumeSpecName: "kube-api-access-vmt79") pod "7cd45bf4-fd4f-4229-a6b9-d0433a367ee8" (UID: "7cd45bf4-fd4f-4229-a6b9-d0433a367ee8"). InnerVolumeSpecName "kube-api-access-vmt79". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:27:28.974834 master-0 kubenswrapper[31456]: I0312 21:27:28.973960 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-68659c9b47-m44wq" Mar 12 21:27:28.997620 master-0 kubenswrapper[31456]: I0312 21:27:28.997370 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56cf4b4989-2cwl5" podStartSLOduration=10.997351742 podStartE2EDuration="10.997351742s" podCreationTimestamp="2026-03-12 21:27:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:27:28.990384883 +0000 UTC m=+1110.064990211" watchObservedRunningTime="2026-03-12 21:27:28.997351742 +0000 UTC m=+1110.071957060" Mar 12 21:27:29.035172 master-0 kubenswrapper[31456]: I0312 21:27:29.035073 31456 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cd45bf4-fd4f-4229-a6b9-d0433a367ee8-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:29.035172 master-0 kubenswrapper[31456]: I0312 21:27:29.035113 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmt79\" (UniqueName: \"kubernetes.io/projected/7cd45bf4-fd4f-4229-a6b9-d0433a367ee8-kube-api-access-vmt79\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:29.058889 master-0 kubenswrapper[31456]: 
I0312 21:27:29.052362 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Mar 12 21:27:29.136828 master-0 kubenswrapper[31456]: I0312 21:27:29.135656 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnd6k\" (UniqueName: \"kubernetes.io/projected/df574917-39ab-4063-ab80-d42902865c20-kube-api-access-wnd6k\") pod \"df574917-39ab-4063-ab80-d42902865c20\" (UID: \"df574917-39ab-4063-ab80-d42902865c20\") " Mar 12 21:27:29.136828 master-0 kubenswrapper[31456]: I0312 21:27:29.135851 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df574917-39ab-4063-ab80-d42902865c20-scripts\") pod \"df574917-39ab-4063-ab80-d42902865c20\" (UID: \"df574917-39ab-4063-ab80-d42902865c20\") " Mar 12 21:27:29.136828 master-0 kubenswrapper[31456]: I0312 21:27:29.135902 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/df574917-39ab-4063-ab80-d42902865c20-var-lib-ironic\") pod \"df574917-39ab-4063-ab80-d42902865c20\" (UID: \"df574917-39ab-4063-ab80-d42902865c20\") " Mar 12 21:27:29.136828 master-0 kubenswrapper[31456]: I0312 21:27:29.135939 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/df574917-39ab-4063-ab80-d42902865c20-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"df574917-39ab-4063-ab80-d42902865c20\" (UID: \"df574917-39ab-4063-ab80-d42902865c20\") " Mar 12 21:27:29.136828 master-0 kubenswrapper[31456]: I0312 21:27:29.136024 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df574917-39ab-4063-ab80-d42902865c20-combined-ca-bundle\") pod \"df574917-39ab-4063-ab80-d42902865c20\" (UID: 
\"df574917-39ab-4063-ab80-d42902865c20\") " Mar 12 21:27:29.136828 master-0 kubenswrapper[31456]: I0312 21:27:29.136090 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/df574917-39ab-4063-ab80-d42902865c20-etc-podinfo\") pod \"df574917-39ab-4063-ab80-d42902865c20\" (UID: \"df574917-39ab-4063-ab80-d42902865c20\") " Mar 12 21:27:29.136828 master-0 kubenswrapper[31456]: I0312 21:27:29.136152 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/df574917-39ab-4063-ab80-d42902865c20-config\") pod \"df574917-39ab-4063-ab80-d42902865c20\" (UID: \"df574917-39ab-4063-ab80-d42902865c20\") " Mar 12 21:27:29.136828 master-0 kubenswrapper[31456]: I0312 21:27:29.136369 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df574917-39ab-4063-ab80-d42902865c20-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "df574917-39ab-4063-ab80-d42902865c20" (UID: "df574917-39ab-4063-ab80-d42902865c20"). InnerVolumeSpecName "var-lib-ironic". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 21:27:29.136828 master-0 kubenswrapper[31456]: I0312 21:27:29.136381 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df574917-39ab-4063-ab80-d42902865c20-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "df574917-39ab-4063-ab80-d42902865c20" (UID: "df574917-39ab-4063-ab80-d42902865c20"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 21:27:29.137355 master-0 kubenswrapper[31456]: I0312 21:27:29.136903 31456 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/df574917-39ab-4063-ab80-d42902865c20-var-lib-ironic\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:29.137355 master-0 kubenswrapper[31456]: I0312 21:27:29.136921 31456 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/df574917-39ab-4063-ab80-d42902865c20-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:29.139823 master-0 kubenswrapper[31456]: I0312 21:27:29.139356 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/df574917-39ab-4063-ab80-d42902865c20-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "df574917-39ab-4063-ab80-d42902865c20" (UID: "df574917-39ab-4063-ab80-d42902865c20"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 12 21:27:29.142831 master-0 kubenswrapper[31456]: I0312 21:27:29.141838 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df574917-39ab-4063-ab80-d42902865c20-scripts" (OuterVolumeSpecName: "scripts") pod "df574917-39ab-4063-ab80-d42902865c20" (UID: "df574917-39ab-4063-ab80-d42902865c20"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:27:29.142831 master-0 kubenswrapper[31456]: I0312 21:27:29.142029 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df574917-39ab-4063-ab80-d42902865c20-kube-api-access-wnd6k" (OuterVolumeSpecName: "kube-api-access-wnd6k") pod "df574917-39ab-4063-ab80-d42902865c20" (UID: "df574917-39ab-4063-ab80-d42902865c20"). InnerVolumeSpecName "kube-api-access-wnd6k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:27:29.143989 master-0 kubenswrapper[31456]: I0312 21:27:29.143958 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df574917-39ab-4063-ab80-d42902865c20-config" (OuterVolumeSpecName: "config") pod "df574917-39ab-4063-ab80-d42902865c20" (UID: "df574917-39ab-4063-ab80-d42902865c20"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:27:29.195670 master-0 kubenswrapper[31456]: I0312 21:27:29.194134 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df574917-39ab-4063-ab80-d42902865c20-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df574917-39ab-4063-ab80-d42902865c20" (UID: "df574917-39ab-4063-ab80-d42902865c20"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:27:29.240868 master-0 kubenswrapper[31456]: I0312 21:27:29.240610 31456 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df574917-39ab-4063-ab80-d42902865c20-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:29.240868 master-0 kubenswrapper[31456]: I0312 21:27:29.240667 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df574917-39ab-4063-ab80-d42902865c20-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:29.240868 master-0 kubenswrapper[31456]: I0312 21:27:29.240684 31456 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/df574917-39ab-4063-ab80-d42902865c20-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:29.240868 master-0 kubenswrapper[31456]: I0312 21:27:29.240698 31456 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/df574917-39ab-4063-ab80-d42902865c20-config\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:29.240868 master-0 kubenswrapper[31456]: I0312 21:27:29.240712 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnd6k\" (UniqueName: \"kubernetes.io/projected/df574917-39ab-4063-ab80-d42902865c20-kube-api-access-wnd6k\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:29.344481 master-0 kubenswrapper[31456]: I0312 21:27:29.344423 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^c4d34f1a-cd35-4884-8afc-4ae2cb2c6555" (OuterVolumeSpecName: "glance") pod "35a5b367-8419-4864-9317-7b78c50cad2d" (UID: "35a5b367-8419-4864-9317-7b78c50cad2d"). InnerVolumeSpecName "pvc-771d56ec-6f7c-4891-8052-556577fed26a". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 21:27:29.383461 master-0 kubenswrapper[31456]: I0312 21:27:29.383406 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^4b842188-a8b2-4def-ad0e-7cbb4053b9e9" (OuterVolumeSpecName: "glance") pod "a7a5e241-7146-489b-b32b-01218601b895" (UID: "a7a5e241-7146-489b-b32b-01218601b895"). InnerVolumeSpecName "pvc-46263d7f-fe44-41c1-8ff4-0f4db04f556e". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 21:27:29.449580 master-0 kubenswrapper[31456]: I0312 21:27:29.449290 31456 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-771d56ec-6f7c-4891-8052-556577fed26a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c4d34f1a-cd35-4884-8afc-4ae2cb2c6555\") on node \"master-0\" " Mar 12 21:27:29.449580 master-0 kubenswrapper[31456]: I0312 21:27:29.449366 31456 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-46263d7f-fe44-41c1-8ff4-0f4db04f556e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4b842188-a8b2-4def-ad0e-7cbb4053b9e9\") on node \"master-0\" " Mar 12 21:27:29.502063 master-0 kubenswrapper[31456]: I0312 21:27:29.502020 31456 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 12 21:27:29.502255 master-0 kubenswrapper[31456]: I0312 21:27:29.502181 31456 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-46263d7f-fe44-41c1-8ff4-0f4db04f556e" (UniqueName: "kubernetes.io/csi/topolvm.io^4b842188-a8b2-4def-ad0e-7cbb4053b9e9") on node "master-0" Mar 12 21:27:29.512064 master-0 kubenswrapper[31456]: I0312 21:27:29.512014 31456 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 12 21:27:29.512258 master-0 kubenswrapper[31456]: I0312 21:27:29.512233 31456 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-771d56ec-6f7c-4891-8052-556577fed26a" (UniqueName: "kubernetes.io/csi/topolvm.io^c4d34f1a-cd35-4884-8afc-4ae2cb2c6555") on node "master-0" Mar 12 21:27:29.562024 master-0 kubenswrapper[31456]: I0312 21:27:29.561694 31456 reconciler_common.go:293] "Volume detached for volume \"pvc-771d56ec-6f7c-4891-8052-556577fed26a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c4d34f1a-cd35-4884-8afc-4ae2cb2c6555\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:29.562024 master-0 kubenswrapper[31456]: I0312 21:27:29.561760 31456 reconciler_common.go:293] "Volume detached for volume \"pvc-46263d7f-fe44-41c1-8ff4-0f4db04f556e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4b842188-a8b2-4def-ad0e-7cbb4053b9e9\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:29.811015 master-0 kubenswrapper[31456]: I0312 21:27:29.810937 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-30e4b-default-internal-api-0"] Mar 12 21:27:29.824062 master-0 kubenswrapper[31456]: I0312 21:27:29.823985 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-30e4b-default-internal-api-0"] Mar 12 21:27:29.838549 master-0 kubenswrapper[31456]: I0312 21:27:29.838475 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-30e4b-default-external-api-0"] Mar 12 21:27:29.853498 master-0 kubenswrapper[31456]: I0312 21:27:29.853403 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-30e4b-default-external-api-0"] Mar 12 21:27:29.868228 master-0 kubenswrapper[31456]: I0312 21:27:29.868174 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-30e4b-default-external-api-0"] Mar 12 21:27:29.868662 master-0 kubenswrapper[31456]: E0312 21:27:29.868642 31456 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="df574917-39ab-4063-ab80-d42902865c20" containerName="ironic-python-agent-init" Mar 12 21:27:29.868662 master-0 kubenswrapper[31456]: I0312 21:27:29.868659 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="df574917-39ab-4063-ab80-d42902865c20" containerName="ironic-python-agent-init" Mar 12 21:27:29.868724 master-0 kubenswrapper[31456]: E0312 21:27:29.868688 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7a5e241-7146-489b-b32b-01218601b895" containerName="glance-httpd" Mar 12 21:27:29.868724 master-0 kubenswrapper[31456]: I0312 21:27:29.868694 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7a5e241-7146-489b-b32b-01218601b895" containerName="glance-httpd" Mar 12 21:27:29.868724 master-0 kubenswrapper[31456]: E0312 21:27:29.868707 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7a5e241-7146-489b-b32b-01218601b895" containerName="glance-log" Mar 12 21:27:29.868724 master-0 kubenswrapper[31456]: I0312 21:27:29.868714 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7a5e241-7146-489b-b32b-01218601b895" containerName="glance-log" Mar 12 21:27:29.868724 master-0 kubenswrapper[31456]: E0312 21:27:29.868724 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="205534d7-c857-4999-8352-af039951ce48" containerName="placement-log" Mar 12 21:27:29.868926 master-0 kubenswrapper[31456]: I0312 21:27:29.868730 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="205534d7-c857-4999-8352-af039951ce48" containerName="placement-log" Mar 12 21:27:29.868926 master-0 kubenswrapper[31456]: E0312 21:27:29.868742 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a5b367-8419-4864-9317-7b78c50cad2d" containerName="glance-log" Mar 12 21:27:29.868926 master-0 kubenswrapper[31456]: I0312 21:27:29.868748 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="35a5b367-8419-4864-9317-7b78c50cad2d" containerName="glance-log" Mar 12 21:27:29.868926 
master-0 kubenswrapper[31456]: E0312 21:27:29.868767 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0" containerName="neutron-httpd" Mar 12 21:27:29.868926 master-0 kubenswrapper[31456]: I0312 21:27:29.868773 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0" containerName="neutron-httpd" Mar 12 21:27:29.868926 master-0 kubenswrapper[31456]: E0312 21:27:29.868785 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cd45bf4-fd4f-4229-a6b9-d0433a367ee8" containerName="mariadb-account-create-update" Mar 12 21:27:29.868926 master-0 kubenswrapper[31456]: I0312 21:27:29.868793 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cd45bf4-fd4f-4229-a6b9-d0433a367ee8" containerName="mariadb-account-create-update" Mar 12 21:27:29.868926 master-0 kubenswrapper[31456]: E0312 21:27:29.868827 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="205534d7-c857-4999-8352-af039951ce48" containerName="placement-api" Mar 12 21:27:29.868926 master-0 kubenswrapper[31456]: I0312 21:27:29.868834 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="205534d7-c857-4999-8352-af039951ce48" containerName="placement-api" Mar 12 21:27:29.868926 master-0 kubenswrapper[31456]: E0312 21:27:29.868844 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0" containerName="neutron-api" Mar 12 21:27:29.868926 master-0 kubenswrapper[31456]: I0312 21:27:29.868851 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0" containerName="neutron-api" Mar 12 21:27:29.868926 master-0 kubenswrapper[31456]: E0312 21:27:29.868865 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a5b367-8419-4864-9317-7b78c50cad2d" containerName="glance-httpd" Mar 12 21:27:29.868926 master-0 kubenswrapper[31456]: I0312 21:27:29.868870 31456 
state_mem.go:107] "Deleted CPUSet assignment" podUID="35a5b367-8419-4864-9317-7b78c50cad2d" containerName="glance-httpd" Mar 12 21:27:29.869286 master-0 kubenswrapper[31456]: I0312 21:27:29.869073 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="205534d7-c857-4999-8352-af039951ce48" containerName="placement-log" Mar 12 21:27:29.869286 master-0 kubenswrapper[31456]: I0312 21:27:29.869100 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="35a5b367-8419-4864-9317-7b78c50cad2d" containerName="glance-httpd" Mar 12 21:27:29.869286 master-0 kubenswrapper[31456]: I0312 21:27:29.869109 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="205534d7-c857-4999-8352-af039951ce48" containerName="placement-api" Mar 12 21:27:29.869286 master-0 kubenswrapper[31456]: I0312 21:27:29.869132 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cd45bf4-fd4f-4229-a6b9-d0433a367ee8" containerName="mariadb-account-create-update" Mar 12 21:27:29.869286 master-0 kubenswrapper[31456]: I0312 21:27:29.869139 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="35a5b367-8419-4864-9317-7b78c50cad2d" containerName="glance-log" Mar 12 21:27:29.869286 master-0 kubenswrapper[31456]: I0312 21:27:29.869150 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7a5e241-7146-489b-b32b-01218601b895" containerName="glance-log" Mar 12 21:27:29.869286 master-0 kubenswrapper[31456]: I0312 21:27:29.869168 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0" containerName="neutron-api" Mar 12 21:27:29.869286 master-0 kubenswrapper[31456]: I0312 21:27:29.869182 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ef9bfe4-ea23-4e3f-84a3-670b321fbeb0" containerName="neutron-httpd" Mar 12 21:27:29.869286 master-0 kubenswrapper[31456]: I0312 21:27:29.869190 31456 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a7a5e241-7146-489b-b32b-01218601b895" containerName="glance-httpd" Mar 12 21:27:29.869286 master-0 kubenswrapper[31456]: I0312 21:27:29.869208 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="df574917-39ab-4063-ab80-d42902865c20" containerName="ironic-python-agent-init" Mar 12 21:27:29.870408 master-0 kubenswrapper[31456]: I0312 21:27:29.870383 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:27:29.872287 master-0 kubenswrapper[31456]: I0312 21:27:29.872246 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 12 21:27:29.874276 master-0 kubenswrapper[31456]: I0312 21:27:29.874237 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-30e4b-default-external-config-data" Mar 12 21:27:29.874411 master-0 kubenswrapper[31456]: I0312 21:27:29.874391 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 12 21:27:29.885827 master-0 kubenswrapper[31456]: I0312 21:27:29.885763 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-30e4b-default-internal-api-0"] Mar 12 21:27:29.887874 master-0 kubenswrapper[31456]: I0312 21:27:29.887842 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:27:29.889476 master-0 kubenswrapper[31456]: I0312 21:27:29.889427 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-30e4b-default-internal-config-data" Mar 12 21:27:29.889756 master-0 kubenswrapper[31456]: I0312 21:27:29.889723 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 12 21:27:29.927730 master-0 kubenswrapper[31456]: I0312 21:27:29.927655 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-30e4b-default-external-api-0"] Mar 12 21:27:29.994095 master-0 kubenswrapper[31456]: I0312 21:27:29.989873 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-30e4b-default-internal-api-0"] Mar 12 21:27:29.994095 master-0 kubenswrapper[31456]: I0312 21:27:29.992101 31456 generic.go:334] "Generic (PLEG): container finished" podID="c91b737e-1dc0-4977-8cc3-f36cde0b3031" containerID="22f864337ca9a32383fe2a970ed98c5e27b4b369102cc1c4802a58ec89716303" exitCode=0 Mar 12 21:27:29.994095 master-0 kubenswrapper[31456]: I0312 21:27:29.992163 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rhn2f" event={"ID":"c91b737e-1dc0-4977-8cc3-f36cde0b3031","Type":"ContainerDied","Data":"22f864337ca9a32383fe2a970ed98c5e27b4b369102cc1c4802a58ec89716303"} Mar 12 21:27:30.004209 master-0 kubenswrapper[31456]: I0312 21:27:30.000239 31456 generic.go:334] "Generic (PLEG): container finished" podID="56b88bd7-c930-40cf-ab94-806f32d82a96" containerID="5e48cb2baa539a556bfebf2629096e7ff9a705ef46c5b822ac6883b02a8b113b" exitCode=0 Mar 12 21:27:30.004209 master-0 kubenswrapper[31456]: I0312 21:27:30.000307 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-zgqpq" 
event={"ID":"56b88bd7-c930-40cf-ab94-806f32d82a96","Type":"ContainerDied","Data":"5e48cb2baa539a556bfebf2629096e7ff9a705ef46c5b822ac6883b02a8b113b"} Mar 12 21:27:30.015166 master-0 kubenswrapper[31456]: I0312 21:27:30.011749 31456 generic.go:334] "Generic (PLEG): container finished" podID="31856960-9d64-482a-b18d-3cb7ebc781d7" containerID="bc43cfb6b29c430500ac0433c7ea2b4e1009a6f4a158a8fcb0c14d149a55d20f" exitCode=0 Mar 12 21:27:30.015166 master-0 kubenswrapper[31456]: I0312 21:27:30.011800 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5fda-account-create-update-jj52w" event={"ID":"31856960-9d64-482a-b18d-3cb7ebc781d7","Type":"ContainerDied","Data":"bc43cfb6b29c430500ac0433c7ea2b4e1009a6f4a158a8fcb0c14d149a55d20f"} Mar 12 21:27:30.038793 master-0 kubenswrapper[31456]: I0312 21:27:30.036139 31456 generic.go:334] "Generic (PLEG): container finished" podID="c7a2a5e6-44b0-4a11-bdd7-a5153a809b8b" containerID="8c299b5c5048b6f4186078b727f129704bb4bd63f98adf3e6d166328f4d4e11a" exitCode=0 Mar 12 21:27:30.038793 master-0 kubenswrapper[31456]: I0312 21:27:30.036210 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1f7d-account-create-update-ckqfv" event={"ID":"c7a2a5e6-44b0-4a11-bdd7-a5153a809b8b","Type":"ContainerDied","Data":"8c299b5c5048b6f4186078b727f129704bb4bd63f98adf3e6d166328f4d4e11a"} Mar 12 21:27:30.051835 master-0 kubenswrapper[31456]: I0312 21:27:30.049184 31456 generic.go:334] "Generic (PLEG): container finished" podID="93110548-5710-4149-bd72-8e42693c948e" containerID="50c9f3835a6f0727444b947fd7f2674bfa02781d2dad8b09e8ecf8c77c1e0daf" exitCode=0 Mar 12 21:27:30.051835 master-0 kubenswrapper[31456]: I0312 21:27:30.049256 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"93110548-5710-4149-bd72-8e42693c948e","Type":"ContainerDied","Data":"50c9f3835a6f0727444b947fd7f2674bfa02781d2dad8b09e8ecf8c77c1e0daf"} Mar 12 21:27:30.051835 master-0 
kubenswrapper[31456]: I0312 21:27:30.051024 31456 generic.go:334] "Generic (PLEG): container finished" podID="1b7fef8e-4472-45c9-9824-4a897ff1b1e3" containerID="d885db32ad07e56acb0300a0a07debb39d1a51f333ac6eb255e87b99455cda09" exitCode=0 Mar 12 21:27:30.051835 master-0 kubenswrapper[31456]: I0312 21:27:30.051225 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wc97w" event={"ID":"1b7fef8e-4472-45c9-9824-4a897ff1b1e3","Type":"ContainerDied","Data":"d885db32ad07e56acb0300a0a07debb39d1a51f333ac6eb255e87b99455cda09"} Mar 12 21:27:30.051835 master-0 kubenswrapper[31456]: I0312 21:27:30.051472 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Mar 12 21:27:30.081829 master-0 kubenswrapper[31456]: I0312 21:27:30.081095 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52397d12-7374-47b1-aab8-8e25fa33775b-logs\") pod \"glance-30e4b-default-external-api-0\" (UID: \"52397d12-7374-47b1-aab8-8e25fa33775b\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:27:30.081829 master-0 kubenswrapper[31456]: I0312 21:27:30.081162 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/52397d12-7374-47b1-aab8-8e25fa33775b-public-tls-certs\") pod \"glance-30e4b-default-external-api-0\" (UID: \"52397d12-7374-47b1-aab8-8e25fa33775b\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:27:30.081829 master-0 kubenswrapper[31456]: I0312 21:27:30.081186 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf5zm\" (UniqueName: \"kubernetes.io/projected/0740fcb3-98cc-49d7-b0c9-3c445c35a846-kube-api-access-jf5zm\") pod \"glance-30e4b-default-internal-api-0\" (UID: 
\"0740fcb3-98cc-49d7-b0c9-3c445c35a846\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:27:30.081829 master-0 kubenswrapper[31456]: I0312 21:27:30.081294 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0740fcb3-98cc-49d7-b0c9-3c445c35a846-combined-ca-bundle\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"0740fcb3-98cc-49d7-b0c9-3c445c35a846\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:27:30.081829 master-0 kubenswrapper[31456]: I0312 21:27:30.081341 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-46263d7f-fe44-41c1-8ff4-0f4db04f556e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4b842188-a8b2-4def-ad0e-7cbb4053b9e9\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"0740fcb3-98cc-49d7-b0c9-3c445c35a846\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:27:30.081829 master-0 kubenswrapper[31456]: I0312 21:27:30.081516 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88qn9\" (UniqueName: \"kubernetes.io/projected/52397d12-7374-47b1-aab8-8e25fa33775b-kube-api-access-88qn9\") pod \"glance-30e4b-default-external-api-0\" (UID: \"52397d12-7374-47b1-aab8-8e25fa33775b\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:27:30.081829 master-0 kubenswrapper[31456]: I0312 21:27:30.081559 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0740fcb3-98cc-49d7-b0c9-3c445c35a846-logs\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"0740fcb3-98cc-49d7-b0c9-3c445c35a846\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:27:30.081829 master-0 kubenswrapper[31456]: I0312 21:27:30.081769 31456 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0740fcb3-98cc-49d7-b0c9-3c445c35a846-internal-tls-certs\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"0740fcb3-98cc-49d7-b0c9-3c445c35a846\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:27:30.081829 master-0 kubenswrapper[31456]: I0312 21:27:30.081785 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52397d12-7374-47b1-aab8-8e25fa33775b-scripts\") pod \"glance-30e4b-default-external-api-0\" (UID: \"52397d12-7374-47b1-aab8-8e25fa33775b\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:27:30.081829 master-0 kubenswrapper[31456]: I0312 21:27:30.081801 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52397d12-7374-47b1-aab8-8e25fa33775b-combined-ca-bundle\") pod \"glance-30e4b-default-external-api-0\" (UID: \"52397d12-7374-47b1-aab8-8e25fa33775b\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:27:30.082337 master-0 kubenswrapper[31456]: I0312 21:27:30.081867 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52397d12-7374-47b1-aab8-8e25fa33775b-config-data\") pod \"glance-30e4b-default-external-api-0\" (UID: \"52397d12-7374-47b1-aab8-8e25fa33775b\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:27:30.082337 master-0 kubenswrapper[31456]: I0312 21:27:30.081890 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52397d12-7374-47b1-aab8-8e25fa33775b-httpd-run\") pod \"glance-30e4b-default-external-api-0\" (UID: \"52397d12-7374-47b1-aab8-8e25fa33775b\") " 
pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:27:30.082337 master-0 kubenswrapper[31456]: I0312 21:27:30.081920 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0740fcb3-98cc-49d7-b0c9-3c445c35a846-httpd-run\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"0740fcb3-98cc-49d7-b0c9-3c445c35a846\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:27:30.082337 master-0 kubenswrapper[31456]: I0312 21:27:30.081946 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-771d56ec-6f7c-4891-8052-556577fed26a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c4d34f1a-cd35-4884-8afc-4ae2cb2c6555\") pod \"glance-30e4b-default-external-api-0\" (UID: \"52397d12-7374-47b1-aab8-8e25fa33775b\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:27:30.082337 master-0 kubenswrapper[31456]: I0312 21:27:30.082053 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0740fcb3-98cc-49d7-b0c9-3c445c35a846-config-data\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"0740fcb3-98cc-49d7-b0c9-3c445c35a846\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:27:30.082337 master-0 kubenswrapper[31456]: I0312 21:27:30.082135 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0740fcb3-98cc-49d7-b0c9-3c445c35a846-scripts\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"0740fcb3-98cc-49d7-b0c9-3c445c35a846\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:27:30.438583 master-0 kubenswrapper[31456]: I0312 21:27:30.438490 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/0740fcb3-98cc-49d7-b0c9-3c445c35a846-scripts\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"0740fcb3-98cc-49d7-b0c9-3c445c35a846\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:27:30.438583 master-0 kubenswrapper[31456]: I0312 21:27:30.438571 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52397d12-7374-47b1-aab8-8e25fa33775b-logs\") pod \"glance-30e4b-default-external-api-0\" (UID: \"52397d12-7374-47b1-aab8-8e25fa33775b\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:27:30.438846 master-0 kubenswrapper[31456]: I0312 21:27:30.438606 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/52397d12-7374-47b1-aab8-8e25fa33775b-public-tls-certs\") pod \"glance-30e4b-default-external-api-0\" (UID: \"52397d12-7374-47b1-aab8-8e25fa33775b\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:27:30.438846 master-0 kubenswrapper[31456]: I0312 21:27:30.438626 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf5zm\" (UniqueName: \"kubernetes.io/projected/0740fcb3-98cc-49d7-b0c9-3c445c35a846-kube-api-access-jf5zm\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"0740fcb3-98cc-49d7-b0c9-3c445c35a846\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:27:30.438846 master-0 kubenswrapper[31456]: I0312 21:27:30.438651 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0740fcb3-98cc-49d7-b0c9-3c445c35a846-combined-ca-bundle\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"0740fcb3-98cc-49d7-b0c9-3c445c35a846\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:27:30.438846 master-0 kubenswrapper[31456]: I0312 21:27:30.438672 31456 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-46263d7f-fe44-41c1-8ff4-0f4db04f556e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4b842188-a8b2-4def-ad0e-7cbb4053b9e9\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"0740fcb3-98cc-49d7-b0c9-3c445c35a846\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:27:30.438846 master-0 kubenswrapper[31456]: I0312 21:27:30.438754 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88qn9\" (UniqueName: \"kubernetes.io/projected/52397d12-7374-47b1-aab8-8e25fa33775b-kube-api-access-88qn9\") pod \"glance-30e4b-default-external-api-0\" (UID: \"52397d12-7374-47b1-aab8-8e25fa33775b\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:27:30.438846 master-0 kubenswrapper[31456]: I0312 21:27:30.438772 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0740fcb3-98cc-49d7-b0c9-3c445c35a846-logs\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"0740fcb3-98cc-49d7-b0c9-3c445c35a846\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:27:30.439037 master-0 kubenswrapper[31456]: I0312 21:27:30.438885 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52397d12-7374-47b1-aab8-8e25fa33775b-combined-ca-bundle\") pod \"glance-30e4b-default-external-api-0\" (UID: \"52397d12-7374-47b1-aab8-8e25fa33775b\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:27:30.439037 master-0 kubenswrapper[31456]: I0312 21:27:30.438911 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0740fcb3-98cc-49d7-b0c9-3c445c35a846-internal-tls-certs\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"0740fcb3-98cc-49d7-b0c9-3c445c35a846\") " pod="openstack/glance-30e4b-default-internal-api-0" 
Mar 12 21:27:30.439037 master-0 kubenswrapper[31456]: I0312 21:27:30.438926 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52397d12-7374-47b1-aab8-8e25fa33775b-scripts\") pod \"glance-30e4b-default-external-api-0\" (UID: \"52397d12-7374-47b1-aab8-8e25fa33775b\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:27:30.439037 master-0 kubenswrapper[31456]: I0312 21:27:30.438960 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52397d12-7374-47b1-aab8-8e25fa33775b-config-data\") pod \"glance-30e4b-default-external-api-0\" (UID: \"52397d12-7374-47b1-aab8-8e25fa33775b\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:27:30.439037 master-0 kubenswrapper[31456]: I0312 21:27:30.438976 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52397d12-7374-47b1-aab8-8e25fa33775b-httpd-run\") pod \"glance-30e4b-default-external-api-0\" (UID: \"52397d12-7374-47b1-aab8-8e25fa33775b\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:27:30.439037 master-0 kubenswrapper[31456]: I0312 21:27:30.438991 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0740fcb3-98cc-49d7-b0c9-3c445c35a846-httpd-run\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"0740fcb3-98cc-49d7-b0c9-3c445c35a846\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:27:30.439037 master-0 kubenswrapper[31456]: I0312 21:27:30.439009 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-771d56ec-6f7c-4891-8052-556577fed26a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c4d34f1a-cd35-4884-8afc-4ae2cb2c6555\") pod \"glance-30e4b-default-external-api-0\" (UID: 
\"52397d12-7374-47b1-aab8-8e25fa33775b\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:27:30.439240 master-0 kubenswrapper[31456]: I0312 21:27:30.439087 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0740fcb3-98cc-49d7-b0c9-3c445c35a846-config-data\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"0740fcb3-98cc-49d7-b0c9-3c445c35a846\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:27:30.439873 master-0 kubenswrapper[31456]: I0312 21:27:30.439683 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0740fcb3-98cc-49d7-b0c9-3c445c35a846-logs\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"0740fcb3-98cc-49d7-b0c9-3c445c35a846\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:27:30.449009 master-0 kubenswrapper[31456]: I0312 21:27:30.448097 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52397d12-7374-47b1-aab8-8e25fa33775b-httpd-run\") pod \"glance-30e4b-default-external-api-0\" (UID: \"52397d12-7374-47b1-aab8-8e25fa33775b\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:27:30.449212 master-0 kubenswrapper[31456]: I0312 21:27:30.449073 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0740fcb3-98cc-49d7-b0c9-3c445c35a846-httpd-run\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"0740fcb3-98cc-49d7-b0c9-3c445c35a846\") " pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:27:30.454658 master-0 kubenswrapper[31456]: I0312 21:27:30.454556 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52397d12-7374-47b1-aab8-8e25fa33775b-logs\") pod \"glance-30e4b-default-external-api-0\" (UID: 
\"52397d12-7374-47b1-aab8-8e25fa33775b\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:27:30.465532 master-0 kubenswrapper[31456]: I0312 21:27:30.462212 31456 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 12 21:27:30.465532 master-0 kubenswrapper[31456]: I0312 21:27:30.462259 31456 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-46263d7f-fe44-41c1-8ff4-0f4db04f556e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4b842188-a8b2-4def-ad0e-7cbb4053b9e9\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"0740fcb3-98cc-49d7-b0c9-3c445c35a846\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/3b47ef71cabc18af87317356c30c781b24b16858528acb95d991bfdc6fcfef3f/globalmount\"" pod="openstack/glance-30e4b-default-internal-api-0" Mar 12 21:27:30.484757 master-0 kubenswrapper[31456]: I0312 21:27:30.468800 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52397d12-7374-47b1-aab8-8e25fa33775b-combined-ca-bundle\") pod \"glance-30e4b-default-external-api-0\" (UID: \"52397d12-7374-47b1-aab8-8e25fa33775b\") " pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:27:30.484757 master-0 kubenswrapper[31456]: I0312 21:27:30.471299 31456 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 21:27:30.484757 master-0 kubenswrapper[31456]: I0312 21:27:30.471363 31456 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-771d56ec-6f7c-4891-8052-556577fed26a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c4d34f1a-cd35-4884-8afc-4ae2cb2c6555\") pod \"glance-30e4b-default-external-api-0\" (UID: \"52397d12-7374-47b1-aab8-8e25fa33775b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/43685901e29eb1cf6142e4c7db2bf2a74bc59e8789b390024af9a8010a27963c/globalmount\"" pod="openstack/glance-30e4b-default-external-api-0"
Mar 12 21:27:30.492392 master-0 kubenswrapper[31456]: I0312 21:27:30.491895 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52397d12-7374-47b1-aab8-8e25fa33775b-scripts\") pod \"glance-30e4b-default-external-api-0\" (UID: \"52397d12-7374-47b1-aab8-8e25fa33775b\") " pod="openstack/glance-30e4b-default-external-api-0"
Mar 12 21:27:30.494109 master-0 kubenswrapper[31456]: I0312 21:27:30.493704 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0740fcb3-98cc-49d7-b0c9-3c445c35a846-combined-ca-bundle\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"0740fcb3-98cc-49d7-b0c9-3c445c35a846\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:27:30.498159 master-0 kubenswrapper[31456]: I0312 21:27:30.497614 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88qn9\" (UniqueName: \"kubernetes.io/projected/52397d12-7374-47b1-aab8-8e25fa33775b-kube-api-access-88qn9\") pod \"glance-30e4b-default-external-api-0\" (UID: \"52397d12-7374-47b1-aab8-8e25fa33775b\") " pod="openstack/glance-30e4b-default-external-api-0"
Mar 12 21:27:30.511585 master-0 kubenswrapper[31456]: I0312 21:27:30.511548 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52397d12-7374-47b1-aab8-8e25fa33775b-config-data\") pod \"glance-30e4b-default-external-api-0\" (UID: \"52397d12-7374-47b1-aab8-8e25fa33775b\") " pod="openstack/glance-30e4b-default-external-api-0"
Mar 12 21:27:30.526772 master-0 kubenswrapper[31456]: I0312 21:27:30.516782 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0740fcb3-98cc-49d7-b0c9-3c445c35a846-config-data\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"0740fcb3-98cc-49d7-b0c9-3c445c35a846\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:27:30.526772 master-0 kubenswrapper[31456]: I0312 21:27:30.525438 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"]
Mar 12 21:27:30.526772 master-0 kubenswrapper[31456]: I0312 21:27:30.526255 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf5zm\" (UniqueName: \"kubernetes.io/projected/0740fcb3-98cc-49d7-b0c9-3c445c35a846-kube-api-access-jf5zm\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"0740fcb3-98cc-49d7-b0c9-3c445c35a846\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:27:30.541297 master-0 kubenswrapper[31456]: I0312 21:27:30.533780 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0740fcb3-98cc-49d7-b0c9-3c445c35a846-scripts\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"0740fcb3-98cc-49d7-b0c9-3c445c35a846\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:27:30.541297 master-0 kubenswrapper[31456]: I0312 21:27:30.534512 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/52397d12-7374-47b1-aab8-8e25fa33775b-public-tls-certs\") pod \"glance-30e4b-default-external-api-0\" (UID: \"52397d12-7374-47b1-aab8-8e25fa33775b\") " pod="openstack/glance-30e4b-default-external-api-0"
Mar 12 21:27:30.541297 master-0 kubenswrapper[31456]: I0312 21:27:30.540358 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0740fcb3-98cc-49d7-b0c9-3c445c35a846-internal-tls-certs\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"0740fcb3-98cc-49d7-b0c9-3c445c35a846\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:27:30.569281 master-0 kubenswrapper[31456]: I0312 21:27:30.564319 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-0"]
Mar 12 21:27:30.597851 master-0 kubenswrapper[31456]: I0312 21:27:30.597769 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"]
Mar 12 21:27:30.603768 master-0 kubenswrapper[31456]: I0312 21:27:30.603670 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0"
Mar 12 21:27:30.616523 master-0 kubenswrapper[31456]: I0312 21:27:30.616466 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-internal-svc"
Mar 12 21:27:30.616995 master-0 kubenswrapper[31456]: I0312 21:27:30.616973 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-public-svc"
Mar 12 21:27:30.617534 master-0 kubenswrapper[31456]: I0312 21:27:30.617489 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport"
Mar 12 21:27:30.618972 master-0 kubenswrapper[31456]: I0312 21:27:30.617944 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data"
Mar 12 21:27:30.624682 master-0 kubenswrapper[31456]: I0312 21:27:30.624484 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts"
Mar 12 21:27:30.660360 master-0 kubenswrapper[31456]: I0312 21:27:30.660295 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"]
Mar 12 21:27:30.756532 master-0 kubenswrapper[31456]: I0312 21:27:30.756419 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717\") " pod="openstack/ironic-inspector-0"
Mar 12 21:27:30.756532 master-0 kubenswrapper[31456]: I0312 21:27:30.756500 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717\") " pod="openstack/ironic-inspector-0"
Mar 12 21:27:30.756879 master-0 kubenswrapper[31456]: I0312 21:27:30.756533 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717-config\") pod \"ironic-inspector-0\" (UID: \"d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717\") " pod="openstack/ironic-inspector-0"
Mar 12 21:27:30.756879 master-0 kubenswrapper[31456]: I0312 21:27:30.756606 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717\") " pod="openstack/ironic-inspector-0"
Mar 12 21:27:30.757081 master-0 kubenswrapper[31456]: I0312 21:27:30.756919 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717\") " pod="openstack/ironic-inspector-0"
Mar 12 21:27:30.757235 master-0 kubenswrapper[31456]: I0312 21:27:30.757183 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717\") " pod="openstack/ironic-inspector-0"
Mar 12 21:27:30.757321 master-0 kubenswrapper[31456]: I0312 21:27:30.757295 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr8tj\" (UniqueName: \"kubernetes.io/projected/d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717-kube-api-access-vr8tj\") pod \"ironic-inspector-0\" (UID: \"d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717\") " pod="openstack/ironic-inspector-0"
Mar 12 21:27:30.757451 master-0 kubenswrapper[31456]: I0312 21:27:30.757374 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717\") " pod="openstack/ironic-inspector-0"
Mar 12 21:27:30.757540 master-0 kubenswrapper[31456]: I0312 21:27:30.757508 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717-scripts\") pod \"ironic-inspector-0\" (UID: \"d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717\") " pod="openstack/ironic-inspector-0"
Mar 12 21:27:30.860188 master-0 kubenswrapper[31456]: I0312 21:27:30.860116 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717\") " pod="openstack/ironic-inspector-0"
Mar 12 21:27:30.860188 master-0 kubenswrapper[31456]: I0312 21:27:30.860192 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr8tj\" (UniqueName: \"kubernetes.io/projected/d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717-kube-api-access-vr8tj\") pod \"ironic-inspector-0\" (UID: \"d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717\") " pod="openstack/ironic-inspector-0"
Mar 12 21:27:30.860874 master-0 kubenswrapper[31456]: I0312 21:27:30.860221 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717\") " pod="openstack/ironic-inspector-0"
Mar 12 21:27:30.860874 master-0 kubenswrapper[31456]: I0312 21:27:30.860440 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717-scripts\") pod \"ironic-inspector-0\" (UID: \"d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717\") " pod="openstack/ironic-inspector-0"
Mar 12 21:27:30.860874 master-0 kubenswrapper[31456]: I0312 21:27:30.860727 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717\") " pod="openstack/ironic-inspector-0"
Mar 12 21:27:30.860874 master-0 kubenswrapper[31456]: I0312 21:27:30.860738 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717\") " pod="openstack/ironic-inspector-0"
Mar 12 21:27:30.860874 master-0 kubenswrapper[31456]: I0312 21:27:30.860854 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717\") " pod="openstack/ironic-inspector-0"
Mar 12 21:27:30.861041 master-0 kubenswrapper[31456]: I0312 21:27:30.860927 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717-config\") pod \"ironic-inspector-0\" (UID: \"d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717\") " pod="openstack/ironic-inspector-0"
Mar 12 21:27:30.861041 master-0 kubenswrapper[31456]: I0312 21:27:30.860975 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717\") " pod="openstack/ironic-inspector-0"
Mar 12 21:27:30.861110 master-0 kubenswrapper[31456]: I0312 21:27:30.861091 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717\") " pod="openstack/ironic-inspector-0"
Mar 12 21:27:30.861297 master-0 kubenswrapper[31456]: I0312 21:27:30.861271 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717\") " pod="openstack/ironic-inspector-0"
Mar 12 21:27:30.865829 master-0 kubenswrapper[31456]: I0312 21:27:30.864781 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717\") " pod="openstack/ironic-inspector-0"
Mar 12 21:27:30.865829 master-0 kubenswrapper[31456]: I0312 21:27:30.865342 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717\") " pod="openstack/ironic-inspector-0"
Mar 12 21:27:30.867829 master-0 kubenswrapper[31456]: I0312 21:27:30.867772 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717-scripts\") pod \"ironic-inspector-0\" (UID: \"d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717\") " pod="openstack/ironic-inspector-0"
Mar 12 21:27:30.868151 master-0 kubenswrapper[31456]: I0312 21:27:30.868121 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717\") " pod="openstack/ironic-inspector-0"
Mar 12 21:27:30.869503 master-0 kubenswrapper[31456]: I0312 21:27:30.869481 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717-config\") pod \"ironic-inspector-0\" (UID: \"d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717\") " pod="openstack/ironic-inspector-0"
Mar 12 21:27:30.877313 master-0 kubenswrapper[31456]: I0312 21:27:30.877267 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717\") " pod="openstack/ironic-inspector-0"
Mar 12 21:27:30.877415 master-0 kubenswrapper[31456]: I0312 21:27:30.877282 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr8tj\" (UniqueName: \"kubernetes.io/projected/d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717-kube-api-access-vr8tj\") pod \"ironic-inspector-0\" (UID: \"d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717\") " pod="openstack/ironic-inspector-0"
Mar 12 21:27:30.960907 master-0 kubenswrapper[31456]: I0312 21:27:30.960743 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0"
Mar 12 21:27:31.184821 master-0 kubenswrapper[31456]: I0312 21:27:31.184744 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35a5b367-8419-4864-9317-7b78c50cad2d" path="/var/lib/kubelet/pods/35a5b367-8419-4864-9317-7b78c50cad2d/volumes"
Mar 12 21:27:31.185574 master-0 kubenswrapper[31456]: I0312 21:27:31.185544 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a5e241-7146-489b-b32b-01218601b895" path="/var/lib/kubelet/pods/a7a5e241-7146-489b-b32b-01218601b895/volumes"
Mar 12 21:27:31.186255 master-0 kubenswrapper[31456]: I0312 21:27:31.186229 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df574917-39ab-4063-ab80-d42902865c20" path="/var/lib/kubelet/pods/df574917-39ab-4063-ab80-d42902865c20/volumes"
Mar 12 21:27:31.443801 master-0 kubenswrapper[31456]: I0312 21:27:31.443723 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-771d56ec-6f7c-4891-8052-556577fed26a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c4d34f1a-cd35-4884-8afc-4ae2cb2c6555\") pod \"glance-30e4b-default-external-api-0\" (UID: \"52397d12-7374-47b1-aab8-8e25fa33775b\") " pod="openstack/glance-30e4b-default-external-api-0"
Mar 12 21:27:31.686989 master-0 kubenswrapper[31456]: I0312 21:27:31.686922 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-30e4b-default-external-api-0"
Mar 12 21:27:32.135719 master-0 kubenswrapper[31456]: I0312 21:27:32.135563 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wc97w" event={"ID":"1b7fef8e-4472-45c9-9824-4a897ff1b1e3","Type":"ContainerDied","Data":"fd605a896442dba76b0f526abe6af929f7f2bd9e6dbd5762968629d25a41036e"}
Mar 12 21:27:32.135845 master-0 kubenswrapper[31456]: I0312 21:27:32.135727 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd605a896442dba76b0f526abe6af929f7f2bd9e6dbd5762968629d25a41036e"
Mar 12 21:27:32.144173 master-0 kubenswrapper[31456]: I0312 21:27:32.144113 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5fda-account-create-update-jj52w" event={"ID":"31856960-9d64-482a-b18d-3cb7ebc781d7","Type":"ContainerDied","Data":"f2ddf16eb1feaefd6527103be4c2a8201c463b443bef8451a2ea9c6ba4c0815a"}
Mar 12 21:27:32.144173 master-0 kubenswrapper[31456]: I0312 21:27:32.144163 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2ddf16eb1feaefd6527103be4c2a8201c463b443bef8451a2ea9c6ba4c0815a"
Mar 12 21:27:32.151431 master-0 kubenswrapper[31456]: I0312 21:27:32.151174 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1f7d-account-create-update-ckqfv" event={"ID":"c7a2a5e6-44b0-4a11-bdd7-a5153a809b8b","Type":"ContainerDied","Data":"4cf4f93edc7af97b8f2cb5f7e9a8505710116804fb78a872e83c9f8235d13ce6"}
Mar 12 21:27:32.151431 master-0 kubenswrapper[31456]: I0312 21:27:32.151227 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cf4f93edc7af97b8f2cb5f7e9a8505710116804fb78a872e83c9f8235d13ce6"
Mar 12 21:27:32.237125 master-0 kubenswrapper[31456]: I0312 21:27:32.237085 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"]
Mar 12 21:27:32.299005 master-0 kubenswrapper[31456]: I0312 21:27:32.298935 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1f7d-account-create-update-ckqfv"
Mar 12 21:27:32.341497 master-0 kubenswrapper[31456]: I0312 21:27:32.339598 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5fda-account-create-update-jj52w"
Mar 12 21:27:32.346589 master-0 kubenswrapper[31456]: I0312 21:27:32.346562 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wc97w"
Mar 12 21:27:32.357864 master-0 kubenswrapper[31456]: I0312 21:27:32.357597 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-46263d7f-fe44-41c1-8ff4-0f4db04f556e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4b842188-a8b2-4def-ad0e-7cbb4053b9e9\") pod \"glance-30e4b-default-internal-api-0\" (UID: \"0740fcb3-98cc-49d7-b0c9-3c445c35a846\") " pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:27:32.364330 master-0 kubenswrapper[31456]: I0312 21:27:32.364285 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rhn2f"
Mar 12 21:27:32.385082 master-0 kubenswrapper[31456]: I0312 21:27:32.385034 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-zgqpq"
Mar 12 21:27:32.417591 master-0 kubenswrapper[31456]: I0312 21:27:32.417552 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4fdk\" (UniqueName: \"kubernetes.io/projected/c7a2a5e6-44b0-4a11-bdd7-a5153a809b8b-kube-api-access-v4fdk\") pod \"c7a2a5e6-44b0-4a11-bdd7-a5153a809b8b\" (UID: \"c7a2a5e6-44b0-4a11-bdd7-a5153a809b8b\") "
Mar 12 21:27:32.417815 master-0 kubenswrapper[31456]: I0312 21:27:32.417657 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7a2a5e6-44b0-4a11-bdd7-a5153a809b8b-operator-scripts\") pod \"c7a2a5e6-44b0-4a11-bdd7-a5153a809b8b\" (UID: \"c7a2a5e6-44b0-4a11-bdd7-a5153a809b8b\") "
Mar 12 21:27:32.418399 master-0 kubenswrapper[31456]: I0312 21:27:32.418129 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7a2a5e6-44b0-4a11-bdd7-a5153a809b8b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c7a2a5e6-44b0-4a11-bdd7-a5153a809b8b" (UID: "c7a2a5e6-44b0-4a11-bdd7-a5153a809b8b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:27:32.418464 master-0 kubenswrapper[31456]: I0312 21:27:32.418425 31456 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7a2a5e6-44b0-4a11-bdd7-a5153a809b8b-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 21:27:32.420683 master-0 kubenswrapper[31456]: I0312 21:27:32.420617 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7a2a5e6-44b0-4a11-bdd7-a5153a809b8b-kube-api-access-v4fdk" (OuterVolumeSpecName: "kube-api-access-v4fdk") pod "c7a2a5e6-44b0-4a11-bdd7-a5153a809b8b" (UID: "c7a2a5e6-44b0-4a11-bdd7-a5153a809b8b"). InnerVolumeSpecName "kube-api-access-v4fdk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 21:27:32.509979 master-0 kubenswrapper[31456]: W0312 21:27:32.509921 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52397d12_7374_47b1_aab8_8e25fa33775b.slice/crio-0bc11a3245b3785482186533e8b5d7363602ae1276e481c173453a16f2966662 WatchSource:0}: Error finding container 0bc11a3245b3785482186533e8b5d7363602ae1276e481c173453a16f2966662: Status 404 returned error can't find the container with id 0bc11a3245b3785482186533e8b5d7363602ae1276e481c173453a16f2966662
Mar 12 21:27:32.520831 master-0 kubenswrapper[31456]: I0312 21:27:32.516790 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-30e4b-default-external-api-0"]
Mar 12 21:27:32.521942 master-0 kubenswrapper[31456]: I0312 21:27:32.521910 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31856960-9d64-482a-b18d-3cb7ebc781d7-operator-scripts\") pod \"31856960-9d64-482a-b18d-3cb7ebc781d7\" (UID: \"31856960-9d64-482a-b18d-3cb7ebc781d7\") "
Mar 12 21:27:32.522005 master-0 kubenswrapper[31456]: I0312 21:27:32.521950 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf58t\" (UniqueName: \"kubernetes.io/projected/1b7fef8e-4472-45c9-9824-4a897ff1b1e3-kube-api-access-zf58t\") pod \"1b7fef8e-4472-45c9-9824-4a897ff1b1e3\" (UID: \"1b7fef8e-4472-45c9-9824-4a897ff1b1e3\") "
Mar 12 21:27:32.522005 master-0 kubenswrapper[31456]: I0312 21:27:32.521998 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c91b737e-1dc0-4977-8cc3-f36cde0b3031-operator-scripts\") pod \"c91b737e-1dc0-4977-8cc3-f36cde0b3031\" (UID: \"c91b737e-1dc0-4977-8cc3-f36cde0b3031\") "
Mar 12 21:27:32.522083 master-0 kubenswrapper[31456]: I0312 21:27:32.522038 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56b88bd7-c930-40cf-ab94-806f32d82a96-operator-scripts\") pod \"56b88bd7-c930-40cf-ab94-806f32d82a96\" (UID: \"56b88bd7-c930-40cf-ab94-806f32d82a96\") "
Mar 12 21:27:32.522083 master-0 kubenswrapper[31456]: I0312 21:27:32.522065 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94csr\" (UniqueName: \"kubernetes.io/projected/31856960-9d64-482a-b18d-3cb7ebc781d7-kube-api-access-94csr\") pod \"31856960-9d64-482a-b18d-3cb7ebc781d7\" (UID: \"31856960-9d64-482a-b18d-3cb7ebc781d7\") "
Mar 12 21:27:32.522219 master-0 kubenswrapper[31456]: I0312 21:27:32.522185 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b7fef8e-4472-45c9-9824-4a897ff1b1e3-operator-scripts\") pod \"1b7fef8e-4472-45c9-9824-4a897ff1b1e3\" (UID: \"1b7fef8e-4472-45c9-9824-4a897ff1b1e3\") "
Mar 12 21:27:32.522252 master-0 kubenswrapper[31456]: I0312 21:27:32.522229 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zb8g\" (UniqueName: \"kubernetes.io/projected/c91b737e-1dc0-4977-8cc3-f36cde0b3031-kube-api-access-5zb8g\") pod \"c91b737e-1dc0-4977-8cc3-f36cde0b3031\" (UID: \"c91b737e-1dc0-4977-8cc3-f36cde0b3031\") "
Mar 12 21:27:32.522282 master-0 kubenswrapper[31456]: I0312 21:27:32.522269 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d449q\" (UniqueName: \"kubernetes.io/projected/56b88bd7-c930-40cf-ab94-806f32d82a96-kube-api-access-d449q\") pod \"56b88bd7-c930-40cf-ab94-806f32d82a96\" (UID: \"56b88bd7-c930-40cf-ab94-806f32d82a96\") "
Mar 12 21:27:32.529825 master-0 kubenswrapper[31456]: I0312 21:27:32.522820 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4fdk\" (UniqueName: \"kubernetes.io/projected/c7a2a5e6-44b0-4a11-bdd7-a5153a809b8b-kube-api-access-v4fdk\") on node \"master-0\" DevicePath \"\""
Mar 12 21:27:32.529825 master-0 kubenswrapper[31456]: I0312 21:27:32.526555 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56b88bd7-c930-40cf-ab94-806f32d82a96-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "56b88bd7-c930-40cf-ab94-806f32d82a96" (UID: "56b88bd7-c930-40cf-ab94-806f32d82a96"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:27:32.529825 master-0 kubenswrapper[31456]: I0312 21:27:32.526852 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c91b737e-1dc0-4977-8cc3-f36cde0b3031-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c91b737e-1dc0-4977-8cc3-f36cde0b3031" (UID: "c91b737e-1dc0-4977-8cc3-f36cde0b3031"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:27:32.529825 master-0 kubenswrapper[31456]: I0312 21:27:32.527071 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31856960-9d64-482a-b18d-3cb7ebc781d7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "31856960-9d64-482a-b18d-3cb7ebc781d7" (UID: "31856960-9d64-482a-b18d-3cb7ebc781d7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:27:32.529825 master-0 kubenswrapper[31456]: I0312 21:27:32.527601 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b7fef8e-4472-45c9-9824-4a897ff1b1e3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1b7fef8e-4472-45c9-9824-4a897ff1b1e3" (UID: "1b7fef8e-4472-45c9-9824-4a897ff1b1e3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 21:27:32.529825 master-0 kubenswrapper[31456]: I0312 21:27:32.527880 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56b88bd7-c930-40cf-ab94-806f32d82a96-kube-api-access-d449q" (OuterVolumeSpecName: "kube-api-access-d449q") pod "56b88bd7-c930-40cf-ab94-806f32d82a96" (UID: "56b88bd7-c930-40cf-ab94-806f32d82a96"). InnerVolumeSpecName "kube-api-access-d449q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 21:27:32.529825 master-0 kubenswrapper[31456]: I0312 21:27:32.529031 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b7fef8e-4472-45c9-9824-4a897ff1b1e3-kube-api-access-zf58t" (OuterVolumeSpecName: "kube-api-access-zf58t") pod "1b7fef8e-4472-45c9-9824-4a897ff1b1e3" (UID: "1b7fef8e-4472-45c9-9824-4a897ff1b1e3"). InnerVolumeSpecName "kube-api-access-zf58t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 21:27:32.530175 master-0 kubenswrapper[31456]: I0312 21:27:32.529838 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31856960-9d64-482a-b18d-3cb7ebc781d7-kube-api-access-94csr" (OuterVolumeSpecName: "kube-api-access-94csr") pod "31856960-9d64-482a-b18d-3cb7ebc781d7" (UID: "31856960-9d64-482a-b18d-3cb7ebc781d7"). InnerVolumeSpecName "kube-api-access-94csr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 21:27:32.536270 master-0 kubenswrapper[31456]: I0312 21:27:32.530682 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c91b737e-1dc0-4977-8cc3-f36cde0b3031-kube-api-access-5zb8g" (OuterVolumeSpecName: "kube-api-access-5zb8g") pod "c91b737e-1dc0-4977-8cc3-f36cde0b3031" (UID: "c91b737e-1dc0-4977-8cc3-f36cde0b3031"). InnerVolumeSpecName "kube-api-access-5zb8g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 21:27:32.631941 master-0 kubenswrapper[31456]: I0312 21:27:32.631796 31456 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56b88bd7-c930-40cf-ab94-806f32d82a96-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 21:27:32.631941 master-0 kubenswrapper[31456]: I0312 21:27:32.631862 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94csr\" (UniqueName: \"kubernetes.io/projected/31856960-9d64-482a-b18d-3cb7ebc781d7-kube-api-access-94csr\") on node \"master-0\" DevicePath \"\""
Mar 12 21:27:32.631941 master-0 kubenswrapper[31456]: I0312 21:27:32.631871 31456 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b7fef8e-4472-45c9-9824-4a897ff1b1e3-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 21:27:32.631941 master-0 kubenswrapper[31456]: I0312 21:27:32.631881 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zb8g\" (UniqueName: \"kubernetes.io/projected/c91b737e-1dc0-4977-8cc3-f36cde0b3031-kube-api-access-5zb8g\") on node \"master-0\" DevicePath \"\""
Mar 12 21:27:32.631941 master-0 kubenswrapper[31456]: I0312 21:27:32.631891 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d449q\" (UniqueName: \"kubernetes.io/projected/56b88bd7-c930-40cf-ab94-806f32d82a96-kube-api-access-d449q\") on node \"master-0\" DevicePath \"\""
Mar 12 21:27:32.631941 master-0 kubenswrapper[31456]: I0312 21:27:32.631899 31456 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31856960-9d64-482a-b18d-3cb7ebc781d7-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 21:27:32.631941 master-0 kubenswrapper[31456]: I0312 21:27:32.631908 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf58t\" (UniqueName: \"kubernetes.io/projected/1b7fef8e-4472-45c9-9824-4a897ff1b1e3-kube-api-access-zf58t\") on node \"master-0\" DevicePath \"\""
Mar 12 21:27:32.631941 master-0 kubenswrapper[31456]: I0312 21:27:32.631918 31456 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c91b737e-1dc0-4977-8cc3-f36cde0b3031-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 21:27:32.659689 master-0 kubenswrapper[31456]: I0312 21:27:32.659330 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:27:33.163744 master-0 kubenswrapper[31456]: I0312 21:27:33.163687 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-30e4b-default-external-api-0" event={"ID":"52397d12-7374-47b1-aab8-8e25fa33775b","Type":"ContainerStarted","Data":"0bc11a3245b3785482186533e8b5d7363602ae1276e481c173453a16f2966662"}
Mar 12 21:27:33.164922 master-0 kubenswrapper[31456]: I0312 21:27:33.164891 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-zgqpq" event={"ID":"56b88bd7-c930-40cf-ab94-806f32d82a96","Type":"ContainerDied","Data":"e22bebe7af02080316d7e08fcbd1a25fbe699f5c48713bb76a253c39dd883ce3"}
Mar 12 21:27:33.164922 master-0 kubenswrapper[31456]: I0312 21:27:33.164919 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e22bebe7af02080316d7e08fcbd1a25fbe699f5c48713bb76a253c39dd883ce3"
Mar 12 21:27:33.165023 master-0 kubenswrapper[31456]: I0312 21:27:33.164972 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-zgqpq"
Mar 12 21:27:33.167436 master-0 kubenswrapper[31456]: I0312 21:27:33.167405 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rhn2f" event={"ID":"c91b737e-1dc0-4977-8cc3-f36cde0b3031","Type":"ContainerDied","Data":"a8b71217dd3a34d70eaee1de325f6d4334ecc549a9bc83ef3550c82a4ede8cd9"}
Mar 12 21:27:33.167436 master-0 kubenswrapper[31456]: I0312 21:27:33.167430 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8b71217dd3a34d70eaee1de325f6d4334ecc549a9bc83ef3550c82a4ede8cd9"
Mar 12 21:27:33.167926 master-0 kubenswrapper[31456]: I0312 21:27:33.167461 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rhn2f"
Mar 12 21:27:33.170644 master-0 kubenswrapper[31456]: I0312 21:27:33.170553 31456 generic.go:334] "Generic (PLEG): container finished" podID="d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717" containerID="3309bb83fafbdd2c04774ad6f0678cf2344f948f9577ebf6160c4eb59098398d" exitCode=0
Mar 12 21:27:33.170644 master-0 kubenswrapper[31456]: I0312 21:27:33.170668 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5fda-account-create-update-jj52w"
Mar 12 21:27:33.171402 master-0 kubenswrapper[31456]: I0312 21:27:33.171076 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1f7d-account-create-update-ckqfv"
Mar 12 21:27:33.171402 master-0 kubenswrapper[31456]: I0312 21:27:33.171091 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wc97w"
Mar 12 21:27:33.229486 master-0 kubenswrapper[31456]: I0312 21:27:33.228182 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717","Type":"ContainerDied","Data":"3309bb83fafbdd2c04774ad6f0678cf2344f948f9577ebf6160c4eb59098398d"}
Mar 12 21:27:33.229486 master-0 kubenswrapper[31456]: I0312 21:27:33.228220 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717","Type":"ContainerStarted","Data":"a7aa7233abefa941751c5458ec5698ebb9213ae9d080b8c92f19124bfc0b22ee"}
Mar 12 21:27:33.338953 master-0 kubenswrapper[31456]: I0312 21:27:33.338904 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-30e4b-default-internal-api-0"]
Mar 12 21:27:34.228050 master-0 kubenswrapper[31456]: I0312 21:27:34.227997 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56cf4b4989-2cwl5"
Mar 12 21:27:34.245944 master-0 kubenswrapper[31456]: I0312 21:27:34.245829 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-30e4b-default-internal-api-0" event={"ID":"0740fcb3-98cc-49d7-b0c9-3c445c35a846","Type":"ContainerStarted","Data":"e1e0b87928d1ed252913d62af0768cc3b1561bac89ebeb1fc2a39b7649883e53"}
Mar 12 21:27:34.245944 master-0 kubenswrapper[31456]: I0312 21:27:34.245886 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-30e4b-default-internal-api-0" event={"ID":"0740fcb3-98cc-49d7-b0c9-3c445c35a846","Type":"ContainerStarted","Data":"373f4bf261c66afb47c63161402c068a1c9af97ee42e8001e8d228a762c7865a"}
Mar 12 21:27:34.249256 master-0 kubenswrapper[31456]: I0312 21:27:34.249213 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-30e4b-default-external-api-0" event={"ID":"52397d12-7374-47b1-aab8-8e25fa33775b","Type":"ContainerStarted","Data":"13756a02e1d81cd648140e8bbf08ec050608e307e5ee21061161a129af5f4c40"}
Mar 12 21:27:34.249442 master-0 kubenswrapper[31456]: I0312 21:27:34.249386 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-30e4b-default-external-api-0" event={"ID":"52397d12-7374-47b1-aab8-8e25fa33775b","Type":"ContainerStarted","Data":"1dbf11b6b784d936d0c28145290436de97fbd7b0aedd0c4002a1d6f88adc1c55"}
Mar 12 21:27:34.335013 master-0 kubenswrapper[31456]: I0312 21:27:34.324775 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c46756b57-z2p86"]
Mar 12 21:27:34.335299 master-0 kubenswrapper[31456]: I0312 21:27:34.335252 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c46756b57-z2p86" podUID="d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae" containerName="dnsmasq-dns" containerID="cri-o://9789fac5fae792aebde470636b7a48ac38828a65457cc09dab808d0326628d9a" gracePeriod=10
Mar 12 21:27:34.513252 master-0 kubenswrapper[31456]: I0312 21:27:34.375401 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-30e4b-default-external-api-0" podStartSLOduration=5.3753811989999996 podStartE2EDuration="5.375381199s" podCreationTimestamp="2026-03-12 21:27:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:27:34.323606366 +0000 UTC m=+1115.398211694" watchObservedRunningTime="2026-03-12 21:27:34.375381199 +0000 UTC m=+1115.449986527"
Mar 12 21:27:35.300924 master-0 kubenswrapper[31456]: I0312 21:27:35.300090 31456 generic.go:334] "Generic (PLEG): container finished" podID="d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae" containerID="9789fac5fae792aebde470636b7a48ac38828a65457cc09dab808d0326628d9a" exitCode=0
Mar 12 21:27:35.300924 master-0 kubenswrapper[31456]: I0312 21:27:35.300189 31456 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c46756b57-z2p86" event={"ID":"d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae","Type":"ContainerDied","Data":"9789fac5fae792aebde470636b7a48ac38828a65457cc09dab808d0326628d9a"} Mar 12 21:27:35.305859 master-0 kubenswrapper[31456]: I0312 21:27:35.305792 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-30e4b-default-internal-api-0" event={"ID":"0740fcb3-98cc-49d7-b0c9-3c445c35a846","Type":"ContainerStarted","Data":"448273362fb419c97203e91ad1f5bbdf7ab7dc106cc58384452e75165415d7c6"} Mar 12 21:27:35.380248 master-0 kubenswrapper[31456]: I0312 21:27:35.380111 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-30e4b-default-internal-api-0" podStartSLOduration=6.380032959 podStartE2EDuration="6.380032959s" podCreationTimestamp="2026-03-12 21:27:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:27:35.359953343 +0000 UTC m=+1116.434558681" watchObservedRunningTime="2026-03-12 21:27:35.380032959 +0000 UTC m=+1116.454638287" Mar 12 21:27:37.050225 master-0 kubenswrapper[31456]: I0312 21:27:37.048142 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c46756b57-z2p86" Mar 12 21:27:37.075829 master-0 kubenswrapper[31456]: I0312 21:27:37.073050 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kn96n"] Mar 12 21:27:37.075829 master-0 kubenswrapper[31456]: E0312 21:27:37.075366 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b7fef8e-4472-45c9-9824-4a897ff1b1e3" containerName="mariadb-database-create" Mar 12 21:27:37.075829 master-0 kubenswrapper[31456]: I0312 21:27:37.075395 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b7fef8e-4472-45c9-9824-4a897ff1b1e3" containerName="mariadb-database-create" Mar 12 21:27:37.075829 master-0 kubenswrapper[31456]: E0312 21:27:37.075448 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae" containerName="dnsmasq-dns" Mar 12 21:27:37.075829 master-0 kubenswrapper[31456]: I0312 21:27:37.075459 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae" containerName="dnsmasq-dns" Mar 12 21:27:37.075829 master-0 kubenswrapper[31456]: E0312 21:27:37.075513 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae" containerName="init" Mar 12 21:27:37.075829 master-0 kubenswrapper[31456]: I0312 21:27:37.075524 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae" containerName="init" Mar 12 21:27:37.075829 master-0 kubenswrapper[31456]: E0312 21:27:37.075551 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c91b737e-1dc0-4977-8cc3-f36cde0b3031" containerName="mariadb-database-create" Mar 12 21:27:37.075829 master-0 kubenswrapper[31456]: I0312 21:27:37.075561 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="c91b737e-1dc0-4977-8cc3-f36cde0b3031" containerName="mariadb-database-create" Mar 12 21:27:37.075829 master-0 
kubenswrapper[31456]: E0312 21:27:37.075633 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31856960-9d64-482a-b18d-3cb7ebc781d7" containerName="mariadb-account-create-update" Mar 12 21:27:37.075829 master-0 kubenswrapper[31456]: I0312 21:27:37.075646 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="31856960-9d64-482a-b18d-3cb7ebc781d7" containerName="mariadb-account-create-update" Mar 12 21:27:37.075829 master-0 kubenswrapper[31456]: E0312 21:27:37.075685 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7a2a5e6-44b0-4a11-bdd7-a5153a809b8b" containerName="mariadb-account-create-update" Mar 12 21:27:37.075829 master-0 kubenswrapper[31456]: I0312 21:27:37.075696 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7a2a5e6-44b0-4a11-bdd7-a5153a809b8b" containerName="mariadb-account-create-update" Mar 12 21:27:37.075829 master-0 kubenswrapper[31456]: E0312 21:27:37.075706 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56b88bd7-c930-40cf-ab94-806f32d82a96" containerName="mariadb-database-create" Mar 12 21:27:37.075829 master-0 kubenswrapper[31456]: I0312 21:27:37.075714 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="56b88bd7-c930-40cf-ab94-806f32d82a96" containerName="mariadb-database-create" Mar 12 21:27:37.080600 master-0 kubenswrapper[31456]: I0312 21:27:37.076955 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="56b88bd7-c930-40cf-ab94-806f32d82a96" containerName="mariadb-database-create" Mar 12 21:27:37.080600 master-0 kubenswrapper[31456]: I0312 21:27:37.077194 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b7fef8e-4472-45c9-9824-4a897ff1b1e3" containerName="mariadb-database-create" Mar 12 21:27:37.080600 master-0 kubenswrapper[31456]: I0312 21:27:37.077279 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae" containerName="dnsmasq-dns" Mar 12 
21:27:37.080600 master-0 kubenswrapper[31456]: I0312 21:27:37.077313 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="c91b737e-1dc0-4977-8cc3-f36cde0b3031" containerName="mariadb-database-create" Mar 12 21:27:37.080600 master-0 kubenswrapper[31456]: I0312 21:27:37.077364 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7a2a5e6-44b0-4a11-bdd7-a5153a809b8b" containerName="mariadb-account-create-update" Mar 12 21:27:37.080600 master-0 kubenswrapper[31456]: I0312 21:27:37.077398 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="31856960-9d64-482a-b18d-3cb7ebc781d7" containerName="mariadb-account-create-update" Mar 12 21:27:37.089795 master-0 kubenswrapper[31456]: I0312 21:27:37.087432 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kn96n" Mar 12 21:27:37.096691 master-0 kubenswrapper[31456]: I0312 21:27:37.096603 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Mar 12 21:27:37.098280 master-0 kubenswrapper[31456]: I0312 21:27:37.098232 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 12 21:27:37.117352 master-0 kubenswrapper[31456]: I0312 21:27:37.117273 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kn96n"] Mar 12 21:27:37.247088 master-0 kubenswrapper[31456]: I0312 21:27:37.247010 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae-dns-svc\") pod \"d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae\" (UID: \"d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae\") " Mar 12 21:27:37.247088 master-0 kubenswrapper[31456]: I0312 21:27:37.247080 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae-ovsdbserver-sb\") pod \"d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae\" (UID: \"d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae\") " Mar 12 21:27:37.247088 master-0 kubenswrapper[31456]: I0312 21:27:37.247102 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae-config\") pod \"d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae\" (UID: \"d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae\") " Mar 12 21:27:37.247088 master-0 kubenswrapper[31456]: I0312 21:27:37.247126 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt2ls\" (UniqueName: \"kubernetes.io/projected/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae-kube-api-access-qt2ls\") pod \"d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae\" (UID: \"d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae\") " Mar 12 21:27:37.247500 master-0 kubenswrapper[31456]: I0312 21:27:37.247313 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae-ovsdbserver-nb\") pod \"d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae\" (UID: \"d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae\") " Mar 12 21:27:37.247500 master-0 kubenswrapper[31456]: I0312 21:27:37.247352 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae-dns-swift-storage-0\") pod \"d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae\" (UID: \"d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae\") " Mar 12 21:27:37.247822 master-0 kubenswrapper[31456]: I0312 21:27:37.247751 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0943c54-38ae-416e-bb08-6921de369d2a-scripts\") pod \"nova-cell0-conductor-db-sync-kn96n\" (UID: 
\"f0943c54-38ae-416e-bb08-6921de369d2a\") " pod="openstack/nova-cell0-conductor-db-sync-kn96n" Mar 12 21:27:37.247822 master-0 kubenswrapper[31456]: I0312 21:27:37.247787 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0943c54-38ae-416e-bb08-6921de369d2a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-kn96n\" (UID: \"f0943c54-38ae-416e-bb08-6921de369d2a\") " pod="openstack/nova-cell0-conductor-db-sync-kn96n" Mar 12 21:27:37.247938 master-0 kubenswrapper[31456]: I0312 21:27:37.247864 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0943c54-38ae-416e-bb08-6921de369d2a-config-data\") pod \"nova-cell0-conductor-db-sync-kn96n\" (UID: \"f0943c54-38ae-416e-bb08-6921de369d2a\") " pod="openstack/nova-cell0-conductor-db-sync-kn96n" Mar 12 21:27:37.247938 master-0 kubenswrapper[31456]: I0312 21:27:37.247889 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2ww4\" (UniqueName: \"kubernetes.io/projected/f0943c54-38ae-416e-bb08-6921de369d2a-kube-api-access-p2ww4\") pod \"nova-cell0-conductor-db-sync-kn96n\" (UID: \"f0943c54-38ae-416e-bb08-6921de369d2a\") " pod="openstack/nova-cell0-conductor-db-sync-kn96n" Mar 12 21:27:37.252557 master-0 kubenswrapper[31456]: I0312 21:27:37.252497 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae-kube-api-access-qt2ls" (OuterVolumeSpecName: "kube-api-access-qt2ls") pod "d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae" (UID: "d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae"). InnerVolumeSpecName "kube-api-access-qt2ls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:27:37.304178 master-0 kubenswrapper[31456]: I0312 21:27:37.303060 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae" (UID: "d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:27:37.317885 master-0 kubenswrapper[31456]: I0312 21:27:37.316851 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae" (UID: "d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:27:37.336286 master-0 kubenswrapper[31456]: I0312 21:27:37.336239 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae" (UID: "d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:27:37.350437 master-0 kubenswrapper[31456]: I0312 21:27:37.350365 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0943c54-38ae-416e-bb08-6921de369d2a-scripts\") pod \"nova-cell0-conductor-db-sync-kn96n\" (UID: \"f0943c54-38ae-416e-bb08-6921de369d2a\") " pod="openstack/nova-cell0-conductor-db-sync-kn96n" Mar 12 21:27:37.350437 master-0 kubenswrapper[31456]: I0312 21:27:37.350424 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0943c54-38ae-416e-bb08-6921de369d2a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-kn96n\" (UID: \"f0943c54-38ae-416e-bb08-6921de369d2a\") " pod="openstack/nova-cell0-conductor-db-sync-kn96n" Mar 12 21:27:37.351979 master-0 kubenswrapper[31456]: I0312 21:27:37.350504 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0943c54-38ae-416e-bb08-6921de369d2a-config-data\") pod \"nova-cell0-conductor-db-sync-kn96n\" (UID: \"f0943c54-38ae-416e-bb08-6921de369d2a\") " pod="openstack/nova-cell0-conductor-db-sync-kn96n" Mar 12 21:27:37.351979 master-0 kubenswrapper[31456]: I0312 21:27:37.350526 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2ww4\" (UniqueName: \"kubernetes.io/projected/f0943c54-38ae-416e-bb08-6921de369d2a-kube-api-access-p2ww4\") pod \"nova-cell0-conductor-db-sync-kn96n\" (UID: \"f0943c54-38ae-416e-bb08-6921de369d2a\") " pod="openstack/nova-cell0-conductor-db-sync-kn96n" Mar 12 21:27:37.351979 master-0 kubenswrapper[31456]: I0312 21:27:37.350886 31456 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 
12 21:27:37.351979 master-0 kubenswrapper[31456]: I0312 21:27:37.350902 31456 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:37.351979 master-0 kubenswrapper[31456]: I0312 21:27:37.350916 31456 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:37.351979 master-0 kubenswrapper[31456]: I0312 21:27:37.350927 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qt2ls\" (UniqueName: \"kubernetes.io/projected/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae-kube-api-access-qt2ls\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:37.356740 master-0 kubenswrapper[31456]: I0312 21:27:37.356694 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0943c54-38ae-416e-bb08-6921de369d2a-config-data\") pod \"nova-cell0-conductor-db-sync-kn96n\" (UID: \"f0943c54-38ae-416e-bb08-6921de369d2a\") " pod="openstack/nova-cell0-conductor-db-sync-kn96n" Mar 12 21:27:37.358851 master-0 kubenswrapper[31456]: I0312 21:27:37.358581 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c46756b57-z2p86" event={"ID":"d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae","Type":"ContainerDied","Data":"e85ae1ed526b685e6e5451b54776776c6e33212ab70c05267fde72f65c7ce10b"} Mar 12 21:27:37.358851 master-0 kubenswrapper[31456]: I0312 21:27:37.358637 31456 scope.go:117] "RemoveContainer" containerID="9789fac5fae792aebde470636b7a48ac38828a65457cc09dab808d0326628d9a" Mar 12 21:27:37.358851 master-0 kubenswrapper[31456]: I0312 21:27:37.358684 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c46756b57-z2p86" Mar 12 21:27:37.369577 master-0 kubenswrapper[31456]: I0312 21:27:37.369516 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0943c54-38ae-416e-bb08-6921de369d2a-scripts\") pod \"nova-cell0-conductor-db-sync-kn96n\" (UID: \"f0943c54-38ae-416e-bb08-6921de369d2a\") " pod="openstack/nova-cell0-conductor-db-sync-kn96n" Mar 12 21:27:37.373688 master-0 kubenswrapper[31456]: I0312 21:27:37.372060 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0943c54-38ae-416e-bb08-6921de369d2a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-kn96n\" (UID: \"f0943c54-38ae-416e-bb08-6921de369d2a\") " pod="openstack/nova-cell0-conductor-db-sync-kn96n" Mar 12 21:27:37.373688 master-0 kubenswrapper[31456]: I0312 21:27:37.373366 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2ww4\" (UniqueName: \"kubernetes.io/projected/f0943c54-38ae-416e-bb08-6921de369d2a-kube-api-access-p2ww4\") pod \"nova-cell0-conductor-db-sync-kn96n\" (UID: \"f0943c54-38ae-416e-bb08-6921de369d2a\") " pod="openstack/nova-cell0-conductor-db-sync-kn96n" Mar 12 21:27:37.390218 master-0 kubenswrapper[31456]: I0312 21:27:37.388831 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae" (UID: "d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:27:37.390218 master-0 kubenswrapper[31456]: I0312 21:27:37.389006 31456 scope.go:117] "RemoveContainer" containerID="e3c575ccab1a93d6beb8416023aed45152836bf62bb3f42180c73f5efca884c2" Mar 12 21:27:37.433695 master-0 kubenswrapper[31456]: I0312 21:27:37.433575 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae-config" (OuterVolumeSpecName: "config") pod "d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae" (UID: "d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:27:37.454828 master-0 kubenswrapper[31456]: I0312 21:27:37.454761 31456 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:37.455042 master-0 kubenswrapper[31456]: I0312 21:27:37.455022 31456 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae-config\") on node \"master-0\" DevicePath \"\"" Mar 12 21:27:37.518415 master-0 kubenswrapper[31456]: I0312 21:27:37.517744 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kn96n" Mar 12 21:27:37.719655 master-0 kubenswrapper[31456]: I0312 21:27:37.719603 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c46756b57-z2p86"] Mar 12 21:27:37.737297 master-0 kubenswrapper[31456]: I0312 21:27:37.737249 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c46756b57-z2p86"] Mar 12 21:27:38.068302 master-0 kubenswrapper[31456]: I0312 21:27:38.068058 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kn96n"] Mar 12 21:27:38.372014 master-0 kubenswrapper[31456]: I0312 21:27:38.371893 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kn96n" event={"ID":"f0943c54-38ae-416e-bb08-6921de369d2a","Type":"ContainerStarted","Data":"76edcd0bde2008f8ed29c7e5eba5420e425dfd6df796734dca23cb9e464e48ee"} Mar 12 21:27:38.374041 master-0 kubenswrapper[31456]: I0312 21:27:38.373980 31456 generic.go:334] "Generic (PLEG): container finished" podID="d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717" containerID="db060190429992af726bce0fc4f879bd5a30754aa98fe2ce8b61be467bfb4bf9" exitCode=0 Mar 12 21:27:38.374247 master-0 kubenswrapper[31456]: I0312 21:27:38.374078 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717","Type":"ContainerDied","Data":"db060190429992af726bce0fc4f879bd5a30754aa98fe2ce8b61be467bfb4bf9"} Mar 12 21:27:38.382499 master-0 kubenswrapper[31456]: I0312 21:27:38.382441 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"93110548-5710-4149-bd72-8e42693c948e","Type":"ContainerStarted","Data":"04cebfa9ee3ae27945dc4f288c27a010c11a036298a87f570271091e7449a2c5"} Mar 12 21:27:39.207974 master-0 kubenswrapper[31456]: I0312 21:27:39.207899 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae" path="/var/lib/kubelet/pods/d8dd4edb-5676-4d1e-a91a-3b71d3fa8cae/volumes" Mar 12 21:27:39.476211 master-0 kubenswrapper[31456]: I0312 21:27:39.476070 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717","Type":"ContainerStarted","Data":"45ab6eeb493d6aa5ebf33fecafb56efa3668f836e9112eaa42a02dc51b00daa9"} Mar 12 21:27:39.476211 master-0 kubenswrapper[31456]: I0312 21:27:39.476124 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717","Type":"ContainerStarted","Data":"a06bab0fd86d38e346830133d47f42821e6a3fb606a7cc70efa9e55c898cee05"} Mar 12 21:27:40.491899 master-0 kubenswrapper[31456]: I0312 21:27:40.491660 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717","Type":"ContainerStarted","Data":"2f8e78cc555027670094f578b433c6a641beadd7148cc3be4d3ca125a145a062"} Mar 12 21:27:40.491899 master-0 kubenswrapper[31456]: I0312 21:27:40.491707 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717","Type":"ContainerStarted","Data":"eb8c04e0bf8084604dacb344e6ccc16e4391a30375aeaa1e3b2d461ed9e07a3d"} Mar 12 21:27:41.529330 master-0 kubenswrapper[31456]: I0312 21:27:41.527798 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"d44bfe5a-e4f8-4bc7-84ac-c48cefc2a717","Type":"ContainerStarted","Data":"0720997d287537a9091206894ef4d7877836771957ad319882a746bd1737cd80"} Mar 12 21:27:41.532739 master-0 kubenswrapper[31456]: I0312 21:27:41.530522 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Mar 12 21:27:41.532739 master-0 kubenswrapper[31456]: I0312 21:27:41.530566 31456 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Mar 12 21:27:41.687159 master-0 kubenswrapper[31456]: I0312 21:27:41.687101 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:27:41.687159 master-0 kubenswrapper[31456]: I0312 21:27:41.687164 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:27:41.750868 master-0 kubenswrapper[31456]: I0312 21:27:41.749859 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-0" podStartSLOduration=7.899678512 podStartE2EDuration="11.749766283s" podCreationTimestamp="2026-03-12 21:27:30 +0000 UTC" firstStartedPulling="2026-03-12 21:27:33.1869191 +0000 UTC m=+1114.261524428" lastFinishedPulling="2026-03-12 21:27:37.037006871 +0000 UTC m=+1118.111612199" observedRunningTime="2026-03-12 21:27:41.733964321 +0000 UTC m=+1122.808569689" watchObservedRunningTime="2026-03-12 21:27:41.749766283 +0000 UTC m=+1122.824371631" Mar 12 21:27:41.762401 master-0 kubenswrapper[31456]: I0312 21:27:41.762346 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:27:41.762575 master-0 kubenswrapper[31456]: I0312 21:27:41.762442 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:27:42.559560 master-0 kubenswrapper[31456]: I0312 21:27:42.559493 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:27:42.559560 master-0 kubenswrapper[31456]: I0312 21:27:42.559554 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-30e4b-default-external-api-0" Mar 12 21:27:42.661078 master-0 kubenswrapper[31456]: I0312 21:27:42.661003 
31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:27:42.661078 master-0 kubenswrapper[31456]: I0312 21:27:42.661085 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:27:42.713055 master-0 kubenswrapper[31456]: I0312 21:27:42.712995 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:27:42.853039 master-0 kubenswrapper[31456]: I0312 21:27:42.851883 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:27:43.567949 master-0 kubenswrapper[31456]: I0312 21:27:43.567561 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:27:43.567949 master-0 kubenswrapper[31456]: I0312 21:27:43.567618 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:27:43.612208 master-0 kubenswrapper[31456]: I0312 21:27:43.611730 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0"
Mar 12 21:27:44.597834 master-0 kubenswrapper[31456]: I0312 21:27:44.589958 31456 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 21:27:44.597834 master-0 kubenswrapper[31456]: I0312 21:27:44.590013 31456 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 21:27:44.597834 master-0 kubenswrapper[31456]: I0312 21:27:44.592127 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0"
Mar 12 21:27:45.961024 master-0 kubenswrapper[31456]: I0312 21:27:45.960971 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0"
Mar 12 21:27:45.963113 master-0 kubenswrapper[31456]: I0312 21:27:45.963082 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0"
Mar 12 21:27:49.660746 master-0 kubenswrapper[31456]: I0312 21:27:49.660650 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kn96n" event={"ID":"f0943c54-38ae-416e-bb08-6921de369d2a","Type":"ContainerStarted","Data":"2aec1413c7217c4f584489189a2de189c8fc83f96c62eb87bfca2c6d0ff171f0"}
Mar 12 21:27:49.769600 master-0 kubenswrapper[31456]: I0312 21:27:49.769464 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-kn96n" podStartSLOduration=2.214294039 podStartE2EDuration="12.769448334s" podCreationTimestamp="2026-03-12 21:27:37 +0000 UTC" firstStartedPulling="2026-03-12 21:27:38.07815939 +0000 UTC m=+1119.152764718" lastFinishedPulling="2026-03-12 21:27:48.633313645 +0000 UTC m=+1129.707919013" observedRunningTime="2026-03-12 21:27:49.767577338 +0000 UTC m=+1130.842182686" watchObservedRunningTime="2026-03-12 21:27:49.769448334 +0000 UTC m=+1130.844053662"
Mar 12 21:27:50.211703 master-0 kubenswrapper[31456]: I0312 21:27:50.211651 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:27:50.211945 master-0 kubenswrapper[31456]: I0312 21:27:50.211785 31456 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 21:27:50.222012 master-0 kubenswrapper[31456]: I0312 21:27:50.221946 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-30e4b-default-internal-api-0"
Mar 12 21:27:50.228663 master-0 kubenswrapper[31456]: I0312 21:27:50.228622 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-30e4b-default-external-api-0"
Mar 12 21:27:50.228863 master-0 kubenswrapper[31456]: I0312 21:27:50.228734 31456 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 21:27:50.232220 master-0 kubenswrapper[31456]: I0312 21:27:50.232170 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-30e4b-default-external-api-0"
Mar 12 21:27:50.962347 master-0 kubenswrapper[31456]: I0312 21:27:50.962287 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0"
Mar 12 21:27:50.962347 master-0 kubenswrapper[31456]: I0312 21:27:50.962346 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0"
Mar 12 21:27:50.988435 master-0 kubenswrapper[31456]: I0312 21:27:50.988350 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0"
Mar 12 21:27:50.989386 master-0 kubenswrapper[31456]: I0312 21:27:50.989343 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0"
Mar 12 21:27:51.690006 master-0 kubenswrapper[31456]: I0312 21:27:51.689946 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0"
Mar 12 21:27:51.692460 master-0 kubenswrapper[31456]: I0312 21:27:51.692431 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0"
Mar 12 21:28:05.877105 master-0 kubenswrapper[31456]: I0312 21:28:05.877038 31456 generic.go:334] "Generic (PLEG): container finished" podID="f0943c54-38ae-416e-bb08-6921de369d2a" containerID="2aec1413c7217c4f584489189a2de189c8fc83f96c62eb87bfca2c6d0ff171f0" exitCode=0
Mar 12 21:28:05.877105 master-0 kubenswrapper[31456]: I0312 21:28:05.877107 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kn96n"
event={"ID":"f0943c54-38ae-416e-bb08-6921de369d2a","Type":"ContainerDied","Data":"2aec1413c7217c4f584489189a2de189c8fc83f96c62eb87bfca2c6d0ff171f0"}
Mar 12 21:28:07.377743 master-0 kubenswrapper[31456]: I0312 21:28:07.377671 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kn96n"
Mar 12 21:28:07.467542 master-0 kubenswrapper[31456]: I0312 21:28:07.467172 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2ww4\" (UniqueName: \"kubernetes.io/projected/f0943c54-38ae-416e-bb08-6921de369d2a-kube-api-access-p2ww4\") pod \"f0943c54-38ae-416e-bb08-6921de369d2a\" (UID: \"f0943c54-38ae-416e-bb08-6921de369d2a\") "
Mar 12 21:28:07.467542 master-0 kubenswrapper[31456]: I0312 21:28:07.467360 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0943c54-38ae-416e-bb08-6921de369d2a-combined-ca-bundle\") pod \"f0943c54-38ae-416e-bb08-6921de369d2a\" (UID: \"f0943c54-38ae-416e-bb08-6921de369d2a\") "
Mar 12 21:28:07.468004 master-0 kubenswrapper[31456]: I0312 21:28:07.467567 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0943c54-38ae-416e-bb08-6921de369d2a-scripts\") pod \"f0943c54-38ae-416e-bb08-6921de369d2a\" (UID: \"f0943c54-38ae-416e-bb08-6921de369d2a\") "
Mar 12 21:28:07.468004 master-0 kubenswrapper[31456]: I0312 21:28:07.467688 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0943c54-38ae-416e-bb08-6921de369d2a-config-data\") pod \"f0943c54-38ae-416e-bb08-6921de369d2a\" (UID: \"f0943c54-38ae-416e-bb08-6921de369d2a\") "
Mar 12 21:28:07.471366 master-0 kubenswrapper[31456]: I0312 21:28:07.471307 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0943c54-38ae-416e-bb08-6921de369d2a-kube-api-access-p2ww4" (OuterVolumeSpecName: "kube-api-access-p2ww4") pod "f0943c54-38ae-416e-bb08-6921de369d2a" (UID: "f0943c54-38ae-416e-bb08-6921de369d2a"). InnerVolumeSpecName "kube-api-access-p2ww4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 21:28:07.473368 master-0 kubenswrapper[31456]: I0312 21:28:07.473310 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0943c54-38ae-416e-bb08-6921de369d2a-scripts" (OuterVolumeSpecName: "scripts") pod "f0943c54-38ae-416e-bb08-6921de369d2a" (UID: "f0943c54-38ae-416e-bb08-6921de369d2a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:28:07.509589 master-0 kubenswrapper[31456]: I0312 21:28:07.509512 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0943c54-38ae-416e-bb08-6921de369d2a-config-data" (OuterVolumeSpecName: "config-data") pod "f0943c54-38ae-416e-bb08-6921de369d2a" (UID: "f0943c54-38ae-416e-bb08-6921de369d2a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:28:07.517697 master-0 kubenswrapper[31456]: I0312 21:28:07.517634 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0943c54-38ae-416e-bb08-6921de369d2a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0943c54-38ae-416e-bb08-6921de369d2a" (UID: "f0943c54-38ae-416e-bb08-6921de369d2a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:28:07.569972 master-0 kubenswrapper[31456]: I0312 21:28:07.569786 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0943c54-38ae-416e-bb08-6921de369d2a-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 21:28:07.569972 master-0 kubenswrapper[31456]: I0312 21:28:07.569842 31456 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0943c54-38ae-416e-bb08-6921de369d2a-scripts\") on node \"master-0\" DevicePath \"\""
Mar 12 21:28:07.569972 master-0 kubenswrapper[31456]: I0312 21:28:07.569851 31456 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0943c54-38ae-416e-bb08-6921de369d2a-config-data\") on node \"master-0\" DevicePath \"\""
Mar 12 21:28:07.569972 master-0 kubenswrapper[31456]: I0312 21:28:07.569864 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2ww4\" (UniqueName: \"kubernetes.io/projected/f0943c54-38ae-416e-bb08-6921de369d2a-kube-api-access-p2ww4\") on node \"master-0\" DevicePath \"\""
Mar 12 21:28:07.905440 master-0 kubenswrapper[31456]: I0312 21:28:07.905369 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kn96n" event={"ID":"f0943c54-38ae-416e-bb08-6921de369d2a","Type":"ContainerDied","Data":"76edcd0bde2008f8ed29c7e5eba5420e425dfd6df796734dca23cb9e464e48ee"}
Mar 12 21:28:07.905440 master-0 kubenswrapper[31456]: I0312 21:28:07.905418 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76edcd0bde2008f8ed29c7e5eba5420e425dfd6df796734dca23cb9e464e48ee"
Mar 12 21:28:07.905984 master-0 kubenswrapper[31456]: I0312 21:28:07.905465 31456 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kn96n"
Mar 12 21:28:08.105143 master-0 kubenswrapper[31456]: I0312 21:28:08.104554 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 12 21:28:08.111616 master-0 kubenswrapper[31456]: E0312 21:28:08.111558 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0943c54-38ae-416e-bb08-6921de369d2a" containerName="nova-cell0-conductor-db-sync"
Mar 12 21:28:08.113725 master-0 kubenswrapper[31456]: I0312 21:28:08.111890 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0943c54-38ae-416e-bb08-6921de369d2a" containerName="nova-cell0-conductor-db-sync"
Mar 12 21:28:08.113725 master-0 kubenswrapper[31456]: I0312 21:28:08.112144 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0943c54-38ae-416e-bb08-6921de369d2a" containerName="nova-cell0-conductor-db-sync"
Mar 12 21:28:08.113725 master-0 kubenswrapper[31456]: I0312 21:28:08.112893 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Mar 12 21:28:08.115338 master-0 kubenswrapper[31456]: I0312 21:28:08.115294 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 12 21:28:08.120917 master-0 kubenswrapper[31456]: I0312 21:28:08.118715 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Mar 12 21:28:08.182644 master-0 kubenswrapper[31456]: I0312 21:28:08.182491 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/088fd0cd-8b31-47b4-9373-8827070fc8ee-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"088fd0cd-8b31-47b4-9373-8827070fc8ee\") " pod="openstack/nova-cell0-conductor-0"
Mar 12 21:28:08.182933 master-0 kubenswrapper[31456]: I0312 21:28:08.182883 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb6m9\" (UniqueName: \"kubernetes.io/projected/088fd0cd-8b31-47b4-9373-8827070fc8ee-kube-api-access-cb6m9\") pod \"nova-cell0-conductor-0\" (UID: \"088fd0cd-8b31-47b4-9373-8827070fc8ee\") " pod="openstack/nova-cell0-conductor-0"
Mar 12 21:28:08.183220 master-0 kubenswrapper[31456]: I0312 21:28:08.183160 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/088fd0cd-8b31-47b4-9373-8827070fc8ee-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"088fd0cd-8b31-47b4-9373-8827070fc8ee\") " pod="openstack/nova-cell0-conductor-0"
Mar 12 21:28:08.285697 master-0 kubenswrapper[31456]: I0312 21:28:08.285619 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/088fd0cd-8b31-47b4-9373-8827070fc8ee-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"088fd0cd-8b31-47b4-9373-8827070fc8ee\") " pod="openstack/nova-cell0-conductor-0"
Mar 12 21:28:08.285977 master-0 kubenswrapper[31456]: I0312 21:28:08.285800 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb6m9\" (UniqueName: \"kubernetes.io/projected/088fd0cd-8b31-47b4-9373-8827070fc8ee-kube-api-access-cb6m9\") pod \"nova-cell0-conductor-0\" (UID: \"088fd0cd-8b31-47b4-9373-8827070fc8ee\") " pod="openstack/nova-cell0-conductor-0"
Mar 12 21:28:08.286348 master-0 kubenswrapper[31456]: I0312 21:28:08.286277 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/088fd0cd-8b31-47b4-9373-8827070fc8ee-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"088fd0cd-8b31-47b4-9373-8827070fc8ee\") " pod="openstack/nova-cell0-conductor-0"
Mar 12 21:28:08.290465 master-0 kubenswrapper[31456]: I0312 21:28:08.290417 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/088fd0cd-8b31-47b4-9373-8827070fc8ee-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"088fd0cd-8b31-47b4-9373-8827070fc8ee\") " pod="openstack/nova-cell0-conductor-0"
Mar 12 21:28:08.293149 master-0 kubenswrapper[31456]: I0312 21:28:08.293104 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/088fd0cd-8b31-47b4-9373-8827070fc8ee-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"088fd0cd-8b31-47b4-9373-8827070fc8ee\") " pod="openstack/nova-cell0-conductor-0"
Mar 12 21:28:08.301403 master-0 kubenswrapper[31456]: I0312 21:28:08.301344 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb6m9\" (UniqueName: \"kubernetes.io/projected/088fd0cd-8b31-47b4-9373-8827070fc8ee-kube-api-access-cb6m9\") pod \"nova-cell0-conductor-0\" (UID: \"088fd0cd-8b31-47b4-9373-8827070fc8ee\") " pod="openstack/nova-cell0-conductor-0"
Mar 12 21:28:08.438673 master-0 kubenswrapper[31456]: I0312 21:28:08.438476 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Mar 12 21:28:08.976925 master-0 kubenswrapper[31456]: I0312 21:28:08.976830 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 12 21:28:08.987569 master-0 kubenswrapper[31456]: W0312 21:28:08.987508 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod088fd0cd_8b31_47b4_9373_8827070fc8ee.slice/crio-a994f7eb2837d13000fd9b6a32aae5a0c2ea1756a51b8095c0882eccd766188f WatchSource:0}: Error finding container a994f7eb2837d13000fd9b6a32aae5a0c2ea1756a51b8095c0882eccd766188f: Status 404 returned error can't find the container with id a994f7eb2837d13000fd9b6a32aae5a0c2ea1756a51b8095c0882eccd766188f
Mar 12 21:28:09.933228 master-0 kubenswrapper[31456]: I0312 21:28:09.933145 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"088fd0cd-8b31-47b4-9373-8827070fc8ee","Type":"ContainerStarted","Data":"4fb524835d368a66ebe411c2a67e2f4920703824cefb447a4063f2cc73fe2a8c"}
Mar 12 21:28:09.933866 master-0 kubenswrapper[31456]: I0312 21:28:09.933847 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Mar 12 21:28:09.933946 master-0 kubenswrapper[31456]: I0312 21:28:09.933934 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"088fd0cd-8b31-47b4-9373-8827070fc8ee","Type":"ContainerStarted","Data":"a994f7eb2837d13000fd9b6a32aae5a0c2ea1756a51b8095c0882eccd766188f"}
Mar 12 21:28:18.490886 master-0 kubenswrapper[31456]: I0312 21:28:18.490625 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready"
pod="openstack/nova-cell0-conductor-0"
Mar 12 21:28:18.537999 master-0 kubenswrapper[31456]: I0312 21:28:18.537863 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=10.537832493 podStartE2EDuration="10.537832493s" podCreationTimestamp="2026-03-12 21:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:28:09.965016216 +0000 UTC m=+1151.039621564" watchObservedRunningTime="2026-03-12 21:28:18.537832493 +0000 UTC m=+1159.612437861"
Mar 12 21:28:19.063760 master-0 kubenswrapper[31456]: I0312 21:28:19.063682 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-fjbhd"]
Mar 12 21:28:19.065721 master-0 kubenswrapper[31456]: I0312 21:28:19.065676 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-fjbhd"
Mar 12 21:28:19.070899 master-0 kubenswrapper[31456]: I0312 21:28:19.070850 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Mar 12 21:28:19.071205 master-0 kubenswrapper[31456]: I0312 21:28:19.071182 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Mar 12 21:28:19.089779 master-0 kubenswrapper[31456]: I0312 21:28:19.089709 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-fjbhd"]
Mar 12 21:28:19.213556 master-0 kubenswrapper[31456]: I0312 21:28:19.209241 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cd86859-a26e-4b51-9c89-175cf23ef2f1-config-data\") pod \"nova-cell0-cell-mapping-fjbhd\" (UID: \"7cd86859-a26e-4b51-9c89-175cf23ef2f1\") " pod="openstack/nova-cell0-cell-mapping-fjbhd"
Mar 12 21:28:19.213556 master-0 kubenswrapper[31456]: I0312 21:28:19.209298 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cd86859-a26e-4b51-9c89-175cf23ef2f1-scripts\") pod \"nova-cell0-cell-mapping-fjbhd\" (UID: \"7cd86859-a26e-4b51-9c89-175cf23ef2f1\") " pod="openstack/nova-cell0-cell-mapping-fjbhd"
Mar 12 21:28:19.213556 master-0 kubenswrapper[31456]: I0312 21:28:19.209323 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgbpp\" (UniqueName: \"kubernetes.io/projected/7cd86859-a26e-4b51-9c89-175cf23ef2f1-kube-api-access-sgbpp\") pod \"nova-cell0-cell-mapping-fjbhd\" (UID: \"7cd86859-a26e-4b51-9c89-175cf23ef2f1\") " pod="openstack/nova-cell0-cell-mapping-fjbhd"
Mar 12 21:28:19.213556 master-0 kubenswrapper[31456]: I0312 21:28:19.209743 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd86859-a26e-4b51-9c89-175cf23ef2f1-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-fjbhd\" (UID: \"7cd86859-a26e-4b51-9c89-175cf23ef2f1\") " pod="openstack/nova-cell0-cell-mapping-fjbhd"
Mar 12 21:28:19.235830 master-0 kubenswrapper[31456]: I0312 21:28:19.235718 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"]
Mar 12 21:28:19.242643 master-0 kubenswrapper[31456]: I0312 21:28:19.242565 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 12 21:28:19.255313 master-0 kubenswrapper[31456]: I0312 21:28:19.252220 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-ironic-compute-config-data"
Mar 12 21:28:19.281293 master-0 kubenswrapper[31456]: I0312 21:28:19.281231 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"]
Mar 12 21:28:19.314796 master-0 kubenswrapper[31456]: I0312 21:28:19.313134 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd86859-a26e-4b51-9c89-175cf23ef2f1-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-fjbhd\" (UID: \"7cd86859-a26e-4b51-9c89-175cf23ef2f1\") " pod="openstack/nova-cell0-cell-mapping-fjbhd"
Mar 12 21:28:19.314796 master-0 kubenswrapper[31456]: I0312 21:28:19.313200 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k72kg\" (UniqueName: \"kubernetes.io/projected/91b65fb0-ac42-43d0-a834-989fac8d4fd5-kube-api-access-k72kg\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"91b65fb0-ac42-43d0-a834-989fac8d4fd5\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 12 21:28:19.314796 master-0 kubenswrapper[31456]: I0312 21:28:19.313230 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91b65fb0-ac42-43d0-a834-989fac8d4fd5-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"91b65fb0-ac42-43d0-a834-989fac8d4fd5\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 12 21:28:19.314796 master-0 kubenswrapper[31456]: I0312 21:28:19.313271 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b65fb0-ac42-43d0-a834-989fac8d4fd5-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"91b65fb0-ac42-43d0-a834-989fac8d4fd5\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 12 21:28:19.314796 master-0 kubenswrapper[31456]: I0312 21:28:19.313308 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cd86859-a26e-4b51-9c89-175cf23ef2f1-config-data\") pod \"nova-cell0-cell-mapping-fjbhd\" (UID: \"7cd86859-a26e-4b51-9c89-175cf23ef2f1\") " pod="openstack/nova-cell0-cell-mapping-fjbhd"
Mar 12 21:28:19.314796 master-0 kubenswrapper[31456]: I0312 21:28:19.313327 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cd86859-a26e-4b51-9c89-175cf23ef2f1-scripts\") pod \"nova-cell0-cell-mapping-fjbhd\" (UID: \"7cd86859-a26e-4b51-9c89-175cf23ef2f1\") " pod="openstack/nova-cell0-cell-mapping-fjbhd"
Mar 12 21:28:19.314796 master-0 kubenswrapper[31456]: I0312 21:28:19.313348 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgbpp\" (UniqueName: \"kubernetes.io/projected/7cd86859-a26e-4b51-9c89-175cf23ef2f1-kube-api-access-sgbpp\") pod \"nova-cell0-cell-mapping-fjbhd\" (UID: \"7cd86859-a26e-4b51-9c89-175cf23ef2f1\") " pod="openstack/nova-cell0-cell-mapping-fjbhd"
Mar 12 21:28:19.328590 master-0 kubenswrapper[31456]: I0312 21:28:19.325674 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd86859-a26e-4b51-9c89-175cf23ef2f1-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-fjbhd\" (UID: \"7cd86859-a26e-4b51-9c89-175cf23ef2f1\") " pod="openstack/nova-cell0-cell-mapping-fjbhd"
Mar 12 21:28:19.340834 master-0 kubenswrapper[31456]: I0312 21:28:19.340591 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Mar 12 21:28:19.361784 master-0 kubenswrapper[31456]: I0312 21:28:19.342670 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 12 21:28:19.361784 master-0 kubenswrapper[31456]: I0312 21:28:19.343538 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cd86859-a26e-4b51-9c89-175cf23ef2f1-config-data\") pod \"nova-cell0-cell-mapping-fjbhd\" (UID: \"7cd86859-a26e-4b51-9c89-175cf23ef2f1\") " pod="openstack/nova-cell0-cell-mapping-fjbhd"
Mar 12 21:28:19.361784 master-0 kubenswrapper[31456]: I0312 21:28:19.360964 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Mar 12 21:28:19.361784 master-0 kubenswrapper[31456]: I0312 21:28:19.361404 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cd86859-a26e-4b51-9c89-175cf23ef2f1-scripts\") pod \"nova-cell0-cell-mapping-fjbhd\" (UID: \"7cd86859-a26e-4b51-9c89-175cf23ef2f1\") " pod="openstack/nova-cell0-cell-mapping-fjbhd"
Mar 12 21:28:19.389525 master-0 kubenswrapper[31456]: I0312 21:28:19.380644 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Mar 12 21:28:19.392044 master-0 kubenswrapper[31456]: I0312 21:28:19.390934 31456 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-api-0"
Mar 12 21:28:19.410832 master-0 kubenswrapper[31456]: I0312 21:28:19.405288 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgbpp\" (UniqueName: \"kubernetes.io/projected/7cd86859-a26e-4b51-9c89-175cf23ef2f1-kube-api-access-sgbpp\") pod \"nova-cell0-cell-mapping-fjbhd\" (UID: \"7cd86859-a26e-4b51-9c89-175cf23ef2f1\") " pod="openstack/nova-cell0-cell-mapping-fjbhd"
Mar 12 21:28:19.410832 master-0 kubenswrapper[31456]: I0312 21:28:19.405984 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Mar 12 21:28:19.421826 master-0 kubenswrapper[31456]: I0312 21:28:19.419673 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 12 21:28:19.421826 master-0 kubenswrapper[31456]: I0312 21:28:19.420043 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k72kg\" (UniqueName: \"kubernetes.io/projected/91b65fb0-ac42-43d0-a834-989fac8d4fd5-kube-api-access-k72kg\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"91b65fb0-ac42-43d0-a834-989fac8d4fd5\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 12 21:28:19.421826 master-0 kubenswrapper[31456]: I0312 21:28:19.420095 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91b65fb0-ac42-43d0-a834-989fac8d4fd5-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"91b65fb0-ac42-43d0-a834-989fac8d4fd5\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 12 21:28:19.421826 master-0 kubenswrapper[31456]: I0312 21:28:19.420146 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b65fb0-ac42-43d0-a834-989fac8d4fd5-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"91b65fb0-ac42-43d0-a834-989fac8d4fd5\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 12 21:28:19.426387 master-0 kubenswrapper[31456]: I0312 21:28:19.426312 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b65fb0-ac42-43d0-a834-989fac8d4fd5-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"91b65fb0-ac42-43d0-a834-989fac8d4fd5\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 12 21:28:19.462294 master-0 kubenswrapper[31456]: I0312 21:28:19.457610 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 12 21:28:19.462294 master-0 kubenswrapper[31456]: I0312 21:28:19.458452 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91b65fb0-ac42-43d0-a834-989fac8d4fd5-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"91b65fb0-ac42-43d0-a834-989fac8d4fd5\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 12 21:28:19.462294 master-0 kubenswrapper[31456]: I0312 21:28:19.460659 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k72kg\" (UniqueName: \"kubernetes.io/projected/91b65fb0-ac42-43d0-a834-989fac8d4fd5-kube-api-access-k72kg\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"91b65fb0-ac42-43d0-a834-989fac8d4fd5\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 12 21:28:19.475823 master-0 kubenswrapper[31456]: I0312 21:28:19.472877 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 12 21:28:19.507959 master-0 kubenswrapper[31456]: I0312 21:28:19.502945 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-fjbhd"
Mar 12 21:28:19.512880 master-0 kubenswrapper[31456]: I0312 21:28:19.510609 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 12 21:28:19.518361 master-0 kubenswrapper[31456]: I0312 21:28:19.518111 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Mar 12 21:28:19.525760 master-0 kubenswrapper[31456]: I0312 21:28:19.524817 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4947333f-6917-4b79-830e-171f682e0309-config-data\") pod \"nova-api-0\" (UID: \"4947333f-6917-4b79-830e-171f682e0309\") " pod="openstack/nova-api-0"
Mar 12 21:28:19.525760 master-0 kubenswrapper[31456]: I0312 21:28:19.525041 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4947333f-6917-4b79-830e-171f682e0309-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4947333f-6917-4b79-830e-171f682e0309\") " pod="openstack/nova-api-0"
Mar 12 21:28:19.525760 master-0 kubenswrapper[31456]: I0312 21:28:19.525074 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27c2a30e-8258-424f-8896-28a1fa0ebd1d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"27c2a30e-8258-424f-8896-28a1fa0ebd1d\") " pod="openstack/nova-metadata-0"
Mar 12 21:28:19.525760 master-0 kubenswrapper[31456]: I0312 21:28:19.525120 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27c2a30e-8258-424f-8896-28a1fa0ebd1d-logs\") pod \"nova-metadata-0\" (UID: \"27c2a30e-8258-424f-8896-28a1fa0ebd1d\") " pod="openstack/nova-metadata-0"
Mar 12 21:28:19.525760 master-0 kubenswrapper[31456]: I0312 21:28:19.525150 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume
\"kube-api-access-zs6q2\" (UniqueName: \"kubernetes.io/projected/27c2a30e-8258-424f-8896-28a1fa0ebd1d-kube-api-access-zs6q2\") pod \"nova-metadata-0\" (UID: \"27c2a30e-8258-424f-8896-28a1fa0ebd1d\") " pod="openstack/nova-metadata-0"
Mar 12 21:28:19.525760 master-0 kubenswrapper[31456]: I0312 21:28:19.525171 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgfh6\" (UniqueName: \"kubernetes.io/projected/4947333f-6917-4b79-830e-171f682e0309-kube-api-access-jgfh6\") pod \"nova-api-0\" (UID: \"4947333f-6917-4b79-830e-171f682e0309\") " pod="openstack/nova-api-0"
Mar 12 21:28:19.525760 master-0 kubenswrapper[31456]: I0312 21:28:19.525189 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27c2a30e-8258-424f-8896-28a1fa0ebd1d-config-data\") pod \"nova-metadata-0\" (UID: \"27c2a30e-8258-424f-8896-28a1fa0ebd1d\") " pod="openstack/nova-metadata-0"
Mar 12 21:28:19.525760 master-0 kubenswrapper[31456]: I0312 21:28:19.525283 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4947333f-6917-4b79-830e-171f682e0309-logs\") pod \"nova-api-0\" (UID: \"4947333f-6917-4b79-830e-171f682e0309\") " pod="openstack/nova-api-0"
Mar 12 21:28:19.616908 master-0 kubenswrapper[31456]: I0312 21:28:19.616836 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 12 21:28:19.630861 master-0 kubenswrapper[31456]: I0312 21:28:19.630797 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4947333f-6917-4b79-830e-171f682e0309-logs\") pod \"nova-api-0\" (UID: \"4947333f-6917-4b79-830e-171f682e0309\") " pod="openstack/nova-api-0"
Mar 12 21:28:19.631050 master-0 kubenswrapper[31456]: I0312 21:28:19.631035 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4947333f-6917-4b79-830e-171f682e0309-config-data\") pod \"nova-api-0\" (UID: \"4947333f-6917-4b79-830e-171f682e0309\") " pod="openstack/nova-api-0"
Mar 12 21:28:19.631195 master-0 kubenswrapper[31456]: I0312 21:28:19.631181 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4947333f-6917-4b79-830e-171f682e0309-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4947333f-6917-4b79-830e-171f682e0309\") " pod="openstack/nova-api-0"
Mar 12 21:28:19.631299 master-0 kubenswrapper[31456]: I0312 21:28:19.631286 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27c2a30e-8258-424f-8896-28a1fa0ebd1d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"27c2a30e-8258-424f-8896-28a1fa0ebd1d\") " pod="openstack/nova-metadata-0"
Mar 12 21:28:19.631446 master-0 kubenswrapper[31456]: I0312 21:28:19.631433 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27c2a30e-8258-424f-8896-28a1fa0ebd1d-logs\") pod \"nova-metadata-0\" (UID: \"27c2a30e-8258-424f-8896-28a1fa0ebd1d\") " pod="openstack/nova-metadata-0"
Mar 12 21:28:19.631582 master-0 kubenswrapper[31456]: I0312 21:28:19.631567 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klrn7\" (UniqueName: \"kubernetes.io/projected/905901a2-2e45-48ea-bedb-0712d96114ff-kube-api-access-klrn7\") pod \"nova-cell1-novncproxy-0\" (UID: \"905901a2-2e45-48ea-bedb-0712d96114ff\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 21:28:19.631697 master-0 kubenswrapper[31456]: I0312 21:28:19.631683 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs6q2\" (UniqueName: \"kubernetes.io/projected/27c2a30e-8258-424f-8896-28a1fa0ebd1d-kube-api-access-zs6q2\") pod \"nova-metadata-0\" (UID: \"27c2a30e-8258-424f-8896-28a1fa0ebd1d\") " pod="openstack/nova-metadata-0"
Mar 12 21:28:19.632051 master-0 kubenswrapper[31456]: I0312 21:28:19.631800 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgfh6\" (UniqueName: \"kubernetes.io/projected/4947333f-6917-4b79-830e-171f682e0309-kube-api-access-jgfh6\") pod \"nova-api-0\" (UID: \"4947333f-6917-4b79-830e-171f682e0309\") " pod="openstack/nova-api-0"
Mar 12 21:28:19.632646 master-0 kubenswrapper[31456]: I0312 21:28:19.632631 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27c2a30e-8258-424f-8896-28a1fa0ebd1d-config-data\") pod \"nova-metadata-0\" (UID: \"27c2a30e-8258-424f-8896-28a1fa0ebd1d\") " pod="openstack/nova-metadata-0"
Mar 12 21:28:19.632780 master-0 kubenswrapper[31456]: I0312 21:28:19.632766 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/905901a2-2e45-48ea-bedb-0712d96114ff-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"905901a2-2e45-48ea-bedb-0712d96114ff\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 21:28:19.633147 master-0 kubenswrapper[31456]: I0312 21:28:19.633132 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/905901a2-2e45-48ea-bedb-0712d96114ff-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"905901a2-2e45-48ea-bedb-0712d96114ff\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 21:28:19.633570 master-0 kubenswrapper[31456]: I0312 21:28:19.632388 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27c2a30e-8258-424f-8896-28a1fa0ebd1d-logs\") pod \"nova-metadata-0\" (UID: \"27c2a30e-8258-424f-8896-28a1fa0ebd1d\") " pod="openstack/nova-metadata-0"
Mar 12 21:28:19.636991 master-0 kubenswrapper[31456]: I0312 21:28:19.636973 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27c2a30e-8258-424f-8896-28a1fa0ebd1d-config-data\") pod \"nova-metadata-0\" (UID: \"27c2a30e-8258-424f-8896-28a1fa0ebd1d\") " pod="openstack/nova-metadata-0"
Mar 12 21:28:19.637379 master-0 kubenswrapper[31456]: I0312 21:28:19.637363 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4947333f-6917-4b79-830e-171f682e0309-logs\") pod \"nova-api-0\" (UID: \"4947333f-6917-4b79-830e-171f682e0309\") " pod="openstack/nova-api-0"
Mar 12 21:28:19.638872 master-0 kubenswrapper[31456]: I0312 21:28:19.638854 31456 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 12 21:28:19.641958 master-0 kubenswrapper[31456]: I0312 21:28:19.641678 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4947333f-6917-4b79-830e-171f682e0309-config-data\") pod \"nova-api-0\" (UID: \"4947333f-6917-4b79-830e-171f682e0309\") " pod="openstack/nova-api-0" Mar 12 21:28:19.643775 master-0 kubenswrapper[31456]: I0312 21:28:19.643741 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4947333f-6917-4b79-830e-171f682e0309-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4947333f-6917-4b79-830e-171f682e0309\") " pod="openstack/nova-api-0" Mar 12 21:28:19.679102 master-0 kubenswrapper[31456]: I0312 21:28:19.664527 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27c2a30e-8258-424f-8896-28a1fa0ebd1d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"27c2a30e-8258-424f-8896-28a1fa0ebd1d\") " pod="openstack/nova-metadata-0" Mar 12 21:28:19.696062 master-0 kubenswrapper[31456]: I0312 21:28:19.690599 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgfh6\" (UniqueName: \"kubernetes.io/projected/4947333f-6917-4b79-830e-171f682e0309-kube-api-access-jgfh6\") pod \"nova-api-0\" (UID: \"4947333f-6917-4b79-830e-171f682e0309\") " pod="openstack/nova-api-0" Mar 12 21:28:19.696062 master-0 kubenswrapper[31456]: I0312 21:28:19.690681 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 21:28:19.696062 master-0 kubenswrapper[31456]: I0312 21:28:19.692279 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 21:28:19.696062 master-0 kubenswrapper[31456]: I0312 21:28:19.692357 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 21:28:19.696062 master-0 kubenswrapper[31456]: I0312 21:28:19.693404 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 12 21:28:19.696062 master-0 kubenswrapper[31456]: I0312 21:28:19.695986 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 12 21:28:19.702684 master-0 kubenswrapper[31456]: I0312 21:28:19.701408 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs6q2\" (UniqueName: \"kubernetes.io/projected/27c2a30e-8258-424f-8896-28a1fa0ebd1d-kube-api-access-zs6q2\") pod \"nova-metadata-0\" (UID: \"27c2a30e-8258-424f-8896-28a1fa0ebd1d\") " pod="openstack/nova-metadata-0" Mar 12 21:28:19.707962 master-0 kubenswrapper[31456]: I0312 21:28:19.707912 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76bffd747-5b96l"] Mar 12 21:28:19.710197 master-0 kubenswrapper[31456]: I0312 21:28:19.710163 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76bffd747-5b96l" Mar 12 21:28:19.736344 master-0 kubenswrapper[31456]: I0312 21:28:19.736301 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e689ffe-338d-4b20-a02e-6819b05cf05d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1e689ffe-338d-4b20-a02e-6819b05cf05d\") " pod="openstack/nova-scheduler-0" Mar 12 21:28:19.736570 master-0 kubenswrapper[31456]: I0312 21:28:19.736364 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d93e5d01-b4a5-4612-bded-2615337961dc-ovsdbserver-nb\") pod \"dnsmasq-dns-76bffd747-5b96l\" (UID: \"d93e5d01-b4a5-4612-bded-2615337961dc\") " pod="openstack/dnsmasq-dns-76bffd747-5b96l" Mar 12 21:28:19.736570 master-0 kubenswrapper[31456]: I0312 21:28:19.736418 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klrn7\" (UniqueName: \"kubernetes.io/projected/905901a2-2e45-48ea-bedb-0712d96114ff-kube-api-access-klrn7\") pod \"nova-cell1-novncproxy-0\" (UID: \"905901a2-2e45-48ea-bedb-0712d96114ff\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 21:28:19.736570 master-0 kubenswrapper[31456]: I0312 21:28:19.736436 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d93e5d01-b4a5-4612-bded-2615337961dc-dns-svc\") pod \"dnsmasq-dns-76bffd747-5b96l\" (UID: \"d93e5d01-b4a5-4612-bded-2615337961dc\") " pod="openstack/dnsmasq-dns-76bffd747-5b96l" Mar 12 21:28:19.736570 master-0 kubenswrapper[31456]: I0312 21:28:19.736504 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/d93e5d01-b4a5-4612-bded-2615337961dc-dns-swift-storage-0\") pod \"dnsmasq-dns-76bffd747-5b96l\" (UID: \"d93e5d01-b4a5-4612-bded-2615337961dc\") " pod="openstack/dnsmasq-dns-76bffd747-5b96l" Mar 12 21:28:19.736570 master-0 kubenswrapper[31456]: I0312 21:28:19.736553 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/905901a2-2e45-48ea-bedb-0712d96114ff-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"905901a2-2e45-48ea-bedb-0712d96114ff\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 21:28:19.736724 master-0 kubenswrapper[31456]: I0312 21:28:19.736595 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsqgh\" (UniqueName: \"kubernetes.io/projected/1e689ffe-338d-4b20-a02e-6819b05cf05d-kube-api-access-xsqgh\") pod \"nova-scheduler-0\" (UID: \"1e689ffe-338d-4b20-a02e-6819b05cf05d\") " pod="openstack/nova-scheduler-0" Mar 12 21:28:19.736724 master-0 kubenswrapper[31456]: I0312 21:28:19.736620 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e689ffe-338d-4b20-a02e-6819b05cf05d-config-data\") pod \"nova-scheduler-0\" (UID: \"1e689ffe-338d-4b20-a02e-6819b05cf05d\") " pod="openstack/nova-scheduler-0" Mar 12 21:28:19.736724 master-0 kubenswrapper[31456]: I0312 21:28:19.736650 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/905901a2-2e45-48ea-bedb-0712d96114ff-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"905901a2-2e45-48ea-bedb-0712d96114ff\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 21:28:19.736857 master-0 kubenswrapper[31456]: I0312 21:28:19.736733 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d93e5d01-b4a5-4612-bded-2615337961dc-config\") pod \"dnsmasq-dns-76bffd747-5b96l\" (UID: \"d93e5d01-b4a5-4612-bded-2615337961dc\") " pod="openstack/dnsmasq-dns-76bffd747-5b96l" Mar 12 21:28:19.736857 master-0 kubenswrapper[31456]: I0312 21:28:19.736768 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d93e5d01-b4a5-4612-bded-2615337961dc-ovsdbserver-sb\") pod \"dnsmasq-dns-76bffd747-5b96l\" (UID: \"d93e5d01-b4a5-4612-bded-2615337961dc\") " pod="openstack/dnsmasq-dns-76bffd747-5b96l" Mar 12 21:28:19.736857 master-0 kubenswrapper[31456]: I0312 21:28:19.736836 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9ckr\" (UniqueName: \"kubernetes.io/projected/d93e5d01-b4a5-4612-bded-2615337961dc-kube-api-access-m9ckr\") pod \"dnsmasq-dns-76bffd747-5b96l\" (UID: \"d93e5d01-b4a5-4612-bded-2615337961dc\") " pod="openstack/dnsmasq-dns-76bffd747-5b96l" Mar 12 21:28:19.744526 master-0 kubenswrapper[31456]: I0312 21:28:19.744437 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76bffd747-5b96l"] Mar 12 21:28:19.746468 master-0 kubenswrapper[31456]: I0312 21:28:19.746431 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/905901a2-2e45-48ea-bedb-0712d96114ff-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"905901a2-2e45-48ea-bedb-0712d96114ff\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 21:28:19.746630 master-0 kubenswrapper[31456]: I0312 21:28:19.746435 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/905901a2-2e45-48ea-bedb-0712d96114ff-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"905901a2-2e45-48ea-bedb-0712d96114ff\") " pod="openstack/nova-cell1-novncproxy-0" 
Mar 12 21:28:19.806133 master-0 kubenswrapper[31456]: I0312 21:28:19.805656 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klrn7\" (UniqueName: \"kubernetes.io/projected/905901a2-2e45-48ea-bedb-0712d96114ff-kube-api-access-klrn7\") pod \"nova-cell1-novncproxy-0\" (UID: \"905901a2-2e45-48ea-bedb-0712d96114ff\") " pod="openstack/nova-cell1-novncproxy-0" Mar 12 21:28:19.853845 master-0 kubenswrapper[31456]: I0312 21:28:19.853675 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d93e5d01-b4a5-4612-bded-2615337961dc-dns-svc\") pod \"dnsmasq-dns-76bffd747-5b96l\" (UID: \"d93e5d01-b4a5-4612-bded-2615337961dc\") " pod="openstack/dnsmasq-dns-76bffd747-5b96l" Mar 12 21:28:19.856222 master-0 kubenswrapper[31456]: I0312 21:28:19.856113 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d93e5d01-b4a5-4612-bded-2615337961dc-dns-svc\") pod \"dnsmasq-dns-76bffd747-5b96l\" (UID: \"d93e5d01-b4a5-4612-bded-2615337961dc\") " pod="openstack/dnsmasq-dns-76bffd747-5b96l" Mar 12 21:28:19.857048 master-0 kubenswrapper[31456]: I0312 21:28:19.856401 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d93e5d01-b4a5-4612-bded-2615337961dc-dns-swift-storage-0\") pod \"dnsmasq-dns-76bffd747-5b96l\" (UID: \"d93e5d01-b4a5-4612-bded-2615337961dc\") " pod="openstack/dnsmasq-dns-76bffd747-5b96l" Mar 12 21:28:19.857297 master-0 kubenswrapper[31456]: I0312 21:28:19.857281 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsqgh\" (UniqueName: \"kubernetes.io/projected/1e689ffe-338d-4b20-a02e-6819b05cf05d-kube-api-access-xsqgh\") pod \"nova-scheduler-0\" (UID: \"1e689ffe-338d-4b20-a02e-6819b05cf05d\") " pod="openstack/nova-scheduler-0" Mar 12 21:28:19.857411 
master-0 kubenswrapper[31456]: I0312 21:28:19.857397 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e689ffe-338d-4b20-a02e-6819b05cf05d-config-data\") pod \"nova-scheduler-0\" (UID: \"1e689ffe-338d-4b20-a02e-6819b05cf05d\") " pod="openstack/nova-scheduler-0" Mar 12 21:28:19.857726 master-0 kubenswrapper[31456]: I0312 21:28:19.857713 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d93e5d01-b4a5-4612-bded-2615337961dc-config\") pod \"dnsmasq-dns-76bffd747-5b96l\" (UID: \"d93e5d01-b4a5-4612-bded-2615337961dc\") " pod="openstack/dnsmasq-dns-76bffd747-5b96l" Mar 12 21:28:19.857896 master-0 kubenswrapper[31456]: I0312 21:28:19.857883 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d93e5d01-b4a5-4612-bded-2615337961dc-ovsdbserver-sb\") pod \"dnsmasq-dns-76bffd747-5b96l\" (UID: \"d93e5d01-b4a5-4612-bded-2615337961dc\") " pod="openstack/dnsmasq-dns-76bffd747-5b96l" Mar 12 21:28:19.860234 master-0 kubenswrapper[31456]: I0312 21:28:19.860201 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9ckr\" (UniqueName: \"kubernetes.io/projected/d93e5d01-b4a5-4612-bded-2615337961dc-kube-api-access-m9ckr\") pod \"dnsmasq-dns-76bffd747-5b96l\" (UID: \"d93e5d01-b4a5-4612-bded-2615337961dc\") " pod="openstack/dnsmasq-dns-76bffd747-5b96l" Mar 12 21:28:19.860392 master-0 kubenswrapper[31456]: I0312 21:28:19.860376 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e689ffe-338d-4b20-a02e-6819b05cf05d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1e689ffe-338d-4b20-a02e-6819b05cf05d\") " pod="openstack/nova-scheduler-0" Mar 12 21:28:19.860515 master-0 kubenswrapper[31456]: I0312 
21:28:19.860503 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d93e5d01-b4a5-4612-bded-2615337961dc-ovsdbserver-nb\") pod \"dnsmasq-dns-76bffd747-5b96l\" (UID: \"d93e5d01-b4a5-4612-bded-2615337961dc\") " pod="openstack/dnsmasq-dns-76bffd747-5b96l" Mar 12 21:28:19.861903 master-0 kubenswrapper[31456]: I0312 21:28:19.858728 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d93e5d01-b4a5-4612-bded-2615337961dc-ovsdbserver-sb\") pod \"dnsmasq-dns-76bffd747-5b96l\" (UID: \"d93e5d01-b4a5-4612-bded-2615337961dc\") " pod="openstack/dnsmasq-dns-76bffd747-5b96l" Mar 12 21:28:19.861903 master-0 kubenswrapper[31456]: I0312 21:28:19.856987 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d93e5d01-b4a5-4612-bded-2615337961dc-dns-swift-storage-0\") pod \"dnsmasq-dns-76bffd747-5b96l\" (UID: \"d93e5d01-b4a5-4612-bded-2615337961dc\") " pod="openstack/dnsmasq-dns-76bffd747-5b96l" Mar 12 21:28:19.862575 master-0 kubenswrapper[31456]: I0312 21:28:19.862543 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d93e5d01-b4a5-4612-bded-2615337961dc-config\") pod \"dnsmasq-dns-76bffd747-5b96l\" (UID: \"d93e5d01-b4a5-4612-bded-2615337961dc\") " pod="openstack/dnsmasq-dns-76bffd747-5b96l" Mar 12 21:28:19.866955 master-0 kubenswrapper[31456]: I0312 21:28:19.866919 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e689ffe-338d-4b20-a02e-6819b05cf05d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1e689ffe-338d-4b20-a02e-6819b05cf05d\") " pod="openstack/nova-scheduler-0" Mar 12 21:28:19.868087 master-0 kubenswrapper[31456]: I0312 21:28:19.868005 31456 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d93e5d01-b4a5-4612-bded-2615337961dc-ovsdbserver-nb\") pod \"dnsmasq-dns-76bffd747-5b96l\" (UID: \"d93e5d01-b4a5-4612-bded-2615337961dc\") " pod="openstack/dnsmasq-dns-76bffd747-5b96l" Mar 12 21:28:19.873308 master-0 kubenswrapper[31456]: I0312 21:28:19.873274 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsqgh\" (UniqueName: \"kubernetes.io/projected/1e689ffe-338d-4b20-a02e-6819b05cf05d-kube-api-access-xsqgh\") pod \"nova-scheduler-0\" (UID: \"1e689ffe-338d-4b20-a02e-6819b05cf05d\") " pod="openstack/nova-scheduler-0" Mar 12 21:28:19.879665 master-0 kubenswrapper[31456]: I0312 21:28:19.879483 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e689ffe-338d-4b20-a02e-6819b05cf05d-config-data\") pod \"nova-scheduler-0\" (UID: \"1e689ffe-338d-4b20-a02e-6819b05cf05d\") " pod="openstack/nova-scheduler-0" Mar 12 21:28:19.890781 master-0 kubenswrapper[31456]: I0312 21:28:19.890709 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9ckr\" (UniqueName: \"kubernetes.io/projected/d93e5d01-b4a5-4612-bded-2615337961dc-kube-api-access-m9ckr\") pod \"dnsmasq-dns-76bffd747-5b96l\" (UID: \"d93e5d01-b4a5-4612-bded-2615337961dc\") " pod="openstack/dnsmasq-dns-76bffd747-5b96l" Mar 12 21:28:19.947305 master-0 kubenswrapper[31456]: I0312 21:28:19.945602 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 21:28:20.010261 master-0 kubenswrapper[31456]: I0312 21:28:20.009943 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 12 21:28:20.025291 master-0 kubenswrapper[31456]: I0312 21:28:20.024543 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 21:28:20.034661 master-0 kubenswrapper[31456]: I0312 21:28:20.034050 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76bffd747-5b96l" Mar 12 21:28:20.499236 master-0 kubenswrapper[31456]: I0312 21:28:20.498053 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-fjbhd"] Mar 12 21:28:20.666824 master-0 kubenswrapper[31456]: I0312 21:28:20.666699 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Mar 12 21:28:20.774338 master-0 kubenswrapper[31456]: I0312 21:28:20.766370 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 21:28:20.794480 master-0 kubenswrapper[31456]: W0312 21:28:20.794076 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4947333f_6917_4b79_830e_171f682e0309.slice/crio-363ff73f2ac9ba41f3de67cfd37b849e62904898ade8febda67c843536fc6e4b WatchSource:0}: Error finding container 363ff73f2ac9ba41f3de67cfd37b849e62904898ade8febda67c843536fc6e4b: Status 404 returned error can't find the container with id 363ff73f2ac9ba41f3de67cfd37b849e62904898ade8febda67c843536fc6e4b Mar 12 21:28:20.810208 master-0 kubenswrapper[31456]: I0312 21:28:20.800337 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 21:28:20.816619 master-0 kubenswrapper[31456]: I0312 21:28:20.816561 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 21:28:20.826357 master-0 kubenswrapper[31456]: W0312 21:28:20.826312 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27c2a30e_8258_424f_8896_28a1fa0ebd1d.slice/crio-f2141a2c54e4e3adcfcb23fb555e4d4314edb94e34553d2c9feaa111163f3b75 
WatchSource:0}: Error finding container f2141a2c54e4e3adcfcb23fb555e4d4314edb94e34553d2c9feaa111163f3b75: Status 404 returned error can't find the container with id f2141a2c54e4e3adcfcb23fb555e4d4314edb94e34553d2c9feaa111163f3b75 Mar 12 21:28:20.967003 master-0 kubenswrapper[31456]: I0312 21:28:20.960266 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hnj5b"] Mar 12 21:28:20.967952 master-0 kubenswrapper[31456]: I0312 21:28:20.967691 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hnj5b" Mar 12 21:28:20.971729 master-0 kubenswrapper[31456]: I0312 21:28:20.970872 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Mar 12 21:28:20.972652 master-0 kubenswrapper[31456]: I0312 21:28:20.972591 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 12 21:28:20.976745 master-0 kubenswrapper[31456]: I0312 21:28:20.974229 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hnj5b"] Mar 12 21:28:21.006117 master-0 kubenswrapper[31456]: I0312 21:28:21.006057 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c59d7ee2-3288-42f9-9202-abedc026040d-scripts\") pod \"nova-cell1-conductor-db-sync-hnj5b\" (UID: \"c59d7ee2-3288-42f9-9202-abedc026040d\") " pod="openstack/nova-cell1-conductor-db-sync-hnj5b" Mar 12 21:28:21.006331 master-0 kubenswrapper[31456]: I0312 21:28:21.006158 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqdpn\" (UniqueName: \"kubernetes.io/projected/c59d7ee2-3288-42f9-9202-abedc026040d-kube-api-access-xqdpn\") pod \"nova-cell1-conductor-db-sync-hnj5b\" (UID: \"c59d7ee2-3288-42f9-9202-abedc026040d\") " 
pod="openstack/nova-cell1-conductor-db-sync-hnj5b" Mar 12 21:28:21.006331 master-0 kubenswrapper[31456]: I0312 21:28:21.006230 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c59d7ee2-3288-42f9-9202-abedc026040d-config-data\") pod \"nova-cell1-conductor-db-sync-hnj5b\" (UID: \"c59d7ee2-3288-42f9-9202-abedc026040d\") " pod="openstack/nova-cell1-conductor-db-sync-hnj5b" Mar 12 21:28:21.006331 master-0 kubenswrapper[31456]: I0312 21:28:21.006299 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c59d7ee2-3288-42f9-9202-abedc026040d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-hnj5b\" (UID: \"c59d7ee2-3288-42f9-9202-abedc026040d\") " pod="openstack/nova-cell1-conductor-db-sync-hnj5b" Mar 12 21:28:21.061162 master-0 kubenswrapper[31456]: I0312 21:28:21.061104 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 21:28:21.078001 master-0 kubenswrapper[31456]: I0312 21:28:21.077934 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76bffd747-5b96l"] Mar 12 21:28:21.109338 master-0 kubenswrapper[31456]: I0312 21:28:21.108269 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c59d7ee2-3288-42f9-9202-abedc026040d-config-data\") pod \"nova-cell1-conductor-db-sync-hnj5b\" (UID: \"c59d7ee2-3288-42f9-9202-abedc026040d\") " pod="openstack/nova-cell1-conductor-db-sync-hnj5b" Mar 12 21:28:21.109338 master-0 kubenswrapper[31456]: I0312 21:28:21.108368 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c59d7ee2-3288-42f9-9202-abedc026040d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-hnj5b\" (UID: 
\"c59d7ee2-3288-42f9-9202-abedc026040d\") " pod="openstack/nova-cell1-conductor-db-sync-hnj5b" Mar 12 21:28:21.109338 master-0 kubenswrapper[31456]: I0312 21:28:21.108446 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c59d7ee2-3288-42f9-9202-abedc026040d-scripts\") pod \"nova-cell1-conductor-db-sync-hnj5b\" (UID: \"c59d7ee2-3288-42f9-9202-abedc026040d\") " pod="openstack/nova-cell1-conductor-db-sync-hnj5b" Mar 12 21:28:21.109338 master-0 kubenswrapper[31456]: I0312 21:28:21.108517 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqdpn\" (UniqueName: \"kubernetes.io/projected/c59d7ee2-3288-42f9-9202-abedc026040d-kube-api-access-xqdpn\") pod \"nova-cell1-conductor-db-sync-hnj5b\" (UID: \"c59d7ee2-3288-42f9-9202-abedc026040d\") " pod="openstack/nova-cell1-conductor-db-sync-hnj5b" Mar 12 21:28:21.115864 master-0 kubenswrapper[31456]: I0312 21:28:21.115162 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c59d7ee2-3288-42f9-9202-abedc026040d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-hnj5b\" (UID: \"c59d7ee2-3288-42f9-9202-abedc026040d\") " pod="openstack/nova-cell1-conductor-db-sync-hnj5b" Mar 12 21:28:21.116657 master-0 kubenswrapper[31456]: I0312 21:28:21.116586 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c59d7ee2-3288-42f9-9202-abedc026040d-scripts\") pod \"nova-cell1-conductor-db-sync-hnj5b\" (UID: \"c59d7ee2-3288-42f9-9202-abedc026040d\") " pod="openstack/nova-cell1-conductor-db-sync-hnj5b" Mar 12 21:28:21.128876 master-0 kubenswrapper[31456]: I0312 21:28:21.128483 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c59d7ee2-3288-42f9-9202-abedc026040d-config-data\") pod 
\"nova-cell1-conductor-db-sync-hnj5b\" (UID: \"c59d7ee2-3288-42f9-9202-abedc026040d\") " pod="openstack/nova-cell1-conductor-db-sync-hnj5b" Mar 12 21:28:21.131422 master-0 kubenswrapper[31456]: I0312 21:28:21.131380 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqdpn\" (UniqueName: \"kubernetes.io/projected/c59d7ee2-3288-42f9-9202-abedc026040d-kube-api-access-xqdpn\") pod \"nova-cell1-conductor-db-sync-hnj5b\" (UID: \"c59d7ee2-3288-42f9-9202-abedc026040d\") " pod="openstack/nova-cell1-conductor-db-sync-hnj5b" Mar 12 21:28:21.188258 master-0 kubenswrapper[31456]: I0312 21:28:21.188001 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"91b65fb0-ac42-43d0-a834-989fac8d4fd5","Type":"ContainerStarted","Data":"b624cfb44dc33c1da43df3154659e2abdf761851c96713472b26d43b0f4988ac"} Mar 12 21:28:21.188258 master-0 kubenswrapper[31456]: I0312 21:28:21.188042 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"27c2a30e-8258-424f-8896-28a1fa0ebd1d","Type":"ContainerStarted","Data":"f2141a2c54e4e3adcfcb23fb555e4d4314edb94e34553d2c9feaa111163f3b75"} Mar 12 21:28:21.188258 master-0 kubenswrapper[31456]: I0312 21:28:21.188052 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1e689ffe-338d-4b20-a02e-6819b05cf05d","Type":"ContainerStarted","Data":"00be4a85f0ff9ce1b4566bc542c200ebcc8cd364d15eff33463e3e7b133391cc"} Mar 12 21:28:21.188258 master-0 kubenswrapper[31456]: I0312 21:28:21.188062 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76bffd747-5b96l" event={"ID":"d93e5d01-b4a5-4612-bded-2615337961dc","Type":"ContainerStarted","Data":"af18daabc6ba0586c4572d464a511c5cedc36f7aff7b36816a6b6af6542604e2"} Mar 12 21:28:21.197995 master-0 kubenswrapper[31456]: I0312 21:28:21.197952 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-cell-mapping-fjbhd" event={"ID":"7cd86859-a26e-4b51-9c89-175cf23ef2f1","Type":"ContainerStarted","Data":"8180fbd11113120a095e440d6f4fdf495d92b426aa049996fff558596e39fa21"} Mar 12 21:28:21.198083 master-0 kubenswrapper[31456]: I0312 21:28:21.197993 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-fjbhd" event={"ID":"7cd86859-a26e-4b51-9c89-175cf23ef2f1","Type":"ContainerStarted","Data":"13e0c5cad4aed5da3a73886a0bfcce30cceb0f324e9ac0bfaa23bc8cf9f3ca77"} Mar 12 21:28:21.203710 master-0 kubenswrapper[31456]: I0312 21:28:21.203591 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"905901a2-2e45-48ea-bedb-0712d96114ff","Type":"ContainerStarted","Data":"ab3fe806af8da8cb17e5a160ab7a08d8dea1d3b145b35109ab3aff16be4d5a33"} Mar 12 21:28:21.205075 master-0 kubenswrapper[31456]: I0312 21:28:21.205002 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4947333f-6917-4b79-830e-171f682e0309","Type":"ContainerStarted","Data":"363ff73f2ac9ba41f3de67cfd37b849e62904898ade8febda67c843536fc6e4b"} Mar 12 21:28:21.233606 master-0 kubenswrapper[31456]: I0312 21:28:21.233518 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-fjbhd" podStartSLOduration=2.233494746 podStartE2EDuration="2.233494746s" podCreationTimestamp="2026-03-12 21:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:28:21.220666006 +0000 UTC m=+1162.295271334" watchObservedRunningTime="2026-03-12 21:28:21.233494746 +0000 UTC m=+1162.308100074" Mar 12 21:28:21.295826 master-0 kubenswrapper[31456]: I0312 21:28:21.295272 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hnj5b" Mar 12 21:28:21.838841 master-0 kubenswrapper[31456]: W0312 21:28:21.834963 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc59d7ee2_3288_42f9_9202_abedc026040d.slice/crio-aec8f7704f92963e0fd53d219475e755fc98b7f168916678b530e28e86e15c3f WatchSource:0}: Error finding container aec8f7704f92963e0fd53d219475e755fc98b7f168916678b530e28e86e15c3f: Status 404 returned error can't find the container with id aec8f7704f92963e0fd53d219475e755fc98b7f168916678b530e28e86e15c3f Mar 12 21:28:21.849970 master-0 kubenswrapper[31456]: I0312 21:28:21.849841 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hnj5b"] Mar 12 21:28:22.229883 master-0 kubenswrapper[31456]: I0312 21:28:22.222609 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hnj5b" event={"ID":"c59d7ee2-3288-42f9-9202-abedc026040d","Type":"ContainerStarted","Data":"725acccafd28a1ddf7b25fe2a562bf0fadce02f520cadf660f3a796c6757787f"} Mar 12 21:28:22.229883 master-0 kubenswrapper[31456]: I0312 21:28:22.222669 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hnj5b" event={"ID":"c59d7ee2-3288-42f9-9202-abedc026040d","Type":"ContainerStarted","Data":"aec8f7704f92963e0fd53d219475e755fc98b7f168916678b530e28e86e15c3f"} Mar 12 21:28:22.233203 master-0 kubenswrapper[31456]: I0312 21:28:22.233155 31456 generic.go:334] "Generic (PLEG): container finished" podID="d93e5d01-b4a5-4612-bded-2615337961dc" containerID="fededd9c7ecda07ceb92e903441a43a7ae4210353f6cecaaa38ac3bbe41396dd" exitCode=0 Mar 12 21:28:22.235488 master-0 kubenswrapper[31456]: I0312 21:28:22.235373 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76bffd747-5b96l" 
event={"ID":"d93e5d01-b4a5-4612-bded-2615337961dc","Type":"ContainerDied","Data":"fededd9c7ecda07ceb92e903441a43a7ae4210353f6cecaaa38ac3bbe41396dd"} Mar 12 21:28:22.298847 master-0 kubenswrapper[31456]: I0312 21:28:22.297364 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-hnj5b" podStartSLOduration=2.297337104 podStartE2EDuration="2.297337104s" podCreationTimestamp="2026-03-12 21:28:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:28:22.247066188 +0000 UTC m=+1163.321671516" watchObservedRunningTime="2026-03-12 21:28:22.297337104 +0000 UTC m=+1163.371942442" Mar 12 21:28:23.830921 master-0 kubenswrapper[31456]: I0312 21:28:23.828455 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 21:28:23.860183 master-0 kubenswrapper[31456]: I0312 21:28:23.860124 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 21:28:25.271217 master-0 kubenswrapper[31456]: I0312 21:28:25.270374 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"905901a2-2e45-48ea-bedb-0712d96114ff","Type":"ContainerStarted","Data":"9335ee5c8100e2aa9057913b844f402f415fa9a5bbc86885a7738f88360806f7"} Mar 12 21:28:25.271217 master-0 kubenswrapper[31456]: I0312 21:28:25.270564 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="905901a2-2e45-48ea-bedb-0712d96114ff" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://9335ee5c8100e2aa9057913b844f402f415fa9a5bbc86885a7738f88360806f7" gracePeriod=30 Mar 12 21:28:25.276734 master-0 kubenswrapper[31456]: I0312 21:28:25.276675 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"4947333f-6917-4b79-830e-171f682e0309","Type":"ContainerStarted","Data":"db98bee1bcf9804748089488ebc128f3520f410758576e43ef795429c434eee7"} Mar 12 21:28:25.276734 master-0 kubenswrapper[31456]: I0312 21:28:25.276733 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4947333f-6917-4b79-830e-171f682e0309","Type":"ContainerStarted","Data":"3b7e4d1a5b83b8d16214618d2bc1bf47d9a2ee5baa56bbf1dd86d7081e40187e"} Mar 12 21:28:25.285039 master-0 kubenswrapper[31456]: I0312 21:28:25.284979 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"27c2a30e-8258-424f-8896-28a1fa0ebd1d","Type":"ContainerStarted","Data":"c92add9e45021b674eea91ee6e354738ace95dceb3ff9062ea14d5035758af7d"} Mar 12 21:28:25.285113 master-0 kubenswrapper[31456]: I0312 21:28:25.285044 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"27c2a30e-8258-424f-8896-28a1fa0ebd1d","Type":"ContainerStarted","Data":"ce31a2cdd0643902b11fc2aaccdcb3c2a38b66121f660a5cd2190c2c2dd9713e"} Mar 12 21:28:25.285248 master-0 kubenswrapper[31456]: I0312 21:28:25.285207 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="27c2a30e-8258-424f-8896-28a1fa0ebd1d" containerName="nova-metadata-log" containerID="cri-o://ce31a2cdd0643902b11fc2aaccdcb3c2a38b66121f660a5cd2190c2c2dd9713e" gracePeriod=30 Mar 12 21:28:25.285376 master-0 kubenswrapper[31456]: I0312 21:28:25.285343 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="27c2a30e-8258-424f-8896-28a1fa0ebd1d" containerName="nova-metadata-metadata" containerID="cri-o://c92add9e45021b674eea91ee6e354738ace95dceb3ff9062ea14d5035758af7d" gracePeriod=30 Mar 12 21:28:25.288290 master-0 kubenswrapper[31456]: I0312 21:28:25.288239 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"1e689ffe-338d-4b20-a02e-6819b05cf05d","Type":"ContainerStarted","Data":"cda09692e8b4c5eeb04ed4701693116ed3cd65e008f1414580b606625a09505e"} Mar 12 21:28:25.292518 master-0 kubenswrapper[31456]: I0312 21:28:25.292044 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76bffd747-5b96l" event={"ID":"d93e5d01-b4a5-4612-bded-2615337961dc","Type":"ContainerStarted","Data":"a302426da4466b11828c935809f2ff48ba6dac0f5677b8073b1e9beb93e81531"} Mar 12 21:28:25.293171 master-0 kubenswrapper[31456]: I0312 21:28:25.293139 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76bffd747-5b96l" Mar 12 21:28:25.397888 master-0 kubenswrapper[31456]: I0312 21:28:25.397754 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.16040903 podStartE2EDuration="6.397734083s" podCreationTimestamp="2026-03-12 21:28:19 +0000 UTC" firstStartedPulling="2026-03-12 21:28:21.07996449 +0000 UTC m=+1162.154569818" lastFinishedPulling="2026-03-12 21:28:24.317289543 +0000 UTC m=+1165.391894871" observedRunningTime="2026-03-12 21:28:25.387659009 +0000 UTC m=+1166.462264337" watchObservedRunningTime="2026-03-12 21:28:25.397734083 +0000 UTC m=+1166.472339421" Mar 12 21:28:25.886179 master-0 kubenswrapper[31456]: I0312 21:28:25.885754 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76bffd747-5b96l" podStartSLOduration=6.885728283 podStartE2EDuration="6.885728283s" podCreationTimestamp="2026-03-12 21:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:28:25.879240647 +0000 UTC m=+1166.953845975" watchObservedRunningTime="2026-03-12 21:28:25.885728283 +0000 UTC m=+1166.960333621" Mar 12 21:28:26.308223 master-0 kubenswrapper[31456]: I0312 21:28:26.306360 31456 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.816138952 podStartE2EDuration="7.306344955s" podCreationTimestamp="2026-03-12 21:28:19 +0000 UTC" firstStartedPulling="2026-03-12 21:28:20.829753924 +0000 UTC m=+1161.904359262" lastFinishedPulling="2026-03-12 21:28:24.319959947 +0000 UTC m=+1165.394565265" observedRunningTime="2026-03-12 21:28:26.30577337 +0000 UTC m=+1167.380378698" watchObservedRunningTime="2026-03-12 21:28:26.306344955 +0000 UTC m=+1167.380950283" Mar 12 21:28:26.351848 master-0 kubenswrapper[31456]: I0312 21:28:26.351239 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.8568186349999998 podStartE2EDuration="7.3512182s" podCreationTimestamp="2026-03-12 21:28:19 +0000 UTC" firstStartedPulling="2026-03-12 21:28:20.81305399 +0000 UTC m=+1161.887659318" lastFinishedPulling="2026-03-12 21:28:24.307453555 +0000 UTC m=+1165.382058883" observedRunningTime="2026-03-12 21:28:26.334472255 +0000 UTC m=+1167.409077583" watchObservedRunningTime="2026-03-12 21:28:26.3512182 +0000 UTC m=+1167.425823528" Mar 12 21:28:26.356993 master-0 kubenswrapper[31456]: I0312 21:28:26.356934 31456 generic.go:334] "Generic (PLEG): container finished" podID="27c2a30e-8258-424f-8896-28a1fa0ebd1d" containerID="c92add9e45021b674eea91ee6e354738ace95dceb3ff9062ea14d5035758af7d" exitCode=0 Mar 12 21:28:26.356993 master-0 kubenswrapper[31456]: I0312 21:28:26.356977 31456 generic.go:334] "Generic (PLEG): container finished" podID="27c2a30e-8258-424f-8896-28a1fa0ebd1d" containerID="ce31a2cdd0643902b11fc2aaccdcb3c2a38b66121f660a5cd2190c2c2dd9713e" exitCode=143 Mar 12 21:28:26.357162 master-0 kubenswrapper[31456]: I0312 21:28:26.357038 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"27c2a30e-8258-424f-8896-28a1fa0ebd1d","Type":"ContainerDied","Data":"c92add9e45021b674eea91ee6e354738ace95dceb3ff9062ea14d5035758af7d"} Mar 12 21:28:26.357162 master-0 kubenswrapper[31456]: I0312 21:28:26.357116 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"27c2a30e-8258-424f-8896-28a1fa0ebd1d","Type":"ContainerDied","Data":"ce31a2cdd0643902b11fc2aaccdcb3c2a38b66121f660a5cd2190c2c2dd9713e"} Mar 12 21:28:26.425902 master-0 kubenswrapper[31456]: I0312 21:28:26.425704 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.929680669 podStartE2EDuration="7.425678542s" podCreationTimestamp="2026-03-12 21:28:19 +0000 UTC" firstStartedPulling="2026-03-12 21:28:20.811486802 +0000 UTC m=+1161.886092130" lastFinishedPulling="2026-03-12 21:28:24.307484655 +0000 UTC m=+1165.382090003" observedRunningTime="2026-03-12 21:28:26.412118884 +0000 UTC m=+1167.486724212" watchObservedRunningTime="2026-03-12 21:28:26.425678542 +0000 UTC m=+1167.500283870" Mar 12 21:28:26.793833 master-0 kubenswrapper[31456]: I0312 21:28:26.791416 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 21:28:26.969895 master-0 kubenswrapper[31456]: I0312 21:28:26.969079 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zs6q2\" (UniqueName: \"kubernetes.io/projected/27c2a30e-8258-424f-8896-28a1fa0ebd1d-kube-api-access-zs6q2\") pod \"27c2a30e-8258-424f-8896-28a1fa0ebd1d\" (UID: \"27c2a30e-8258-424f-8896-28a1fa0ebd1d\") " Mar 12 21:28:26.969895 master-0 kubenswrapper[31456]: I0312 21:28:26.969204 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27c2a30e-8258-424f-8896-28a1fa0ebd1d-combined-ca-bundle\") pod \"27c2a30e-8258-424f-8896-28a1fa0ebd1d\" (UID: \"27c2a30e-8258-424f-8896-28a1fa0ebd1d\") " Mar 12 21:28:26.969895 master-0 kubenswrapper[31456]: I0312 21:28:26.969381 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27c2a30e-8258-424f-8896-28a1fa0ebd1d-config-data\") pod \"27c2a30e-8258-424f-8896-28a1fa0ebd1d\" (UID: \"27c2a30e-8258-424f-8896-28a1fa0ebd1d\") " Mar 12 21:28:26.969895 master-0 kubenswrapper[31456]: I0312 21:28:26.969413 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27c2a30e-8258-424f-8896-28a1fa0ebd1d-logs\") pod \"27c2a30e-8258-424f-8896-28a1fa0ebd1d\" (UID: \"27c2a30e-8258-424f-8896-28a1fa0ebd1d\") " Mar 12 21:28:26.970192 master-0 kubenswrapper[31456]: I0312 21:28:26.970169 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27c2a30e-8258-424f-8896-28a1fa0ebd1d-logs" (OuterVolumeSpecName: "logs") pod "27c2a30e-8258-424f-8896-28a1fa0ebd1d" (UID: "27c2a30e-8258-424f-8896-28a1fa0ebd1d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 21:28:26.979823 master-0 kubenswrapper[31456]: I0312 21:28:26.972872 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27c2a30e-8258-424f-8896-28a1fa0ebd1d-kube-api-access-zs6q2" (OuterVolumeSpecName: "kube-api-access-zs6q2") pod "27c2a30e-8258-424f-8896-28a1fa0ebd1d" (UID: "27c2a30e-8258-424f-8896-28a1fa0ebd1d"). InnerVolumeSpecName "kube-api-access-zs6q2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:28:27.004829 master-0 kubenswrapper[31456]: I0312 21:28:27.004144 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27c2a30e-8258-424f-8896-28a1fa0ebd1d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27c2a30e-8258-424f-8896-28a1fa0ebd1d" (UID: "27c2a30e-8258-424f-8896-28a1fa0ebd1d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:28:27.011920 master-0 kubenswrapper[31456]: I0312 21:28:27.007016 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27c2a30e-8258-424f-8896-28a1fa0ebd1d-config-data" (OuterVolumeSpecName: "config-data") pod "27c2a30e-8258-424f-8896-28a1fa0ebd1d" (UID: "27c2a30e-8258-424f-8896-28a1fa0ebd1d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:28:27.073572 master-0 kubenswrapper[31456]: I0312 21:28:27.073300 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27c2a30e-8258-424f-8896-28a1fa0ebd1d-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:28:27.073572 master-0 kubenswrapper[31456]: I0312 21:28:27.073336 31456 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27c2a30e-8258-424f-8896-28a1fa0ebd1d-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 21:28:27.073572 master-0 kubenswrapper[31456]: I0312 21:28:27.073348 31456 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27c2a30e-8258-424f-8896-28a1fa0ebd1d-logs\") on node \"master-0\" DevicePath \"\"" Mar 12 21:28:27.073572 master-0 kubenswrapper[31456]: I0312 21:28:27.073356 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zs6q2\" (UniqueName: \"kubernetes.io/projected/27c2a30e-8258-424f-8896-28a1fa0ebd1d-kube-api-access-zs6q2\") on node \"master-0\" DevicePath \"\"" Mar 12 21:28:27.371869 master-0 kubenswrapper[31456]: I0312 21:28:27.371264 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 21:28:27.372351 master-0 kubenswrapper[31456]: I0312 21:28:27.372110 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"27c2a30e-8258-424f-8896-28a1fa0ebd1d","Type":"ContainerDied","Data":"f2141a2c54e4e3adcfcb23fb555e4d4314edb94e34553d2c9feaa111163f3b75"} Mar 12 21:28:27.372351 master-0 kubenswrapper[31456]: I0312 21:28:27.372146 31456 scope.go:117] "RemoveContainer" containerID="c92add9e45021b674eea91ee6e354738ace95dceb3ff9062ea14d5035758af7d" Mar 12 21:28:27.499146 master-0 kubenswrapper[31456]: I0312 21:28:27.479134 31456 scope.go:117] "RemoveContainer" containerID="ce31a2cdd0643902b11fc2aaccdcb3c2a38b66121f660a5cd2190c2c2dd9713e" Mar 12 21:28:27.499146 master-0 kubenswrapper[31456]: I0312 21:28:27.488126 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 21:28:27.543972 master-0 kubenswrapper[31456]: I0312 21:28:27.543896 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 21:28:27.568268 master-0 kubenswrapper[31456]: I0312 21:28:27.568201 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 12 21:28:27.568978 master-0 kubenswrapper[31456]: E0312 21:28:27.568726 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27c2a30e-8258-424f-8896-28a1fa0ebd1d" containerName="nova-metadata-log" Mar 12 21:28:27.568978 master-0 kubenswrapper[31456]: I0312 21:28:27.568742 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="27c2a30e-8258-424f-8896-28a1fa0ebd1d" containerName="nova-metadata-log" Mar 12 21:28:27.568978 master-0 kubenswrapper[31456]: E0312 21:28:27.568762 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27c2a30e-8258-424f-8896-28a1fa0ebd1d" containerName="nova-metadata-metadata" Mar 12 21:28:27.568978 master-0 kubenswrapper[31456]: I0312 21:28:27.568768 31456 
state_mem.go:107] "Deleted CPUSet assignment" podUID="27c2a30e-8258-424f-8896-28a1fa0ebd1d" containerName="nova-metadata-metadata" Mar 12 21:28:27.569272 master-0 kubenswrapper[31456]: I0312 21:28:27.569086 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="27c2a30e-8258-424f-8896-28a1fa0ebd1d" containerName="nova-metadata-log" Mar 12 21:28:27.569272 master-0 kubenswrapper[31456]: I0312 21:28:27.569113 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="27c2a30e-8258-424f-8896-28a1fa0ebd1d" containerName="nova-metadata-metadata" Mar 12 21:28:27.570789 master-0 kubenswrapper[31456]: I0312 21:28:27.570675 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 21:28:27.577411 master-0 kubenswrapper[31456]: E0312 21:28:27.573570 31456 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93110548_5710_4149_bd72_8e42693c948e.slice/crio-04cebfa9ee3ae27945dc4f288c27a010c11a036298a87f570271091e7449a2c5.scope\": RecentStats: unable to find data in memory cache]" Mar 12 21:28:27.579253 master-0 kubenswrapper[31456]: I0312 21:28:27.578280 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 12 21:28:27.579253 master-0 kubenswrapper[31456]: I0312 21:28:27.578519 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 12 21:28:27.586080 master-0 kubenswrapper[31456]: I0312 21:28:27.585895 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktsqm\" (UniqueName: \"kubernetes.io/projected/aa7b89ff-9555-485b-af52-9624240b80b4-kube-api-access-ktsqm\") pod \"nova-metadata-0\" (UID: \"aa7b89ff-9555-485b-af52-9624240b80b4\") " pod="openstack/nova-metadata-0" Mar 12 21:28:27.587313 
master-0 kubenswrapper[31456]: I0312 21:28:27.587134 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa7b89ff-9555-485b-af52-9624240b80b4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"aa7b89ff-9555-485b-af52-9624240b80b4\") " pod="openstack/nova-metadata-0" Mar 12 21:28:27.587313 master-0 kubenswrapper[31456]: I0312 21:28:27.587195 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa7b89ff-9555-485b-af52-9624240b80b4-config-data\") pod \"nova-metadata-0\" (UID: \"aa7b89ff-9555-485b-af52-9624240b80b4\") " pod="openstack/nova-metadata-0" Mar 12 21:28:27.589885 master-0 kubenswrapper[31456]: I0312 21:28:27.587523 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa7b89ff-9555-485b-af52-9624240b80b4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aa7b89ff-9555-485b-af52-9624240b80b4\") " pod="openstack/nova-metadata-0" Mar 12 21:28:27.589885 master-0 kubenswrapper[31456]: I0312 21:28:27.587673 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa7b89ff-9555-485b-af52-9624240b80b4-logs\") pod \"nova-metadata-0\" (UID: \"aa7b89ff-9555-485b-af52-9624240b80b4\") " pod="openstack/nova-metadata-0" Mar 12 21:28:27.599160 master-0 kubenswrapper[31456]: I0312 21:28:27.599040 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 21:28:27.688935 master-0 kubenswrapper[31456]: I0312 21:28:27.688880 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa7b89ff-9555-485b-af52-9624240b80b4-logs\") pod \"nova-metadata-0\" (UID: 
\"aa7b89ff-9555-485b-af52-9624240b80b4\") " pod="openstack/nova-metadata-0" Mar 12 21:28:27.689145 master-0 kubenswrapper[31456]: I0312 21:28:27.689006 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktsqm\" (UniqueName: \"kubernetes.io/projected/aa7b89ff-9555-485b-af52-9624240b80b4-kube-api-access-ktsqm\") pod \"nova-metadata-0\" (UID: \"aa7b89ff-9555-485b-af52-9624240b80b4\") " pod="openstack/nova-metadata-0" Mar 12 21:28:27.689145 master-0 kubenswrapper[31456]: I0312 21:28:27.689051 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa7b89ff-9555-485b-af52-9624240b80b4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"aa7b89ff-9555-485b-af52-9624240b80b4\") " pod="openstack/nova-metadata-0" Mar 12 21:28:27.689145 master-0 kubenswrapper[31456]: I0312 21:28:27.689066 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa7b89ff-9555-485b-af52-9624240b80b4-config-data\") pod \"nova-metadata-0\" (UID: \"aa7b89ff-9555-485b-af52-9624240b80b4\") " pod="openstack/nova-metadata-0" Mar 12 21:28:27.689145 master-0 kubenswrapper[31456]: I0312 21:28:27.689138 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa7b89ff-9555-485b-af52-9624240b80b4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aa7b89ff-9555-485b-af52-9624240b80b4\") " pod="openstack/nova-metadata-0" Mar 12 21:28:27.689477 master-0 kubenswrapper[31456]: I0312 21:28:27.689445 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa7b89ff-9555-485b-af52-9624240b80b4-logs\") pod \"nova-metadata-0\" (UID: \"aa7b89ff-9555-485b-af52-9624240b80b4\") " pod="openstack/nova-metadata-0" Mar 12 21:28:27.692928 master-0 
kubenswrapper[31456]: I0312 21:28:27.692783 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa7b89ff-9555-485b-af52-9624240b80b4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"aa7b89ff-9555-485b-af52-9624240b80b4\") " pod="openstack/nova-metadata-0" Mar 12 21:28:27.693873 master-0 kubenswrapper[31456]: I0312 21:28:27.693832 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa7b89ff-9555-485b-af52-9624240b80b4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aa7b89ff-9555-485b-af52-9624240b80b4\") " pod="openstack/nova-metadata-0" Mar 12 21:28:27.695172 master-0 kubenswrapper[31456]: I0312 21:28:27.695125 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa7b89ff-9555-485b-af52-9624240b80b4-config-data\") pod \"nova-metadata-0\" (UID: \"aa7b89ff-9555-485b-af52-9624240b80b4\") " pod="openstack/nova-metadata-0" Mar 12 21:28:27.712642 master-0 kubenswrapper[31456]: I0312 21:28:27.705902 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktsqm\" (UniqueName: \"kubernetes.io/projected/aa7b89ff-9555-485b-af52-9624240b80b4-kube-api-access-ktsqm\") pod \"nova-metadata-0\" (UID: \"aa7b89ff-9555-485b-af52-9624240b80b4\") " pod="openstack/nova-metadata-0" Mar 12 21:28:27.897460 master-0 kubenswrapper[31456]: I0312 21:28:27.897405 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 21:28:28.371206 master-0 kubenswrapper[31456]: I0312 21:28:28.371145 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 21:28:28.388048 master-0 kubenswrapper[31456]: I0312 21:28:28.387978 31456 generic.go:334] "Generic (PLEG): container finished" podID="93110548-5710-4149-bd72-8e42693c948e" containerID="04cebfa9ee3ae27945dc4f288c27a010c11a036298a87f570271091e7449a2c5" exitCode=0 Mar 12 21:28:28.388048 master-0 kubenswrapper[31456]: I0312 21:28:28.388047 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"93110548-5710-4149-bd72-8e42693c948e","Type":"ContainerDied","Data":"04cebfa9ee3ae27945dc4f288c27a010c11a036298a87f570271091e7449a2c5"} Mar 12 21:28:29.190383 master-0 kubenswrapper[31456]: I0312 21:28:29.190259 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27c2a30e-8258-424f-8896-28a1fa0ebd1d" path="/var/lib/kubelet/pods/27c2a30e-8258-424f-8896-28a1fa0ebd1d/volumes" Mar 12 21:28:29.694945 master-0 kubenswrapper[31456]: I0312 21:28:29.694745 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 12 21:28:29.696111 master-0 kubenswrapper[31456]: I0312 21:28:29.696080 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 12 21:28:30.011091 master-0 kubenswrapper[31456]: I0312 21:28:30.010984 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 12 21:28:30.025563 master-0 kubenswrapper[31456]: I0312 21:28:30.025459 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 12 21:28:30.025563 master-0 kubenswrapper[31456]: I0312 21:28:30.025535 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 12 
21:28:30.036039 master-0 kubenswrapper[31456]: I0312 21:28:30.035991 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76bffd747-5b96l" Mar 12 21:28:30.058651 master-0 kubenswrapper[31456]: I0312 21:28:30.058568 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 12 21:28:30.269410 master-0 kubenswrapper[31456]: I0312 21:28:30.269186 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56cf4b4989-2cwl5"] Mar 12 21:28:30.269987 master-0 kubenswrapper[31456]: I0312 21:28:30.269458 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56cf4b4989-2cwl5" podUID="b41a87ae-50a2-4490-891e-99a17d655797" containerName="dnsmasq-dns" containerID="cri-o://50e6ac7bcddf291caecce1ffc99f56d8309a34ee4b8164f9ec728106f5864497" gracePeriod=10 Mar 12 21:28:30.477664 master-0 kubenswrapper[31456]: I0312 21:28:30.476660 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 12 21:28:30.782135 master-0 kubenswrapper[31456]: I0312 21:28:30.778190 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4947333f-6917-4b79-830e-171f682e0309" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.1.0:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 21:28:30.782135 master-0 kubenswrapper[31456]: I0312 21:28:30.778552 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4947333f-6917-4b79-830e-171f682e0309" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.1.0:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 21:28:34.216414 master-0 kubenswrapper[31456]: I0312 21:28:34.216359 31456 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/dnsmasq-dns-56cf4b4989-2cwl5" podUID="b41a87ae-50a2-4490-891e-99a17d655797" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.246:5353: connect: connection refused" Mar 12 21:28:34.238599 master-0 kubenswrapper[31456]: W0312 21:28:34.238549 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa7b89ff_9555_485b_af52_9624240b80b4.slice/crio-92f1d8c830f0639e4e8364f11291a830a59f8dce950aae748d1f199d26cbc090 WatchSource:0}: Error finding container 92f1d8c830f0639e4e8364f11291a830a59f8dce950aae748d1f199d26cbc090: Status 404 returned error can't find the container with id 92f1d8c830f0639e4e8364f11291a830a59f8dce950aae748d1f199d26cbc090 Mar 12 21:28:34.472192 master-0 kubenswrapper[31456]: I0312 21:28:34.472040 31456 generic.go:334] "Generic (PLEG): container finished" podID="b41a87ae-50a2-4490-891e-99a17d655797" containerID="50e6ac7bcddf291caecce1ffc99f56d8309a34ee4b8164f9ec728106f5864497" exitCode=0 Mar 12 21:28:34.472192 master-0 kubenswrapper[31456]: I0312 21:28:34.472119 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56cf4b4989-2cwl5" event={"ID":"b41a87ae-50a2-4490-891e-99a17d655797","Type":"ContainerDied","Data":"50e6ac7bcddf291caecce1ffc99f56d8309a34ee4b8164f9ec728106f5864497"} Mar 12 21:28:34.474675 master-0 kubenswrapper[31456]: I0312 21:28:34.474632 31456 generic.go:334] "Generic (PLEG): container finished" podID="c59d7ee2-3288-42f9-9202-abedc026040d" containerID="725acccafd28a1ddf7b25fe2a562bf0fadce02f520cadf660f3a796c6757787f" exitCode=0 Mar 12 21:28:34.474772 master-0 kubenswrapper[31456]: I0312 21:28:34.474684 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hnj5b" event={"ID":"c59d7ee2-3288-42f9-9202-abedc026040d","Type":"ContainerDied","Data":"725acccafd28a1ddf7b25fe2a562bf0fadce02f520cadf660f3a796c6757787f"} Mar 12 21:28:34.477553 master-0 
kubenswrapper[31456]: I0312 21:28:34.477501 31456 generic.go:334] "Generic (PLEG): container finished" podID="7cd86859-a26e-4b51-9c89-175cf23ef2f1" containerID="8180fbd11113120a095e440d6f4fdf495d92b426aa049996fff558596e39fa21" exitCode=0 Mar 12 21:28:34.477677 master-0 kubenswrapper[31456]: I0312 21:28:34.477578 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-fjbhd" event={"ID":"7cd86859-a26e-4b51-9c89-175cf23ef2f1","Type":"ContainerDied","Data":"8180fbd11113120a095e440d6f4fdf495d92b426aa049996fff558596e39fa21"} Mar 12 21:28:34.479164 master-0 kubenswrapper[31456]: I0312 21:28:34.479118 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aa7b89ff-9555-485b-af52-9624240b80b4","Type":"ContainerStarted","Data":"92f1d8c830f0639e4e8364f11291a830a59f8dce950aae748d1f199d26cbc090"} Mar 12 21:28:34.783962 master-0 kubenswrapper[31456]: I0312 21:28:34.783907 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56cf4b4989-2cwl5" Mar 12 21:28:34.858865 master-0 kubenswrapper[31456]: I0312 21:28:34.858801 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b41a87ae-50a2-4490-891e-99a17d655797-dns-svc\") pod \"b41a87ae-50a2-4490-891e-99a17d655797\" (UID: \"b41a87ae-50a2-4490-891e-99a17d655797\") " Mar 12 21:28:34.859026 master-0 kubenswrapper[31456]: I0312 21:28:34.859002 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b41a87ae-50a2-4490-891e-99a17d655797-ovsdbserver-sb\") pod \"b41a87ae-50a2-4490-891e-99a17d655797\" (UID: \"b41a87ae-50a2-4490-891e-99a17d655797\") " Mar 12 21:28:34.859124 master-0 kubenswrapper[31456]: I0312 21:28:34.859104 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppr99\" (UniqueName: \"kubernetes.io/projected/b41a87ae-50a2-4490-891e-99a17d655797-kube-api-access-ppr99\") pod \"b41a87ae-50a2-4490-891e-99a17d655797\" (UID: \"b41a87ae-50a2-4490-891e-99a17d655797\") " Mar 12 21:28:34.859178 master-0 kubenswrapper[31456]: I0312 21:28:34.859165 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b41a87ae-50a2-4490-891e-99a17d655797-dns-swift-storage-0\") pod \"b41a87ae-50a2-4490-891e-99a17d655797\" (UID: \"b41a87ae-50a2-4490-891e-99a17d655797\") " Mar 12 21:28:34.859248 master-0 kubenswrapper[31456]: I0312 21:28:34.859220 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b41a87ae-50a2-4490-891e-99a17d655797-ovsdbserver-nb\") pod \"b41a87ae-50a2-4490-891e-99a17d655797\" (UID: \"b41a87ae-50a2-4490-891e-99a17d655797\") " Mar 12 21:28:34.859293 master-0 kubenswrapper[31456]: I0312 21:28:34.859247 
31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b41a87ae-50a2-4490-891e-99a17d655797-config\") pod \"b41a87ae-50a2-4490-891e-99a17d655797\" (UID: \"b41a87ae-50a2-4490-891e-99a17d655797\") " Mar 12 21:28:34.871437 master-0 kubenswrapper[31456]: I0312 21:28:34.868808 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b41a87ae-50a2-4490-891e-99a17d655797-kube-api-access-ppr99" (OuterVolumeSpecName: "kube-api-access-ppr99") pod "b41a87ae-50a2-4490-891e-99a17d655797" (UID: "b41a87ae-50a2-4490-891e-99a17d655797"). InnerVolumeSpecName "kube-api-access-ppr99". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:28:34.962434 master-0 kubenswrapper[31456]: I0312 21:28:34.962385 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppr99\" (UniqueName: \"kubernetes.io/projected/b41a87ae-50a2-4490-891e-99a17d655797-kube-api-access-ppr99\") on node \"master-0\" DevicePath \"\"" Mar 12 21:28:35.004687 master-0 kubenswrapper[31456]: I0312 21:28:35.004609 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b41a87ae-50a2-4490-891e-99a17d655797-config" (OuterVolumeSpecName: "config") pod "b41a87ae-50a2-4490-891e-99a17d655797" (UID: "b41a87ae-50a2-4490-891e-99a17d655797"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:28:35.011583 master-0 kubenswrapper[31456]: I0312 21:28:35.011450 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b41a87ae-50a2-4490-891e-99a17d655797-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b41a87ae-50a2-4490-891e-99a17d655797" (UID: "b41a87ae-50a2-4490-891e-99a17d655797"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:28:35.035577 master-0 kubenswrapper[31456]: I0312 21:28:35.035489 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b41a87ae-50a2-4490-891e-99a17d655797-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b41a87ae-50a2-4490-891e-99a17d655797" (UID: "b41a87ae-50a2-4490-891e-99a17d655797"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:28:35.060301 master-0 kubenswrapper[31456]: I0312 21:28:35.060222 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b41a87ae-50a2-4490-891e-99a17d655797-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b41a87ae-50a2-4490-891e-99a17d655797" (UID: "b41a87ae-50a2-4490-891e-99a17d655797"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:28:35.065866 master-0 kubenswrapper[31456]: I0312 21:28:35.065794 31456 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b41a87ae-50a2-4490-891e-99a17d655797-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 12 21:28:35.065866 master-0 kubenswrapper[31456]: I0312 21:28:35.065862 31456 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b41a87ae-50a2-4490-891e-99a17d655797-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 12 21:28:35.065986 master-0 kubenswrapper[31456]: I0312 21:28:35.065877 31456 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b41a87ae-50a2-4490-891e-99a17d655797-config\") on node \"master-0\" DevicePath \"\"" Mar 12 21:28:35.065986 master-0 kubenswrapper[31456]: I0312 21:28:35.065920 31456 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/b41a87ae-50a2-4490-891e-99a17d655797-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 12 21:28:35.083149 master-0 kubenswrapper[31456]: I0312 21:28:35.081230 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b41a87ae-50a2-4490-891e-99a17d655797-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b41a87ae-50a2-4490-891e-99a17d655797" (UID: "b41a87ae-50a2-4490-891e-99a17d655797"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:28:35.168312 master-0 kubenswrapper[31456]: I0312 21:28:35.168252 31456 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b41a87ae-50a2-4490-891e-99a17d655797-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 12 21:28:35.494051 master-0 kubenswrapper[31456]: I0312 21:28:35.493974 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56cf4b4989-2cwl5" Mar 12 21:28:35.495091 master-0 kubenswrapper[31456]: I0312 21:28:35.495043 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56cf4b4989-2cwl5" event={"ID":"b41a87ae-50a2-4490-891e-99a17d655797","Type":"ContainerDied","Data":"13500e59ddb6b483a91cce9c62893494821032c3cb2effcc83e0ed93798d18a1"} Mar 12 21:28:35.495091 master-0 kubenswrapper[31456]: I0312 21:28:35.495085 31456 scope.go:117] "RemoveContainer" containerID="50e6ac7bcddf291caecce1ffc99f56d8309a34ee4b8164f9ec728106f5864497" Mar 12 21:28:35.499410 master-0 kubenswrapper[31456]: I0312 21:28:35.499296 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"91b65fb0-ac42-43d0-a834-989fac8d4fd5","Type":"ContainerStarted","Data":"5a0ec06ba14840847def651f891e40e9d133d45b52d7d96cd47fde8c624dfd39"} Mar 12 21:28:35.499532 master-0 kubenswrapper[31456]: I0312 21:28:35.499444 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 12 21:28:35.506522 master-0 kubenswrapper[31456]: I0312 21:28:35.506245 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"93110548-5710-4149-bd72-8e42693c948e","Type":"ContainerStarted","Data":"77a678a91f1db2f5bafe7bbe1281f768f31e37e9872b71d021cdf9d7ee263854"} Mar 12 21:28:35.510198 master-0 kubenswrapper[31456]: I0312 21:28:35.510170 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aa7b89ff-9555-485b-af52-9624240b80b4","Type":"ContainerStarted","Data":"9cf1022797215700ae2dfb1cd4ff3abc0ba9f76621f8008ee839844da8f12aaa"} Mar 12 21:28:35.510333 master-0 kubenswrapper[31456]: I0312 21:28:35.510197 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"aa7b89ff-9555-485b-af52-9624240b80b4","Type":"ContainerStarted","Data":"93b6c8dec6e8a70ce3a3b9803d31bcb76af9e9afb9d49432713f94efa54cd95a"} Mar 12 21:28:35.527044 master-0 kubenswrapper[31456]: I0312 21:28:35.526973 31456 scope.go:117] "RemoveContainer" containerID="1e23a059cf13a580190b2634ebebf8ccf104e1975300b1347346f6fc4a311d67" Mar 12 21:28:35.535493 master-0 kubenswrapper[31456]: I0312 21:28:35.535409 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-compute-ironic-compute-0" podStartSLOduration=2.847744702 podStartE2EDuration="16.535370693s" podCreationTimestamp="2026-03-12 21:28:19 +0000 UTC" firstStartedPulling="2026-03-12 21:28:20.691009936 +0000 UTC m=+1161.765615264" lastFinishedPulling="2026-03-12 21:28:34.378635917 +0000 UTC m=+1175.453241255" observedRunningTime="2026-03-12 21:28:35.528656961 +0000 UTC m=+1176.603262289" watchObservedRunningTime="2026-03-12 21:28:35.535370693 +0000 UTC m=+1176.609976021" Mar 12 21:28:35.590941 master-0 kubenswrapper[31456]: I0312 21:28:35.580069 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=8.580025564 podStartE2EDuration="8.580025564s" podCreationTimestamp="2026-03-12 21:28:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:28:35.562197102 +0000 UTC m=+1176.636802430" watchObservedRunningTime="2026-03-12 21:28:35.580025564 +0000 UTC m=+1176.654630892" Mar 12 21:28:35.599827 master-0 kubenswrapper[31456]: I0312 21:28:35.599650 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 12 21:28:35.613961 master-0 kubenswrapper[31456]: I0312 21:28:35.612756 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56cf4b4989-2cwl5"] Mar 12 21:28:35.643110 master-0 kubenswrapper[31456]: 
I0312 21:28:35.643041 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56cf4b4989-2cwl5"] Mar 12 21:28:36.153562 master-0 kubenswrapper[31456]: I0312 21:28:36.153521 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-fjbhd" Mar 12 21:28:36.200628 master-0 kubenswrapper[31456]: I0312 21:28:36.198982 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hnj5b" Mar 12 21:28:36.205255 master-0 kubenswrapper[31456]: I0312 21:28:36.204260 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cd86859-a26e-4b51-9c89-175cf23ef2f1-config-data\") pod \"7cd86859-a26e-4b51-9c89-175cf23ef2f1\" (UID: \"7cd86859-a26e-4b51-9c89-175cf23ef2f1\") " Mar 12 21:28:36.205255 master-0 kubenswrapper[31456]: I0312 21:28:36.204372 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd86859-a26e-4b51-9c89-175cf23ef2f1-combined-ca-bundle\") pod \"7cd86859-a26e-4b51-9c89-175cf23ef2f1\" (UID: \"7cd86859-a26e-4b51-9c89-175cf23ef2f1\") " Mar 12 21:28:36.205255 master-0 kubenswrapper[31456]: I0312 21:28:36.204499 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgbpp\" (UniqueName: \"kubernetes.io/projected/7cd86859-a26e-4b51-9c89-175cf23ef2f1-kube-api-access-sgbpp\") pod \"7cd86859-a26e-4b51-9c89-175cf23ef2f1\" (UID: \"7cd86859-a26e-4b51-9c89-175cf23ef2f1\") " Mar 12 21:28:36.205255 master-0 kubenswrapper[31456]: I0312 21:28:36.204615 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cd86859-a26e-4b51-9c89-175cf23ef2f1-scripts\") pod \"7cd86859-a26e-4b51-9c89-175cf23ef2f1\" (UID: 
\"7cd86859-a26e-4b51-9c89-175cf23ef2f1\") " Mar 12 21:28:36.207736 master-0 kubenswrapper[31456]: I0312 21:28:36.207689 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cd86859-a26e-4b51-9c89-175cf23ef2f1-kube-api-access-sgbpp" (OuterVolumeSpecName: "kube-api-access-sgbpp") pod "7cd86859-a26e-4b51-9c89-175cf23ef2f1" (UID: "7cd86859-a26e-4b51-9c89-175cf23ef2f1"). InnerVolumeSpecName "kube-api-access-sgbpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:28:36.210000 master-0 kubenswrapper[31456]: I0312 21:28:36.209931 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cd86859-a26e-4b51-9c89-175cf23ef2f1-scripts" (OuterVolumeSpecName: "scripts") pod "7cd86859-a26e-4b51-9c89-175cf23ef2f1" (UID: "7cd86859-a26e-4b51-9c89-175cf23ef2f1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:28:36.247901 master-0 kubenswrapper[31456]: I0312 21:28:36.247406 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cd86859-a26e-4b51-9c89-175cf23ef2f1-config-data" (OuterVolumeSpecName: "config-data") pod "7cd86859-a26e-4b51-9c89-175cf23ef2f1" (UID: "7cd86859-a26e-4b51-9c89-175cf23ef2f1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:28:36.292224 master-0 kubenswrapper[31456]: I0312 21:28:36.292162 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cd86859-a26e-4b51-9c89-175cf23ef2f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cd86859-a26e-4b51-9c89-175cf23ef2f1" (UID: "7cd86859-a26e-4b51-9c89-175cf23ef2f1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:28:36.307416 master-0 kubenswrapper[31456]: I0312 21:28:36.307373 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c59d7ee2-3288-42f9-9202-abedc026040d-config-data\") pod \"c59d7ee2-3288-42f9-9202-abedc026040d\" (UID: \"c59d7ee2-3288-42f9-9202-abedc026040d\") " Mar 12 21:28:36.307492 master-0 kubenswrapper[31456]: I0312 21:28:36.307439 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqdpn\" (UniqueName: \"kubernetes.io/projected/c59d7ee2-3288-42f9-9202-abedc026040d-kube-api-access-xqdpn\") pod \"c59d7ee2-3288-42f9-9202-abedc026040d\" (UID: \"c59d7ee2-3288-42f9-9202-abedc026040d\") " Mar 12 21:28:36.307492 master-0 kubenswrapper[31456]: I0312 21:28:36.307461 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c59d7ee2-3288-42f9-9202-abedc026040d-scripts\") pod \"c59d7ee2-3288-42f9-9202-abedc026040d\" (UID: \"c59d7ee2-3288-42f9-9202-abedc026040d\") " Mar 12 21:28:36.307557 master-0 kubenswrapper[31456]: I0312 21:28:36.307532 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c59d7ee2-3288-42f9-9202-abedc026040d-combined-ca-bundle\") pod \"c59d7ee2-3288-42f9-9202-abedc026040d\" (UID: \"c59d7ee2-3288-42f9-9202-abedc026040d\") " Mar 12 21:28:36.308171 master-0 kubenswrapper[31456]: I0312 21:28:36.308134 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgbpp\" (UniqueName: \"kubernetes.io/projected/7cd86859-a26e-4b51-9c89-175cf23ef2f1-kube-api-access-sgbpp\") on node \"master-0\" DevicePath \"\"" Mar 12 21:28:36.308171 master-0 kubenswrapper[31456]: I0312 21:28:36.308153 31456 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7cd86859-a26e-4b51-9c89-175cf23ef2f1-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 21:28:36.308171 master-0 kubenswrapper[31456]: I0312 21:28:36.308163 31456 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cd86859-a26e-4b51-9c89-175cf23ef2f1-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 21:28:36.308288 master-0 kubenswrapper[31456]: I0312 21:28:36.308173 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd86859-a26e-4b51-9c89-175cf23ef2f1-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:28:36.310965 master-0 kubenswrapper[31456]: I0312 21:28:36.310907 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c59d7ee2-3288-42f9-9202-abedc026040d-scripts" (OuterVolumeSpecName: "scripts") pod "c59d7ee2-3288-42f9-9202-abedc026040d" (UID: "c59d7ee2-3288-42f9-9202-abedc026040d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:28:36.311264 master-0 kubenswrapper[31456]: I0312 21:28:36.311213 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c59d7ee2-3288-42f9-9202-abedc026040d-kube-api-access-xqdpn" (OuterVolumeSpecName: "kube-api-access-xqdpn") pod "c59d7ee2-3288-42f9-9202-abedc026040d" (UID: "c59d7ee2-3288-42f9-9202-abedc026040d"). InnerVolumeSpecName "kube-api-access-xqdpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:28:36.336060 master-0 kubenswrapper[31456]: I0312 21:28:36.336003 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c59d7ee2-3288-42f9-9202-abedc026040d-config-data" (OuterVolumeSpecName: "config-data") pod "c59d7ee2-3288-42f9-9202-abedc026040d" (UID: "c59d7ee2-3288-42f9-9202-abedc026040d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:28:36.336473 master-0 kubenswrapper[31456]: I0312 21:28:36.336416 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c59d7ee2-3288-42f9-9202-abedc026040d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c59d7ee2-3288-42f9-9202-abedc026040d" (UID: "c59d7ee2-3288-42f9-9202-abedc026040d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:28:36.410823 master-0 kubenswrapper[31456]: I0312 21:28:36.410235 31456 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c59d7ee2-3288-42f9-9202-abedc026040d-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 21:28:36.410823 master-0 kubenswrapper[31456]: I0312 21:28:36.410277 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqdpn\" (UniqueName: \"kubernetes.io/projected/c59d7ee2-3288-42f9-9202-abedc026040d-kube-api-access-xqdpn\") on node \"master-0\" DevicePath \"\"" Mar 12 21:28:36.410823 master-0 kubenswrapper[31456]: I0312 21:28:36.410287 31456 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c59d7ee2-3288-42f9-9202-abedc026040d-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 21:28:36.410823 master-0 kubenswrapper[31456]: I0312 21:28:36.410296 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c59d7ee2-3288-42f9-9202-abedc026040d-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:28:36.524223 master-0 kubenswrapper[31456]: I0312 21:28:36.524074 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hnj5b" event={"ID":"c59d7ee2-3288-42f9-9202-abedc026040d","Type":"ContainerDied","Data":"aec8f7704f92963e0fd53d219475e755fc98b7f168916678b530e28e86e15c3f"} Mar 12 
21:28:36.524223 master-0 kubenswrapper[31456]: I0312 21:28:36.524138 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aec8f7704f92963e0fd53d219475e755fc98b7f168916678b530e28e86e15c3f" Mar 12 21:28:36.524873 master-0 kubenswrapper[31456]: I0312 21:28:36.524660 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hnj5b" Mar 12 21:28:36.534479 master-0 kubenswrapper[31456]: I0312 21:28:36.534399 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"93110548-5710-4149-bd72-8e42693c948e","Type":"ContainerStarted","Data":"c5386848032cee00b7cd648b08086c6a15c7a3a38276f6ff2eef541820af5ef0"} Mar 12 21:28:36.534703 master-0 kubenswrapper[31456]: I0312 21:28:36.534490 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"93110548-5710-4149-bd72-8e42693c948e","Type":"ContainerStarted","Data":"d225e9db2c53a807f35996815ddf18e3b3d5a169c4d20730529cdb8e9fccc944"} Mar 12 21:28:36.534703 master-0 kubenswrapper[31456]: I0312 21:28:36.534561 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0" Mar 12 21:28:36.534703 master-0 kubenswrapper[31456]: I0312 21:28:36.534586 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0" Mar 12 21:28:36.538373 master-0 kubenswrapper[31456]: I0312 21:28:36.538339 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-fjbhd" Mar 12 21:28:36.539591 master-0 kubenswrapper[31456]: I0312 21:28:36.539542 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-fjbhd" event={"ID":"7cd86859-a26e-4b51-9c89-175cf23ef2f1","Type":"ContainerDied","Data":"13e0c5cad4aed5da3a73886a0bfcce30cceb0f324e9ac0bfaa23bc8cf9f3ca77"} Mar 12 21:28:36.539702 master-0 kubenswrapper[31456]: I0312 21:28:36.539609 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13e0c5cad4aed5da3a73886a0bfcce30cceb0f324e9ac0bfaa23bc8cf9f3ca77" Mar 12 21:28:36.615692 master-0 kubenswrapper[31456]: I0312 21:28:36.615614 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-conductor-0" podStartSLOduration=70.78457103 podStartE2EDuration="1m53.615596278s" podCreationTimestamp="2026-03-12 21:26:43 +0000 UTC" firstStartedPulling="2026-03-12 21:26:54.192882726 +0000 UTC m=+1075.267488054" lastFinishedPulling="2026-03-12 21:27:37.023907974 +0000 UTC m=+1118.098513302" observedRunningTime="2026-03-12 21:28:36.585991612 +0000 UTC m=+1177.660596980" watchObservedRunningTime="2026-03-12 21:28:36.615596278 +0000 UTC m=+1177.690201606" Mar 12 21:28:36.709249 master-0 kubenswrapper[31456]: I0312 21:28:36.709173 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 12 21:28:36.709701 master-0 kubenswrapper[31456]: E0312 21:28:36.709675 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cd86859-a26e-4b51-9c89-175cf23ef2f1" containerName="nova-manage" Mar 12 21:28:36.709701 master-0 kubenswrapper[31456]: I0312 21:28:36.709694 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cd86859-a26e-4b51-9c89-175cf23ef2f1" containerName="nova-manage" Mar 12 21:28:36.709792 master-0 kubenswrapper[31456]: E0312 21:28:36.709723 31456 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b41a87ae-50a2-4490-891e-99a17d655797" containerName="dnsmasq-dns" Mar 12 21:28:36.709792 master-0 kubenswrapper[31456]: I0312 21:28:36.709730 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="b41a87ae-50a2-4490-891e-99a17d655797" containerName="dnsmasq-dns" Mar 12 21:28:36.709792 master-0 kubenswrapper[31456]: E0312 21:28:36.709745 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b41a87ae-50a2-4490-891e-99a17d655797" containerName="init" Mar 12 21:28:36.709792 master-0 kubenswrapper[31456]: I0312 21:28:36.709751 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="b41a87ae-50a2-4490-891e-99a17d655797" containerName="init" Mar 12 21:28:36.709792 master-0 kubenswrapper[31456]: E0312 21:28:36.709769 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c59d7ee2-3288-42f9-9202-abedc026040d" containerName="nova-cell1-conductor-db-sync" Mar 12 21:28:36.709792 master-0 kubenswrapper[31456]: I0312 21:28:36.709777 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="c59d7ee2-3288-42f9-9202-abedc026040d" containerName="nova-cell1-conductor-db-sync" Mar 12 21:28:36.710032 master-0 kubenswrapper[31456]: I0312 21:28:36.710012 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cd86859-a26e-4b51-9c89-175cf23ef2f1" containerName="nova-manage" Mar 12 21:28:36.710075 master-0 kubenswrapper[31456]: I0312 21:28:36.710033 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="b41a87ae-50a2-4490-891e-99a17d655797" containerName="dnsmasq-dns" Mar 12 21:28:36.710075 master-0 kubenswrapper[31456]: I0312 21:28:36.710052 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="c59d7ee2-3288-42f9-9202-abedc026040d" containerName="nova-cell1-conductor-db-sync" Mar 12 21:28:36.710774 master-0 kubenswrapper[31456]: I0312 21:28:36.710750 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 12 21:28:36.713222 master-0 kubenswrapper[31456]: I0312 21:28:36.713173 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 12 21:28:36.722368 master-0 kubenswrapper[31456]: I0312 21:28:36.722311 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a07b1ad6-1e59-438b-acee-e722668be12d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a07b1ad6-1e59-438b-acee-e722668be12d\") " pod="openstack/nova-cell1-conductor-0" Mar 12 21:28:36.722575 master-0 kubenswrapper[31456]: I0312 21:28:36.722441 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a07b1ad6-1e59-438b-acee-e722668be12d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a07b1ad6-1e59-438b-acee-e722668be12d\") " pod="openstack/nova-cell1-conductor-0" Mar 12 21:28:36.723116 master-0 kubenswrapper[31456]: I0312 21:28:36.723093 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2887\" (UniqueName: \"kubernetes.io/projected/a07b1ad6-1e59-438b-acee-e722668be12d-kube-api-access-k2887\") pod \"nova-cell1-conductor-0\" (UID: \"a07b1ad6-1e59-438b-acee-e722668be12d\") " pod="openstack/nova-cell1-conductor-0" Mar 12 21:28:36.743738 master-0 kubenswrapper[31456]: I0312 21:28:36.743668 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 12 21:28:36.835860 master-0 kubenswrapper[31456]: I0312 21:28:36.833410 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2887\" (UniqueName: \"kubernetes.io/projected/a07b1ad6-1e59-438b-acee-e722668be12d-kube-api-access-k2887\") pod \"nova-cell1-conductor-0\" (UID: 
\"a07b1ad6-1e59-438b-acee-e722668be12d\") " pod="openstack/nova-cell1-conductor-0" Mar 12 21:28:36.835860 master-0 kubenswrapper[31456]: I0312 21:28:36.833495 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a07b1ad6-1e59-438b-acee-e722668be12d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a07b1ad6-1e59-438b-acee-e722668be12d\") " pod="openstack/nova-cell1-conductor-0" Mar 12 21:28:36.835860 master-0 kubenswrapper[31456]: I0312 21:28:36.833529 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a07b1ad6-1e59-438b-acee-e722668be12d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a07b1ad6-1e59-438b-acee-e722668be12d\") " pod="openstack/nova-cell1-conductor-0" Mar 12 21:28:36.841830 master-0 kubenswrapper[31456]: I0312 21:28:36.839732 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a07b1ad6-1e59-438b-acee-e722668be12d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a07b1ad6-1e59-438b-acee-e722668be12d\") " pod="openstack/nova-cell1-conductor-0" Mar 12 21:28:36.854998 master-0 kubenswrapper[31456]: I0312 21:28:36.854948 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a07b1ad6-1e59-438b-acee-e722668be12d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a07b1ad6-1e59-438b-acee-e722668be12d\") " pod="openstack/nova-cell1-conductor-0" Mar 12 21:28:36.858825 master-0 kubenswrapper[31456]: I0312 21:28:36.855446 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2887\" (UniqueName: \"kubernetes.io/projected/a07b1ad6-1e59-438b-acee-e722668be12d-kube-api-access-k2887\") pod \"nova-cell1-conductor-0\" (UID: \"a07b1ad6-1e59-438b-acee-e722668be12d\") " 
pod="openstack/nova-cell1-conductor-0" Mar 12 21:28:36.931450 master-0 kubenswrapper[31456]: I0312 21:28:36.931375 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 12 21:28:36.931654 master-0 kubenswrapper[31456]: I0312 21:28:36.931628 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4947333f-6917-4b79-830e-171f682e0309" containerName="nova-api-log" containerID="cri-o://3b7e4d1a5b83b8d16214618d2bc1bf47d9a2ee5baa56bbf1dd86d7081e40187e" gracePeriod=30 Mar 12 21:28:36.931826 master-0 kubenswrapper[31456]: I0312 21:28:36.931786 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4947333f-6917-4b79-830e-171f682e0309" containerName="nova-api-api" containerID="cri-o://db98bee1bcf9804748089488ebc128f3520f410758576e43ef795429c434eee7" gracePeriod=30 Mar 12 21:28:36.958118 master-0 kubenswrapper[31456]: I0312 21:28:36.958024 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 21:28:36.971970 master-0 kubenswrapper[31456]: I0312 21:28:36.971918 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 21:28:36.972439 master-0 kubenswrapper[31456]: I0312 21:28:36.972413 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="1e689ffe-338d-4b20-a02e-6819b05cf05d" containerName="nova-scheduler-scheduler" containerID="cri-o://cda09692e8b4c5eeb04ed4701693116ed3cd65e008f1414580b606625a09505e" gracePeriod=30 Mar 12 21:28:37.031444 master-0 kubenswrapper[31456]: I0312 21:28:37.031375 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 12 21:28:37.208824 master-0 kubenswrapper[31456]: I0312 21:28:37.205790 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b41a87ae-50a2-4490-891e-99a17d655797" path="/var/lib/kubelet/pods/b41a87ae-50a2-4490-891e-99a17d655797/volumes" Mar 12 21:28:37.415082 master-0 kubenswrapper[31456]: I0312 21:28:37.413170 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-conductor-0" Mar 12 21:28:37.575844 master-0 kubenswrapper[31456]: I0312 21:28:37.575423 31456 generic.go:334] "Generic (PLEG): container finished" podID="4947333f-6917-4b79-830e-171f682e0309" containerID="3b7e4d1a5b83b8d16214618d2bc1bf47d9a2ee5baa56bbf1dd86d7081e40187e" exitCode=143 Mar 12 21:28:37.575844 master-0 kubenswrapper[31456]: I0312 21:28:37.575717 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4947333f-6917-4b79-830e-171f682e0309","Type":"ContainerDied","Data":"3b7e4d1a5b83b8d16214618d2bc1bf47d9a2ee5baa56bbf1dd86d7081e40187e"} Mar 12 21:28:37.576779 master-0 kubenswrapper[31456]: I0312 21:28:37.576629 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 12 21:28:37.578001 master-0 kubenswrapper[31456]: I0312 21:28:37.577261 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="aa7b89ff-9555-485b-af52-9624240b80b4" containerName="nova-metadata-log" containerID="cri-o://93b6c8dec6e8a70ce3a3b9803d31bcb76af9e9afb9d49432713f94efa54cd95a" gracePeriod=30 Mar 12 21:28:37.578001 master-0 kubenswrapper[31456]: I0312 21:28:37.577415 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="aa7b89ff-9555-485b-af52-9624240b80b4" containerName="nova-metadata-metadata" containerID="cri-o://9cf1022797215700ae2dfb1cd4ff3abc0ba9f76621f8008ee839844da8f12aaa" 
gracePeriod=30
Mar 12 21:28:37.596882 master-0 kubenswrapper[31456]: W0312 21:28:37.590265 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda07b1ad6_1e59_438b_acee_e722668be12d.slice/crio-531a0e03291d743679c27f4b5fc96cf5db68e34575a74943b8de531aee265b97 WatchSource:0}: Error finding container 531a0e03291d743679c27f4b5fc96cf5db68e34575a74943b8de531aee265b97: Status 404 returned error can't find the container with id 531a0e03291d743679c27f4b5fc96cf5db68e34575a74943b8de531aee265b97
Mar 12 21:28:37.773826 master-0 kubenswrapper[31456]: E0312 21:28:37.772979 31456 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa7b89ff_9555_485b_af52_9624240b80b4.slice/crio-93b6c8dec6e8a70ce3a3b9803d31bcb76af9e9afb9d49432713f94efa54cd95a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa7b89ff_9555_485b_af52_9624240b80b4.slice/crio-conmon-93b6c8dec6e8a70ce3a3b9803d31bcb76af9e9afb9d49432713f94efa54cd95a.scope\": RecentStats: unable to find data in memory cache]"
Mar 12 21:28:37.773826 master-0 kubenswrapper[31456]: E0312 21:28:37.773034 31456 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa7b89ff_9555_485b_af52_9624240b80b4.slice/crio-9cf1022797215700ae2dfb1cd4ff3abc0ba9f76621f8008ee839844da8f12aaa.scope\": RecentStats: unable to find data in memory cache]"
Mar 12 21:28:37.900108 master-0 kubenswrapper[31456]: I0312 21:28:37.899944 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Mar 12 21:28:37.900108 master-0 kubenswrapper[31456]: I0312 21:28:37.900031 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Mar 12 21:28:38.148115 master-0 kubenswrapper[31456]: I0312 21:28:38.148063 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 12 21:28:38.293080 master-0 kubenswrapper[31456]: I0312 21:28:38.289590 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa7b89ff-9555-485b-af52-9624240b80b4-config-data\") pod \"aa7b89ff-9555-485b-af52-9624240b80b4\" (UID: \"aa7b89ff-9555-485b-af52-9624240b80b4\") "
Mar 12 21:28:38.293080 master-0 kubenswrapper[31456]: I0312 21:28:38.289878 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa7b89ff-9555-485b-af52-9624240b80b4-combined-ca-bundle\") pod \"aa7b89ff-9555-485b-af52-9624240b80b4\" (UID: \"aa7b89ff-9555-485b-af52-9624240b80b4\") "
Mar 12 21:28:38.293080 master-0 kubenswrapper[31456]: I0312 21:28:38.290083 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa7b89ff-9555-485b-af52-9624240b80b4-nova-metadata-tls-certs\") pod \"aa7b89ff-9555-485b-af52-9624240b80b4\" (UID: \"aa7b89ff-9555-485b-af52-9624240b80b4\") "
Mar 12 21:28:38.293080 master-0 kubenswrapper[31456]: I0312 21:28:38.290197 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktsqm\" (UniqueName: \"kubernetes.io/projected/aa7b89ff-9555-485b-af52-9624240b80b4-kube-api-access-ktsqm\") pod \"aa7b89ff-9555-485b-af52-9624240b80b4\" (UID: \"aa7b89ff-9555-485b-af52-9624240b80b4\") "
Mar 12 21:28:38.293080 master-0 kubenswrapper[31456]: I0312 21:28:38.290271 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa7b89ff-9555-485b-af52-9624240b80b4-logs\") pod \"aa7b89ff-9555-485b-af52-9624240b80b4\" (UID: \"aa7b89ff-9555-485b-af52-9624240b80b4\") "
Mar 12 21:28:38.296953 master-0 kubenswrapper[31456]: I0312 21:28:38.296897 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa7b89ff-9555-485b-af52-9624240b80b4-kube-api-access-ktsqm" (OuterVolumeSpecName: "kube-api-access-ktsqm") pod "aa7b89ff-9555-485b-af52-9624240b80b4" (UID: "aa7b89ff-9555-485b-af52-9624240b80b4"). InnerVolumeSpecName "kube-api-access-ktsqm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 21:28:38.297196 master-0 kubenswrapper[31456]: I0312 21:28:38.297166 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa7b89ff-9555-485b-af52-9624240b80b4-logs" (OuterVolumeSpecName: "logs") pod "aa7b89ff-9555-485b-af52-9624240b80b4" (UID: "aa7b89ff-9555-485b-af52-9624240b80b4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 21:28:38.346542 master-0 kubenswrapper[31456]: I0312 21:28:38.346470 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa7b89ff-9555-485b-af52-9624240b80b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa7b89ff-9555-485b-af52-9624240b80b4" (UID: "aa7b89ff-9555-485b-af52-9624240b80b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:28:38.350945 master-0 kubenswrapper[31456]: I0312 21:28:38.350901 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa7b89ff-9555-485b-af52-9624240b80b4-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "aa7b89ff-9555-485b-af52-9624240b80b4" (UID: "aa7b89ff-9555-485b-af52-9624240b80b4"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:28:38.358284 master-0 kubenswrapper[31456]: I0312 21:28:38.358221 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa7b89ff-9555-485b-af52-9624240b80b4-config-data" (OuterVolumeSpecName: "config-data") pod "aa7b89ff-9555-485b-af52-9624240b80b4" (UID: "aa7b89ff-9555-485b-af52-9624240b80b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:28:38.393508 master-0 kubenswrapper[31456]: I0312 21:28:38.393441 31456 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa7b89ff-9555-485b-af52-9624240b80b4-nova-metadata-tls-certs\") on node \"master-0\" DevicePath \"\""
Mar 12 21:28:38.393508 master-0 kubenswrapper[31456]: I0312 21:28:38.393493 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktsqm\" (UniqueName: \"kubernetes.io/projected/aa7b89ff-9555-485b-af52-9624240b80b4-kube-api-access-ktsqm\") on node \"master-0\" DevicePath \"\""
Mar 12 21:28:38.393508 master-0 kubenswrapper[31456]: I0312 21:28:38.393506 31456 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa7b89ff-9555-485b-af52-9624240b80b4-logs\") on node \"master-0\" DevicePath \"\""
Mar 12 21:28:38.393508 master-0 kubenswrapper[31456]: I0312 21:28:38.393516 31456 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa7b89ff-9555-485b-af52-9624240b80b4-config-data\") on node \"master-0\" DevicePath \"\""
Mar 12 21:28:38.394009 master-0 kubenswrapper[31456]: I0312 21:28:38.393525 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa7b89ff-9555-485b-af52-9624240b80b4-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 21:28:38.591982 master-0 kubenswrapper[31456]: I0312 21:28:38.591799 31456 generic.go:334] "Generic (PLEG): container finished" podID="aa7b89ff-9555-485b-af52-9624240b80b4" containerID="9cf1022797215700ae2dfb1cd4ff3abc0ba9f76621f8008ee839844da8f12aaa" exitCode=0
Mar 12 21:28:38.591982 master-0 kubenswrapper[31456]: I0312 21:28:38.591865 31456 generic.go:334] "Generic (PLEG): container finished" podID="aa7b89ff-9555-485b-af52-9624240b80b4" containerID="93b6c8dec6e8a70ce3a3b9803d31bcb76af9e9afb9d49432713f94efa54cd95a" exitCode=143
Mar 12 21:28:38.591982 master-0 kubenswrapper[31456]: I0312 21:28:38.591873 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aa7b89ff-9555-485b-af52-9624240b80b4","Type":"ContainerDied","Data":"9cf1022797215700ae2dfb1cd4ff3abc0ba9f76621f8008ee839844da8f12aaa"}
Mar 12 21:28:38.591982 master-0 kubenswrapper[31456]: I0312 21:28:38.591877 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 12 21:28:38.592838 master-0 kubenswrapper[31456]: I0312 21:28:38.591934 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aa7b89ff-9555-485b-af52-9624240b80b4","Type":"ContainerDied","Data":"93b6c8dec6e8a70ce3a3b9803d31bcb76af9e9afb9d49432713f94efa54cd95a"}
Mar 12 21:28:38.592838 master-0 kubenswrapper[31456]: I0312 21:28:38.592573 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aa7b89ff-9555-485b-af52-9624240b80b4","Type":"ContainerDied","Data":"92f1d8c830f0639e4e8364f11291a830a59f8dce950aae748d1f199d26cbc090"}
Mar 12 21:28:38.592838 master-0 kubenswrapper[31456]: I0312 21:28:38.591952 31456 scope.go:117] "RemoveContainer" containerID="9cf1022797215700ae2dfb1cd4ff3abc0ba9f76621f8008ee839844da8f12aaa"
Mar 12 21:28:38.595169 master-0 kubenswrapper[31456]: I0312 21:28:38.595064 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a07b1ad6-1e59-438b-acee-e722668be12d","Type":"ContainerStarted","Data":"339dc54b1c6fa812452d04f846cd64ab5dc4b4f890092ff12e11c3c29819dd31"}
Mar 12 21:28:38.595255 master-0 kubenswrapper[31456]: I0312 21:28:38.595190 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a07b1ad6-1e59-438b-acee-e722668be12d","Type":"ContainerStarted","Data":"531a0e03291d743679c27f4b5fc96cf5db68e34575a74943b8de531aee265b97"}
Mar 12 21:28:38.595379 master-0 kubenswrapper[31456]: I0312 21:28:38.595342 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Mar 12 21:28:38.639128 master-0 kubenswrapper[31456]: I0312 21:28:38.639021 31456 scope.go:117] "RemoveContainer" containerID="93b6c8dec6e8a70ce3a3b9803d31bcb76af9e9afb9d49432713f94efa54cd95a"
Mar 12 21:28:38.671140 master-0 kubenswrapper[31456]: I0312 21:28:38.671073 31456 scope.go:117] "RemoveContainer" containerID="9cf1022797215700ae2dfb1cd4ff3abc0ba9f76621f8008ee839844da8f12aaa"
Mar 12 21:28:38.677391 master-0 kubenswrapper[31456]: E0312 21:28:38.677327 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cf1022797215700ae2dfb1cd4ff3abc0ba9f76621f8008ee839844da8f12aaa\": container with ID starting with 9cf1022797215700ae2dfb1cd4ff3abc0ba9f76621f8008ee839844da8f12aaa not found: ID does not exist" containerID="9cf1022797215700ae2dfb1cd4ff3abc0ba9f76621f8008ee839844da8f12aaa"
Mar 12 21:28:38.677489 master-0 kubenswrapper[31456]: I0312 21:28:38.677388 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cf1022797215700ae2dfb1cd4ff3abc0ba9f76621f8008ee839844da8f12aaa"} err="failed to get container status \"9cf1022797215700ae2dfb1cd4ff3abc0ba9f76621f8008ee839844da8f12aaa\": rpc error: code = NotFound desc = could not find container \"9cf1022797215700ae2dfb1cd4ff3abc0ba9f76621f8008ee839844da8f12aaa\": container with ID starting with 9cf1022797215700ae2dfb1cd4ff3abc0ba9f76621f8008ee839844da8f12aaa not found: ID does not exist"
Mar 12 21:28:38.677489 master-0 kubenswrapper[31456]: I0312 21:28:38.677411 31456 scope.go:117] "RemoveContainer" containerID="93b6c8dec6e8a70ce3a3b9803d31bcb76af9e9afb9d49432713f94efa54cd95a"
Mar 12 21:28:38.677837 master-0 kubenswrapper[31456]: E0312 21:28:38.677771 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93b6c8dec6e8a70ce3a3b9803d31bcb76af9e9afb9d49432713f94efa54cd95a\": container with ID starting with 93b6c8dec6e8a70ce3a3b9803d31bcb76af9e9afb9d49432713f94efa54cd95a not found: ID does not exist" containerID="93b6c8dec6e8a70ce3a3b9803d31bcb76af9e9afb9d49432713f94efa54cd95a"
Mar 12 21:28:38.677837 master-0 kubenswrapper[31456]: I0312 21:28:38.677828 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93b6c8dec6e8a70ce3a3b9803d31bcb76af9e9afb9d49432713f94efa54cd95a"} err="failed to get container status \"93b6c8dec6e8a70ce3a3b9803d31bcb76af9e9afb9d49432713f94efa54cd95a\": rpc error: code = NotFound desc = could not find container \"93b6c8dec6e8a70ce3a3b9803d31bcb76af9e9afb9d49432713f94efa54cd95a\": container with ID starting with 93b6c8dec6e8a70ce3a3b9803d31bcb76af9e9afb9d49432713f94efa54cd95a not found: ID does not exist"
Mar 12 21:28:38.677949 master-0 kubenswrapper[31456]: I0312 21:28:38.677842 31456 scope.go:117] "RemoveContainer" containerID="9cf1022797215700ae2dfb1cd4ff3abc0ba9f76621f8008ee839844da8f12aaa"
Mar 12 21:28:38.678289 master-0 kubenswrapper[31456]: I0312 21:28:38.678210 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cf1022797215700ae2dfb1cd4ff3abc0ba9f76621f8008ee839844da8f12aaa"} err="failed to get container status \"9cf1022797215700ae2dfb1cd4ff3abc0ba9f76621f8008ee839844da8f12aaa\": rpc error: code = NotFound desc = could not find container \"9cf1022797215700ae2dfb1cd4ff3abc0ba9f76621f8008ee839844da8f12aaa\": container with ID starting with 9cf1022797215700ae2dfb1cd4ff3abc0ba9f76621f8008ee839844da8f12aaa not found: ID does not exist"
Mar 12 21:28:38.678289 master-0 kubenswrapper[31456]: I0312 21:28:38.678272 31456 scope.go:117] "RemoveContainer" containerID="93b6c8dec6e8a70ce3a3b9803d31bcb76af9e9afb9d49432713f94efa54cd95a"
Mar 12 21:28:38.678611 master-0 kubenswrapper[31456]: I0312 21:28:38.678551 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93b6c8dec6e8a70ce3a3b9803d31bcb76af9e9afb9d49432713f94efa54cd95a"} err="failed to get container status \"93b6c8dec6e8a70ce3a3b9803d31bcb76af9e9afb9d49432713f94efa54cd95a\": rpc error: code = NotFound desc = could not find container \"93b6c8dec6e8a70ce3a3b9803d31bcb76af9e9afb9d49432713f94efa54cd95a\": container with ID starting with 93b6c8dec6e8a70ce3a3b9803d31bcb76af9e9afb9d49432713f94efa54cd95a not found: ID does not exist"
Mar 12 21:28:38.980437 master-0 kubenswrapper[31456]: I0312 21:28:38.980230 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.980193479 podStartE2EDuration="2.980193479s" podCreationTimestamp="2026-03-12 21:28:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:28:38.959828606 +0000 UTC m=+1180.034433974" watchObservedRunningTime="2026-03-12 21:28:38.980193479 +0000 UTC m=+1180.054798837"
Mar 12 21:28:39.032318 master-0 kubenswrapper[31456]: I0312 21:28:39.031833 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 12 21:28:39.050837 master-0 kubenswrapper[31456]: I0312 21:28:39.050568 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Mar 12 21:28:39.086746 master-0 kubenswrapper[31456]: I0312 21:28:39.086662 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Mar 12 21:28:39.087320 master-0 kubenswrapper[31456]: E0312 21:28:39.087289 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa7b89ff-9555-485b-af52-9624240b80b4" containerName="nova-metadata-log"
Mar 12 21:28:39.087320 master-0 kubenswrapper[31456]: I0312 21:28:39.087315 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa7b89ff-9555-485b-af52-9624240b80b4" containerName="nova-metadata-log"
Mar 12 21:28:39.087423 master-0 kubenswrapper[31456]: E0312 21:28:39.087399 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa7b89ff-9555-485b-af52-9624240b80b4" containerName="nova-metadata-metadata"
Mar 12 21:28:39.087423 master-0 kubenswrapper[31456]: I0312 21:28:39.087409 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa7b89ff-9555-485b-af52-9624240b80b4" containerName="nova-metadata-metadata"
Mar 12 21:28:39.087778 master-0 kubenswrapper[31456]: I0312 21:28:39.087747 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa7b89ff-9555-485b-af52-9624240b80b4" containerName="nova-metadata-log"
Mar 12 21:28:39.087778 master-0 kubenswrapper[31456]: I0312 21:28:39.087770 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa7b89ff-9555-485b-af52-9624240b80b4" containerName="nova-metadata-metadata"
Mar 12 21:28:39.089383 master-0 kubenswrapper[31456]: I0312 21:28:39.089346 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 12 21:28:39.092240 master-0 kubenswrapper[31456]: I0312 21:28:39.092194 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Mar 12 21:28:39.092835 master-0 kubenswrapper[31456]: I0312 21:28:39.092782 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Mar 12 21:28:39.110317 master-0 kubenswrapper[31456]: I0312 21:28:39.110242 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 12 21:28:39.184834 master-0 kubenswrapper[31456]: I0312 21:28:39.182674 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/ironic-conductor-0" podUID="93110548-5710-4149-bd72-8e42693c948e" containerName="ironic-conductor" probeResult="failure" output=<
Mar 12 21:28:39.184834 master-0 kubenswrapper[31456]: ironic-conductor-0 is offline
Mar 12 21:28:39.184834 master-0 kubenswrapper[31456]: >
Mar 12 21:28:39.209833 master-0 kubenswrapper[31456]: I0312 21:28:39.209459 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa7b89ff-9555-485b-af52-9624240b80b4" path="/var/lib/kubelet/pods/aa7b89ff-9555-485b-af52-9624240b80b4/volumes"
Mar 12 21:28:39.250896 master-0 kubenswrapper[31456]: I0312 21:28:39.250263 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5d71af3-d4c9-4246-b9c2-276fe8433018-config-data\") pod \"nova-metadata-0\" (UID: \"d5d71af3-d4c9-4246-b9c2-276fe8433018\") " pod="openstack/nova-metadata-0"
Mar 12 21:28:39.250896 master-0 kubenswrapper[31456]: I0312 21:28:39.250880 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfcj6\" (UniqueName: \"kubernetes.io/projected/d5d71af3-d4c9-4246-b9c2-276fe8433018-kube-api-access-cfcj6\") pod \"nova-metadata-0\" (UID: \"d5d71af3-d4c9-4246-b9c2-276fe8433018\") " pod="openstack/nova-metadata-0"
Mar 12 21:28:39.251132 master-0 kubenswrapper[31456]: I0312 21:28:39.250942 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5d71af3-d4c9-4246-b9c2-276fe8433018-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d5d71af3-d4c9-4246-b9c2-276fe8433018\") " pod="openstack/nova-metadata-0"
Mar 12 21:28:39.253853 master-0 kubenswrapper[31456]: I0312 21:28:39.251235 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5d71af3-d4c9-4246-b9c2-276fe8433018-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d5d71af3-d4c9-4246-b9c2-276fe8433018\") " pod="openstack/nova-metadata-0"
Mar 12 21:28:39.253853 master-0 kubenswrapper[31456]: I0312 21:28:39.251308 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5d71af3-d4c9-4246-b9c2-276fe8433018-logs\") pod \"nova-metadata-0\" (UID: \"d5d71af3-d4c9-4246-b9c2-276fe8433018\") " pod="openstack/nova-metadata-0"
Mar 12 21:28:39.353399 master-0 kubenswrapper[31456]: I0312 21:28:39.353336 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfcj6\" (UniqueName: \"kubernetes.io/projected/d5d71af3-d4c9-4246-b9c2-276fe8433018-kube-api-access-cfcj6\") pod \"nova-metadata-0\" (UID: \"d5d71af3-d4c9-4246-b9c2-276fe8433018\") " pod="openstack/nova-metadata-0"
Mar 12 21:28:39.353610 master-0 kubenswrapper[31456]: I0312 21:28:39.353429 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5d71af3-d4c9-4246-b9c2-276fe8433018-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d5d71af3-d4c9-4246-b9c2-276fe8433018\") " pod="openstack/nova-metadata-0"
Mar 12 21:28:39.353610 master-0 kubenswrapper[31456]: I0312 21:28:39.353483 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5d71af3-d4c9-4246-b9c2-276fe8433018-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d5d71af3-d4c9-4246-b9c2-276fe8433018\") " pod="openstack/nova-metadata-0"
Mar 12 21:28:39.353610 master-0 kubenswrapper[31456]: I0312 21:28:39.353506 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5d71af3-d4c9-4246-b9c2-276fe8433018-logs\") pod \"nova-metadata-0\" (UID: \"d5d71af3-d4c9-4246-b9c2-276fe8433018\") " pod="openstack/nova-metadata-0"
Mar 12 21:28:39.353709 master-0 kubenswrapper[31456]: I0312 21:28:39.353605 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5d71af3-d4c9-4246-b9c2-276fe8433018-config-data\") pod \"nova-metadata-0\" (UID: \"d5d71af3-d4c9-4246-b9c2-276fe8433018\") " pod="openstack/nova-metadata-0"
Mar 12 21:28:39.354976 master-0 kubenswrapper[31456]: I0312 21:28:39.354938 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5d71af3-d4c9-4246-b9c2-276fe8433018-logs\") pod \"nova-metadata-0\" (UID: \"d5d71af3-d4c9-4246-b9c2-276fe8433018\") " pod="openstack/nova-metadata-0"
Mar 12 21:28:39.357946 master-0 kubenswrapper[31456]: I0312 21:28:39.357665 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5d71af3-d4c9-4246-b9c2-276fe8433018-config-data\") pod \"nova-metadata-0\" (UID: \"d5d71af3-d4c9-4246-b9c2-276fe8433018\") " pod="openstack/nova-metadata-0"
Mar 12 21:28:39.360353 master-0 kubenswrapper[31456]: I0312 21:28:39.360306 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5d71af3-d4c9-4246-b9c2-276fe8433018-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d5d71af3-d4c9-4246-b9c2-276fe8433018\") " pod="openstack/nova-metadata-0"
Mar 12 21:28:39.375189 master-0 kubenswrapper[31456]: I0312 21:28:39.375151 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5d71af3-d4c9-4246-b9c2-276fe8433018-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d5d71af3-d4c9-4246-b9c2-276fe8433018\") " pod="openstack/nova-metadata-0"
Mar 12 21:28:39.376376 master-0 kubenswrapper[31456]: I0312 21:28:39.376308 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfcj6\" (UniqueName: \"kubernetes.io/projected/d5d71af3-d4c9-4246-b9c2-276fe8433018-kube-api-access-cfcj6\") pod \"nova-metadata-0\" (UID: \"d5d71af3-d4c9-4246-b9c2-276fe8433018\") " pod="openstack/nova-metadata-0"
Mar 12 21:28:39.468197 master-0 kubenswrapper[31456]: I0312 21:28:39.467965 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 12 21:28:40.000979 master-0 kubenswrapper[31456]: I0312 21:28:40.000890 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 12 21:28:40.027282 master-0 kubenswrapper[31456]: E0312 21:28:40.027167 31456 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cda09692e8b4c5eeb04ed4701693116ed3cd65e008f1414580b606625a09505e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 12 21:28:40.028860 master-0 kubenswrapper[31456]: E0312 21:28:40.028791 31456 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cda09692e8b4c5eeb04ed4701693116ed3cd65e008f1414580b606625a09505e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 12 21:28:40.031227 master-0 kubenswrapper[31456]: E0312 21:28:40.031179 31456 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cda09692e8b4c5eeb04ed4701693116ed3cd65e008f1414580b606625a09505e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 12 21:28:40.031345 master-0 kubenswrapper[31456]: E0312 21:28:40.031228 31456 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="1e689ffe-338d-4b20-a02e-6819b05cf05d" containerName="nova-scheduler-scheduler"
Mar 12 21:28:40.662059 master-0 kubenswrapper[31456]: I0312 21:28:40.661973 31456 generic.go:334] "Generic (PLEG): container finished" podID="4947333f-6917-4b79-830e-171f682e0309" containerID="db98bee1bcf9804748089488ebc128f3520f410758576e43ef795429c434eee7" exitCode=0
Mar 12 21:28:40.662301 master-0 kubenswrapper[31456]: I0312 21:28:40.662118 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4947333f-6917-4b79-830e-171f682e0309","Type":"ContainerDied","Data":"db98bee1bcf9804748089488ebc128f3520f410758576e43ef795429c434eee7"}
Mar 12 21:28:40.665556 master-0 kubenswrapper[31456]: I0312 21:28:40.665502 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5d71af3-d4c9-4246-b9c2-276fe8433018","Type":"ContainerStarted","Data":"4d088f32c8afa009a0e612016e17aa0a2ece7cd0f63535a281b03bedea18b47a"}
Mar 12 21:28:40.665751 master-0 kubenswrapper[31456]: I0312 21:28:40.665725 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5d71af3-d4c9-4246-b9c2-276fe8433018","Type":"ContainerStarted","Data":"d37165bf6ef79ad5497be92e3b4ca2cea1c31caac560ee4b9b9e4870c8894797"}
Mar 12 21:28:40.665918 master-0 kubenswrapper[31456]: I0312 21:28:40.665888 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5d71af3-d4c9-4246-b9c2-276fe8433018","Type":"ContainerStarted","Data":"b4575d1907a1c9da4e49132183b5a048d162a84087cd3067e912f36f682c9b56"}
Mar 12 21:28:40.725955 master-0 kubenswrapper[31456]: I0312 21:28:40.722721 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.722699982 podStartE2EDuration="1.722699982s" podCreationTimestamp="2026-03-12 21:28:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:28:40.711187554 +0000 UTC m=+1181.785792882" watchObservedRunningTime="2026-03-12 21:28:40.722699982 +0000 UTC m=+1181.797305310"
Mar 12 21:28:40.744561 master-0 kubenswrapper[31456]: I0312 21:28:40.744501 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 12 21:28:40.788266 master-0 kubenswrapper[31456]: I0312 21:28:40.788209 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-conductor-0"
Mar 12 21:28:40.806225 master-0 kubenswrapper[31456]: I0312 21:28:40.806183 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0"
Mar 12 21:28:40.861828 master-0 kubenswrapper[31456]: I0312 21:28:40.861079 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0"
Mar 12 21:28:40.906197 master-0 kubenswrapper[31456]: I0312 21:28:40.906055 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4947333f-6917-4b79-830e-171f682e0309-logs\") pod \"4947333f-6917-4b79-830e-171f682e0309\" (UID: \"4947333f-6917-4b79-830e-171f682e0309\") "
Mar 12 21:28:40.906434 master-0 kubenswrapper[31456]: I0312 21:28:40.906246 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4947333f-6917-4b79-830e-171f682e0309-config-data\") pod \"4947333f-6917-4b79-830e-171f682e0309\" (UID: \"4947333f-6917-4b79-830e-171f682e0309\") "
Mar 12 21:28:40.906434 master-0 kubenswrapper[31456]: I0312 21:28:40.906303 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4947333f-6917-4b79-830e-171f682e0309-combined-ca-bundle\") pod \"4947333f-6917-4b79-830e-171f682e0309\" (UID: \"4947333f-6917-4b79-830e-171f682e0309\") "
Mar 12 21:28:40.906529 master-0 kubenswrapper[31456]: I0312 21:28:40.906476 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgfh6\" (UniqueName: \"kubernetes.io/projected/4947333f-6917-4b79-830e-171f682e0309-kube-api-access-jgfh6\") pod \"4947333f-6917-4b79-830e-171f682e0309\" (UID: \"4947333f-6917-4b79-830e-171f682e0309\") "
Mar 12 21:28:40.906529 master-0 kubenswrapper[31456]: I0312 21:28:40.906488 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4947333f-6917-4b79-830e-171f682e0309-logs" (OuterVolumeSpecName: "logs") pod "4947333f-6917-4b79-830e-171f682e0309" (UID: "4947333f-6917-4b79-830e-171f682e0309"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 21:28:40.908256 master-0 kubenswrapper[31456]: I0312 21:28:40.908211 31456 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4947333f-6917-4b79-830e-171f682e0309-logs\") on node \"master-0\" DevicePath \"\""
Mar 12 21:28:40.909918 master-0 kubenswrapper[31456]: I0312 21:28:40.909871 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4947333f-6917-4b79-830e-171f682e0309-kube-api-access-jgfh6" (OuterVolumeSpecName: "kube-api-access-jgfh6") pod "4947333f-6917-4b79-830e-171f682e0309" (UID: "4947333f-6917-4b79-830e-171f682e0309"). InnerVolumeSpecName "kube-api-access-jgfh6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 21:28:40.937939 master-0 kubenswrapper[31456]: I0312 21:28:40.937866 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4947333f-6917-4b79-830e-171f682e0309-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4947333f-6917-4b79-830e-171f682e0309" (UID: "4947333f-6917-4b79-830e-171f682e0309"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:28:40.947233 master-0 kubenswrapper[31456]: I0312 21:28:40.947174 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4947333f-6917-4b79-830e-171f682e0309-config-data" (OuterVolumeSpecName: "config-data") pod "4947333f-6917-4b79-830e-171f682e0309" (UID: "4947333f-6917-4b79-830e-171f682e0309"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 21:28:41.010989 master-0 kubenswrapper[31456]: I0312 21:28:41.010835 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgfh6\" (UniqueName: \"kubernetes.io/projected/4947333f-6917-4b79-830e-171f682e0309-kube-api-access-jgfh6\") on node \"master-0\" DevicePath \"\""
Mar 12 21:28:41.010989 master-0 kubenswrapper[31456]: I0312 21:28:41.010880 31456 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4947333f-6917-4b79-830e-171f682e0309-config-data\") on node \"master-0\" DevicePath \"\""
Mar 12 21:28:41.010989 master-0 kubenswrapper[31456]: I0312 21:28:41.010908 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4947333f-6917-4b79-830e-171f682e0309-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 12 21:28:41.677352 master-0 kubenswrapper[31456]: I0312 21:28:41.677247 31456 generic.go:334] "Generic (PLEG): container finished" podID="1e689ffe-338d-4b20-a02e-6819b05cf05d" containerID="cda09692e8b4c5eeb04ed4701693116ed3cd65e008f1414580b606625a09505e" exitCode=0
Mar 12 21:28:41.677352 master-0 kubenswrapper[31456]: I0312 21:28:41.677293 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1e689ffe-338d-4b20-a02e-6819b05cf05d","Type":"ContainerDied","Data":"cda09692e8b4c5eeb04ed4701693116ed3cd65e008f1414580b606625a09505e"}
Mar 12 21:28:41.679246 master-0 kubenswrapper[31456]: I0312 21:28:41.679190 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4947333f-6917-4b79-830e-171f682e0309","Type":"ContainerDied","Data":"363ff73f2ac9ba41f3de67cfd37b849e62904898ade8febda67c843536fc6e4b"}
Mar 12 21:28:41.679330 master-0 kubenswrapper[31456]: I0312 21:28:41.679245 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 12 21:28:41.679330 master-0 kubenswrapper[31456]: I0312 21:28:41.679277 31456 scope.go:117] "RemoveContainer" containerID="db98bee1bcf9804748089488ebc128f3520f410758576e43ef795429c434eee7"
Mar 12 21:28:41.786511 master-0 kubenswrapper[31456]: I0312 21:28:41.786435 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 12 21:28:41.790357 master-0 kubenswrapper[31456]: I0312 21:28:41.790283 31456 scope.go:117] "RemoveContainer" containerID="3b7e4d1a5b83b8d16214618d2bc1bf47d9a2ee5baa56bbf1dd86d7081e40187e"
Mar 12 21:28:41.829119 master-0 kubenswrapper[31456]: I0312 21:28:41.829041 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 12 21:28:41.857316 master-0 kubenswrapper[31456]: I0312 21:28:41.844291 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Mar 12 21:28:41.870031 master-0 kubenswrapper[31456]: I0312 21:28:41.869984 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Mar 12 21:28:41.870913 master-0 kubenswrapper[31456]: E0312 21:28:41.870884 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4947333f-6917-4b79-830e-171f682e0309" containerName="nova-api-api"
Mar 12 21:28:41.871007 master-0 kubenswrapper[31456]: I0312 21:28:41.870995 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="4947333f-6917-4b79-830e-171f682e0309" containerName="nova-api-api"
Mar 12 21:28:41.871100 master-0 kubenswrapper[31456]: E0312 21:28:41.871090 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e689ffe-338d-4b20-a02e-6819b05cf05d" containerName="nova-scheduler-scheduler"
Mar 12 21:28:41.871158 master-0 kubenswrapper[31456]: I0312 21:28:41.871149 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e689ffe-338d-4b20-a02e-6819b05cf05d" containerName="nova-scheduler-scheduler"
Mar 12 21:28:41.871232 master-0 kubenswrapper[31456]: E0312 21:28:41.871222 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4947333f-6917-4b79-830e-171f682e0309" containerName="nova-api-log"
Mar 12 21:28:41.871296 master-0 kubenswrapper[31456]: I0312 21:28:41.871286 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="4947333f-6917-4b79-830e-171f682e0309" containerName="nova-api-log"
Mar 12 21:28:41.871641 master-0 kubenswrapper[31456]: I0312 21:28:41.871628 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="4947333f-6917-4b79-830e-171f682e0309" containerName="nova-api-api"
Mar 12 21:28:41.871774 master-0 kubenswrapper[31456]: I0312 21:28:41.871757 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e689ffe-338d-4b20-a02e-6819b05cf05d" containerName="nova-scheduler-scheduler"
Mar 12 21:28:41.871896 master-0 kubenswrapper[31456]: I0312 21:28:41.871881 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="4947333f-6917-4b79-830e-171f682e0309" containerName="nova-api-log"
Mar 12 21:28:41.881029 master-0 kubenswrapper[31456]: I0312 21:28:41.881029 31456 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-api-0" Mar 12 21:28:41.884232 master-0 kubenswrapper[31456]: I0312 21:28:41.884188 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 12 21:28:41.901055 master-0 kubenswrapper[31456]: I0312 21:28:41.888014 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 21:28:41.945770 master-0 kubenswrapper[31456]: I0312 21:28:41.945592 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e689ffe-338d-4b20-a02e-6819b05cf05d-combined-ca-bundle\") pod \"1e689ffe-338d-4b20-a02e-6819b05cf05d\" (UID: \"1e689ffe-338d-4b20-a02e-6819b05cf05d\") " Mar 12 21:28:41.945770 master-0 kubenswrapper[31456]: I0312 21:28:41.945744 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e689ffe-338d-4b20-a02e-6819b05cf05d-config-data\") pod \"1e689ffe-338d-4b20-a02e-6819b05cf05d\" (UID: \"1e689ffe-338d-4b20-a02e-6819b05cf05d\") " Mar 12 21:28:41.946037 master-0 kubenswrapper[31456]: I0312 21:28:41.945943 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsqgh\" (UniqueName: \"kubernetes.io/projected/1e689ffe-338d-4b20-a02e-6819b05cf05d-kube-api-access-xsqgh\") pod \"1e689ffe-338d-4b20-a02e-6819b05cf05d\" (UID: \"1e689ffe-338d-4b20-a02e-6819b05cf05d\") " Mar 12 21:28:41.948577 master-0 kubenswrapper[31456]: I0312 21:28:41.948537 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59hgx\" (UniqueName: \"kubernetes.io/projected/1e5d21ea-b20b-4112-8311-c9fc0cc86034-kube-api-access-59hgx\") pod \"nova-api-0\" (UID: \"1e5d21ea-b20b-4112-8311-c9fc0cc86034\") " pod="openstack/nova-api-0" Mar 12 21:28:41.948683 master-0 kubenswrapper[31456]: I0312 21:28:41.948657 31456 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e5d21ea-b20b-4112-8311-c9fc0cc86034-logs\") pod \"nova-api-0\" (UID: \"1e5d21ea-b20b-4112-8311-c9fc0cc86034\") " pod="openstack/nova-api-0" Mar 12 21:28:41.949003 master-0 kubenswrapper[31456]: I0312 21:28:41.948913 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e5d21ea-b20b-4112-8311-c9fc0cc86034-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1e5d21ea-b20b-4112-8311-c9fc0cc86034\") " pod="openstack/nova-api-0" Mar 12 21:28:41.949233 master-0 kubenswrapper[31456]: I0312 21:28:41.949207 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e5d21ea-b20b-4112-8311-c9fc0cc86034-config-data\") pod \"nova-api-0\" (UID: \"1e5d21ea-b20b-4112-8311-c9fc0cc86034\") " pod="openstack/nova-api-0" Mar 12 21:28:41.980483 master-0 kubenswrapper[31456]: I0312 21:28:41.980330 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e689ffe-338d-4b20-a02e-6819b05cf05d-kube-api-access-xsqgh" (OuterVolumeSpecName: "kube-api-access-xsqgh") pod "1e689ffe-338d-4b20-a02e-6819b05cf05d" (UID: "1e689ffe-338d-4b20-a02e-6819b05cf05d"). InnerVolumeSpecName "kube-api-access-xsqgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:28:41.981664 master-0 kubenswrapper[31456]: I0312 21:28:41.981589 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e689ffe-338d-4b20-a02e-6819b05cf05d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e689ffe-338d-4b20-a02e-6819b05cf05d" (UID: "1e689ffe-338d-4b20-a02e-6819b05cf05d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:28:41.995134 master-0 kubenswrapper[31456]: I0312 21:28:41.995056 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e689ffe-338d-4b20-a02e-6819b05cf05d-config-data" (OuterVolumeSpecName: "config-data") pod "1e689ffe-338d-4b20-a02e-6819b05cf05d" (UID: "1e689ffe-338d-4b20-a02e-6819b05cf05d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:28:42.052577 master-0 kubenswrapper[31456]: I0312 21:28:42.052522 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59hgx\" (UniqueName: \"kubernetes.io/projected/1e5d21ea-b20b-4112-8311-c9fc0cc86034-kube-api-access-59hgx\") pod \"nova-api-0\" (UID: \"1e5d21ea-b20b-4112-8311-c9fc0cc86034\") " pod="openstack/nova-api-0" Mar 12 21:28:42.053660 master-0 kubenswrapper[31456]: I0312 21:28:42.052636 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e5d21ea-b20b-4112-8311-c9fc0cc86034-logs\") pod \"nova-api-0\" (UID: \"1e5d21ea-b20b-4112-8311-c9fc0cc86034\") " pod="openstack/nova-api-0" Mar 12 21:28:42.053660 master-0 kubenswrapper[31456]: I0312 21:28:42.052717 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e5d21ea-b20b-4112-8311-c9fc0cc86034-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1e5d21ea-b20b-4112-8311-c9fc0cc86034\") " pod="openstack/nova-api-0" Mar 12 21:28:42.053660 master-0 kubenswrapper[31456]: I0312 21:28:42.052754 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e5d21ea-b20b-4112-8311-c9fc0cc86034-config-data\") pod \"nova-api-0\" (UID: \"1e5d21ea-b20b-4112-8311-c9fc0cc86034\") " pod="openstack/nova-api-0" Mar 12 21:28:42.053660 master-0 
kubenswrapper[31456]: I0312 21:28:42.052900 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e689ffe-338d-4b20-a02e-6819b05cf05d-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:28:42.053660 master-0 kubenswrapper[31456]: I0312 21:28:42.052914 31456 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e689ffe-338d-4b20-a02e-6819b05cf05d-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 21:28:42.053660 master-0 kubenswrapper[31456]: I0312 21:28:42.052923 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsqgh\" (UniqueName: \"kubernetes.io/projected/1e689ffe-338d-4b20-a02e-6819b05cf05d-kube-api-access-xsqgh\") on node \"master-0\" DevicePath \"\"" Mar 12 21:28:42.055402 master-0 kubenswrapper[31456]: I0312 21:28:42.054286 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e5d21ea-b20b-4112-8311-c9fc0cc86034-logs\") pod \"nova-api-0\" (UID: \"1e5d21ea-b20b-4112-8311-c9fc0cc86034\") " pod="openstack/nova-api-0" Mar 12 21:28:42.057944 master-0 kubenswrapper[31456]: I0312 21:28:42.057840 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e5d21ea-b20b-4112-8311-c9fc0cc86034-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1e5d21ea-b20b-4112-8311-c9fc0cc86034\") " pod="openstack/nova-api-0" Mar 12 21:28:42.058889 master-0 kubenswrapper[31456]: I0312 21:28:42.058730 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e5d21ea-b20b-4112-8311-c9fc0cc86034-config-data\") pod \"nova-api-0\" (UID: \"1e5d21ea-b20b-4112-8311-c9fc0cc86034\") " pod="openstack/nova-api-0" Mar 12 21:28:42.070910 master-0 kubenswrapper[31456]: I0312 21:28:42.070769 31456 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59hgx\" (UniqueName: \"kubernetes.io/projected/1e5d21ea-b20b-4112-8311-c9fc0cc86034-kube-api-access-59hgx\") pod \"nova-api-0\" (UID: \"1e5d21ea-b20b-4112-8311-c9fc0cc86034\") " pod="openstack/nova-api-0" Mar 12 21:28:42.222317 master-0 kubenswrapper[31456]: I0312 21:28:42.222161 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 12 21:28:42.720565 master-0 kubenswrapper[31456]: I0312 21:28:42.715038 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 21:28:42.720565 master-0 kubenswrapper[31456]: I0312 21:28:42.715365 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1e689ffe-338d-4b20-a02e-6819b05cf05d","Type":"ContainerDied","Data":"00be4a85f0ff9ce1b4566bc542c200ebcc8cd364d15eff33463e3e7b133391cc"} Mar 12 21:28:42.720565 master-0 kubenswrapper[31456]: I0312 21:28:42.715436 31456 scope.go:117] "RemoveContainer" containerID="cda09692e8b4c5eeb04ed4701693116ed3cd65e008f1414580b606625a09505e" Mar 12 21:28:42.746314 master-0 kubenswrapper[31456]: I0312 21:28:42.746249 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 21:28:42.819941 master-0 kubenswrapper[31456]: I0312 21:28:42.819869 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 21:28:42.844041 master-0 kubenswrapper[31456]: I0312 21:28:42.843966 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 21:28:42.874132 master-0 kubenswrapper[31456]: I0312 21:28:42.854911 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 21:28:42.874132 master-0 kubenswrapper[31456]: I0312 21:28:42.856599 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 21:28:42.874132 master-0 kubenswrapper[31456]: I0312 21:28:42.859042 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 12 21:28:42.874132 master-0 kubenswrapper[31456]: I0312 21:28:42.868030 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 21:28:43.003203 master-0 kubenswrapper[31456]: I0312 21:28:43.003132 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76f91a0f-2cf3-4800-bd3a-5d2de1d33b26-config-data\") pod \"nova-scheduler-0\" (UID: \"76f91a0f-2cf3-4800-bd3a-5d2de1d33b26\") " pod="openstack/nova-scheduler-0" Mar 12 21:28:43.003396 master-0 kubenswrapper[31456]: I0312 21:28:43.003337 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg4jc\" (UniqueName: \"kubernetes.io/projected/76f91a0f-2cf3-4800-bd3a-5d2de1d33b26-kube-api-access-xg4jc\") pod \"nova-scheduler-0\" (UID: \"76f91a0f-2cf3-4800-bd3a-5d2de1d33b26\") " pod="openstack/nova-scheduler-0" Mar 12 21:28:43.004229 master-0 kubenswrapper[31456]: I0312 21:28:43.004195 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76f91a0f-2cf3-4800-bd3a-5d2de1d33b26-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"76f91a0f-2cf3-4800-bd3a-5d2de1d33b26\") " pod="openstack/nova-scheduler-0" Mar 12 21:28:43.108095 master-0 kubenswrapper[31456]: I0312 21:28:43.108009 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76f91a0f-2cf3-4800-bd3a-5d2de1d33b26-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"76f91a0f-2cf3-4800-bd3a-5d2de1d33b26\") " pod="openstack/nova-scheduler-0" 
Mar 12 21:28:43.108604 master-0 kubenswrapper[31456]: I0312 21:28:43.108217 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76f91a0f-2cf3-4800-bd3a-5d2de1d33b26-config-data\") pod \"nova-scheduler-0\" (UID: \"76f91a0f-2cf3-4800-bd3a-5d2de1d33b26\") " pod="openstack/nova-scheduler-0" Mar 12 21:28:43.110575 master-0 kubenswrapper[31456]: I0312 21:28:43.108352 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg4jc\" (UniqueName: \"kubernetes.io/projected/76f91a0f-2cf3-4800-bd3a-5d2de1d33b26-kube-api-access-xg4jc\") pod \"nova-scheduler-0\" (UID: \"76f91a0f-2cf3-4800-bd3a-5d2de1d33b26\") " pod="openstack/nova-scheduler-0" Mar 12 21:28:43.112653 master-0 kubenswrapper[31456]: I0312 21:28:43.112616 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76f91a0f-2cf3-4800-bd3a-5d2de1d33b26-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"76f91a0f-2cf3-4800-bd3a-5d2de1d33b26\") " pod="openstack/nova-scheduler-0" Mar 12 21:28:43.120536 master-0 kubenswrapper[31456]: I0312 21:28:43.120477 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76f91a0f-2cf3-4800-bd3a-5d2de1d33b26-config-data\") pod \"nova-scheduler-0\" (UID: \"76f91a0f-2cf3-4800-bd3a-5d2de1d33b26\") " pod="openstack/nova-scheduler-0" Mar 12 21:28:43.127980 master-0 kubenswrapper[31456]: I0312 21:28:43.124789 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg4jc\" (UniqueName: \"kubernetes.io/projected/76f91a0f-2cf3-4800-bd3a-5d2de1d33b26-kube-api-access-xg4jc\") pod \"nova-scheduler-0\" (UID: \"76f91a0f-2cf3-4800-bd3a-5d2de1d33b26\") " pod="openstack/nova-scheduler-0" Mar 12 21:28:43.183518 master-0 kubenswrapper[31456]: I0312 21:28:43.183308 31456 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="1e689ffe-338d-4b20-a02e-6819b05cf05d" path="/var/lib/kubelet/pods/1e689ffe-338d-4b20-a02e-6819b05cf05d/volumes" Mar 12 21:28:43.184169 master-0 kubenswrapper[31456]: I0312 21:28:43.184132 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4947333f-6917-4b79-830e-171f682e0309" path="/var/lib/kubelet/pods/4947333f-6917-4b79-830e-171f682e0309/volumes" Mar 12 21:28:43.195682 master-0 kubenswrapper[31456]: I0312 21:28:43.195613 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 21:28:43.731531 master-0 kubenswrapper[31456]: I0312 21:28:43.731411 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1e5d21ea-b20b-4112-8311-c9fc0cc86034","Type":"ContainerStarted","Data":"a8ed240d0e00fe7df697be7603ceb8fd83b9087dafbb968bf1d5eea2f1a5d014"} Mar 12 21:28:43.731531 master-0 kubenswrapper[31456]: I0312 21:28:43.731499 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1e5d21ea-b20b-4112-8311-c9fc0cc86034","Type":"ContainerStarted","Data":"2b0ed075959f2bafac387d2e944d5294e8aa44cc177d3b6f7ab0637dac7fd021"} Mar 12 21:28:43.731531 master-0 kubenswrapper[31456]: I0312 21:28:43.731520 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1e5d21ea-b20b-4112-8311-c9fc0cc86034","Type":"ContainerStarted","Data":"f70b7d50cead0cae1553c35c86d1115983b536410c33c84ae46b4e4d3bf912a3"} Mar 12 21:28:43.776906 master-0 kubenswrapper[31456]: I0312 21:28:43.774982 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 21:28:43.800958 master-0 kubenswrapper[31456]: I0312 21:28:43.800904 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.800886224 podStartE2EDuration="2.800886224s" podCreationTimestamp="2026-03-12 21:28:41 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:28:43.769482093 +0000 UTC m=+1184.844087431" watchObservedRunningTime="2026-03-12 21:28:43.800886224 +0000 UTC m=+1184.875491552" Mar 12 21:28:44.468681 master-0 kubenswrapper[31456]: I0312 21:28:44.468584 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 12 21:28:44.468681 master-0 kubenswrapper[31456]: I0312 21:28:44.468663 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 12 21:28:44.757305 master-0 kubenswrapper[31456]: I0312 21:28:44.757158 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"76f91a0f-2cf3-4800-bd3a-5d2de1d33b26","Type":"ContainerStarted","Data":"3bb29a9cc80ef4a248b699d24ed5eea0f6b4679e718ea2f4f1caa9309dcca1c0"} Mar 12 21:28:44.757305 master-0 kubenswrapper[31456]: I0312 21:28:44.757219 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"76f91a0f-2cf3-4800-bd3a-5d2de1d33b26","Type":"ContainerStarted","Data":"fdb459a16d4771fde289f7d0f13432d428b0399cb81d0dfc55bad7ccd2483905"} Mar 12 21:28:44.780032 master-0 kubenswrapper[31456]: I0312 21:28:44.779955 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.77993884 podStartE2EDuration="2.77993884s" podCreationTimestamp="2026-03-12 21:28:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:28:44.778905864 +0000 UTC m=+1185.853511192" watchObservedRunningTime="2026-03-12 21:28:44.77993884 +0000 UTC m=+1185.854544168" Mar 12 21:28:47.085430 master-0 kubenswrapper[31456]: I0312 21:28:47.085350 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-cell1-conductor-0" Mar 12 21:28:48.196931 master-0 kubenswrapper[31456]: I0312 21:28:48.196887 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 12 21:28:49.469048 master-0 kubenswrapper[31456]: I0312 21:28:49.468759 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 12 21:28:49.469048 master-0 kubenswrapper[31456]: I0312 21:28:49.468846 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 12 21:28:50.486171 master-0 kubenswrapper[31456]: I0312 21:28:50.486050 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d5d71af3-d4c9-4246-b9c2-276fe8433018" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.7:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 21:28:50.486955 master-0 kubenswrapper[31456]: I0312 21:28:50.486056 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d5d71af3-d4c9-4246-b9c2-276fe8433018" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.7:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 21:28:52.223482 master-0 kubenswrapper[31456]: I0312 21:28:52.223435 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 12 21:28:52.224205 master-0 kubenswrapper[31456]: I0312 21:28:52.224189 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 12 21:28:53.196803 master-0 kubenswrapper[31456]: I0312 21:28:53.196736 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 12 21:28:53.240172 master-0 kubenswrapper[31456]: I0312 21:28:53.240124 31456 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 12 21:28:53.309469 master-0 kubenswrapper[31456]: I0312 21:28:53.308739 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1e5d21ea-b20b-4112-8311-c9fc0cc86034" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.1.8:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 21:28:53.309613 master-0 kubenswrapper[31456]: I0312 21:28:53.309534 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1e5d21ea-b20b-4112-8311-c9fc0cc86034" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.1.8:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 21:28:53.932760 master-0 kubenswrapper[31456]: I0312 21:28:53.932690 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 12 21:28:55.863699 master-0 kubenswrapper[31456]: I0312 21:28:55.863657 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 12 21:28:55.908526 master-0 kubenswrapper[31456]: I0312 21:28:55.908437 31456 generic.go:334] "Generic (PLEG): container finished" podID="905901a2-2e45-48ea-bedb-0712d96114ff" containerID="9335ee5c8100e2aa9057913b844f402f415fa9a5bbc86885a7738f88360806f7" exitCode=137 Mar 12 21:28:55.908526 master-0 kubenswrapper[31456]: I0312 21:28:55.908499 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 12 21:28:55.908903 master-0 kubenswrapper[31456]: I0312 21:28:55.908512 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"905901a2-2e45-48ea-bedb-0712d96114ff","Type":"ContainerDied","Data":"9335ee5c8100e2aa9057913b844f402f415fa9a5bbc86885a7738f88360806f7"} Mar 12 21:28:55.908903 master-0 kubenswrapper[31456]: I0312 21:28:55.908625 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"905901a2-2e45-48ea-bedb-0712d96114ff","Type":"ContainerDied","Data":"ab3fe806af8da8cb17e5a160ab7a08d8dea1d3b145b35109ab3aff16be4d5a33"} Mar 12 21:28:55.908903 master-0 kubenswrapper[31456]: I0312 21:28:55.908646 31456 scope.go:117] "RemoveContainer" containerID="9335ee5c8100e2aa9057913b844f402f415fa9a5bbc86885a7738f88360806f7" Mar 12 21:28:55.943640 master-0 kubenswrapper[31456]: I0312 21:28:55.943585 31456 scope.go:117] "RemoveContainer" containerID="9335ee5c8100e2aa9057913b844f402f415fa9a5bbc86885a7738f88360806f7" Mar 12 21:28:55.944305 master-0 kubenswrapper[31456]: E0312 21:28:55.944258 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9335ee5c8100e2aa9057913b844f402f415fa9a5bbc86885a7738f88360806f7\": container with ID starting with 9335ee5c8100e2aa9057913b844f402f415fa9a5bbc86885a7738f88360806f7 not found: ID does not exist" containerID="9335ee5c8100e2aa9057913b844f402f415fa9a5bbc86885a7738f88360806f7" Mar 12 21:28:55.944404 master-0 kubenswrapper[31456]: I0312 21:28:55.944339 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9335ee5c8100e2aa9057913b844f402f415fa9a5bbc86885a7738f88360806f7"} err="failed to get container status \"9335ee5c8100e2aa9057913b844f402f415fa9a5bbc86885a7738f88360806f7\": rpc error: code = NotFound desc = could not find container 
\"9335ee5c8100e2aa9057913b844f402f415fa9a5bbc86885a7738f88360806f7\": container with ID starting with 9335ee5c8100e2aa9057913b844f402f415fa9a5bbc86885a7738f88360806f7 not found: ID does not exist" Mar 12 21:28:55.984131 master-0 kubenswrapper[31456]: I0312 21:28:55.984025 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/905901a2-2e45-48ea-bedb-0712d96114ff-config-data\") pod \"905901a2-2e45-48ea-bedb-0712d96114ff\" (UID: \"905901a2-2e45-48ea-bedb-0712d96114ff\") " Mar 12 21:28:55.984131 master-0 kubenswrapper[31456]: I0312 21:28:55.984131 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klrn7\" (UniqueName: \"kubernetes.io/projected/905901a2-2e45-48ea-bedb-0712d96114ff-kube-api-access-klrn7\") pod \"905901a2-2e45-48ea-bedb-0712d96114ff\" (UID: \"905901a2-2e45-48ea-bedb-0712d96114ff\") " Mar 12 21:28:55.985158 master-0 kubenswrapper[31456]: I0312 21:28:55.985122 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/905901a2-2e45-48ea-bedb-0712d96114ff-combined-ca-bundle\") pod \"905901a2-2e45-48ea-bedb-0712d96114ff\" (UID: \"905901a2-2e45-48ea-bedb-0712d96114ff\") " Mar 12 21:28:55.988155 master-0 kubenswrapper[31456]: I0312 21:28:55.988102 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/905901a2-2e45-48ea-bedb-0712d96114ff-kube-api-access-klrn7" (OuterVolumeSpecName: "kube-api-access-klrn7") pod "905901a2-2e45-48ea-bedb-0712d96114ff" (UID: "905901a2-2e45-48ea-bedb-0712d96114ff"). InnerVolumeSpecName "kube-api-access-klrn7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:28:56.015350 master-0 kubenswrapper[31456]: I0312 21:28:56.015215 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/905901a2-2e45-48ea-bedb-0712d96114ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "905901a2-2e45-48ea-bedb-0712d96114ff" (UID: "905901a2-2e45-48ea-bedb-0712d96114ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:28:56.019850 master-0 kubenswrapper[31456]: I0312 21:28:56.019767 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/905901a2-2e45-48ea-bedb-0712d96114ff-config-data" (OuterVolumeSpecName: "config-data") pod "905901a2-2e45-48ea-bedb-0712d96114ff" (UID: "905901a2-2e45-48ea-bedb-0712d96114ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:28:56.089449 master-0 kubenswrapper[31456]: I0312 21:28:56.089378 31456 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/905901a2-2e45-48ea-bedb-0712d96114ff-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 21:28:56.089570 master-0 kubenswrapper[31456]: I0312 21:28:56.089451 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klrn7\" (UniqueName: \"kubernetes.io/projected/905901a2-2e45-48ea-bedb-0712d96114ff-kube-api-access-klrn7\") on node \"master-0\" DevicePath \"\"" Mar 12 21:28:56.089570 master-0 kubenswrapper[31456]: I0312 21:28:56.089478 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/905901a2-2e45-48ea-bedb-0712d96114ff-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:28:56.295220 master-0 kubenswrapper[31456]: I0312 21:28:56.295062 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-novncproxy-0"]
Mar 12 21:28:56.313624 master-0 kubenswrapper[31456]: I0312 21:28:56.313272 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 12 21:28:56.328935 master-0 kubenswrapper[31456]: I0312 21:28:56.328803 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 12 21:28:56.329703 master-0 kubenswrapper[31456]: E0312 21:28:56.329646 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="905901a2-2e45-48ea-bedb-0712d96114ff" containerName="nova-cell1-novncproxy-novncproxy"
Mar 12 21:28:56.329789 master-0 kubenswrapper[31456]: I0312 21:28:56.329705 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="905901a2-2e45-48ea-bedb-0712d96114ff" containerName="nova-cell1-novncproxy-novncproxy"
Mar 12 21:28:56.332710 master-0 kubenswrapper[31456]: I0312 21:28:56.332652 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="905901a2-2e45-48ea-bedb-0712d96114ff" containerName="nova-cell1-novncproxy-novncproxy"
Mar 12 21:28:56.334076 master-0 kubenswrapper[31456]: I0312 21:28:56.334025 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 12 21:28:56.336695 master-0 kubenswrapper[31456]: I0312 21:28:56.336646 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Mar 12 21:28:56.340329 master-0 kubenswrapper[31456]: I0312 21:28:56.340266 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 12 21:28:56.346486 master-0 kubenswrapper[31456]: I0312 21:28:56.345510 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Mar 12 21:28:56.346486 master-0 kubenswrapper[31456]: I0312 21:28:56.345680 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Mar 12 21:28:56.399672 master-0 kubenswrapper[31456]: I0312 21:28:56.399606 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8446bb87-ab49-4830-85e8-54f9ee4384cb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8446bb87-ab49-4830-85e8-54f9ee4384cb\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 21:28:56.399998 master-0 kubenswrapper[31456]: I0312 21:28:56.399769 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8446bb87-ab49-4830-85e8-54f9ee4384cb-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8446bb87-ab49-4830-85e8-54f9ee4384cb\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 21:28:56.399998 master-0 kubenswrapper[31456]: I0312 21:28:56.399832 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8446bb87-ab49-4830-85e8-54f9ee4384cb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8446bb87-ab49-4830-85e8-54f9ee4384cb\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 21:28:56.399998 master-0 kubenswrapper[31456]: I0312 21:28:56.399898 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jjxd\" (UniqueName: \"kubernetes.io/projected/8446bb87-ab49-4830-85e8-54f9ee4384cb-kube-api-access-7jjxd\") pod \"nova-cell1-novncproxy-0\" (UID: \"8446bb87-ab49-4830-85e8-54f9ee4384cb\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 21:28:56.399998 master-0 kubenswrapper[31456]: I0312 21:28:56.399971 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8446bb87-ab49-4830-85e8-54f9ee4384cb-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8446bb87-ab49-4830-85e8-54f9ee4384cb\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 21:28:56.502565 master-0 kubenswrapper[31456]: I0312 21:28:56.502448 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8446bb87-ab49-4830-85e8-54f9ee4384cb-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8446bb87-ab49-4830-85e8-54f9ee4384cb\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 21:28:56.502565 master-0 kubenswrapper[31456]: I0312 21:28:56.502554 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8446bb87-ab49-4830-85e8-54f9ee4384cb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8446bb87-ab49-4830-85e8-54f9ee4384cb\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 21:28:56.502565 master-0 kubenswrapper[31456]: I0312 21:28:56.502590 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jjxd\" (UniqueName: \"kubernetes.io/projected/8446bb87-ab49-4830-85e8-54f9ee4384cb-kube-api-access-7jjxd\") pod \"nova-cell1-novncproxy-0\" (UID: \"8446bb87-ab49-4830-85e8-54f9ee4384cb\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 21:28:56.503229 master-0 kubenswrapper[31456]: I0312 21:28:56.502990 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8446bb87-ab49-4830-85e8-54f9ee4384cb-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8446bb87-ab49-4830-85e8-54f9ee4384cb\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 21:28:56.503229 master-0 kubenswrapper[31456]: I0312 21:28:56.503088 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8446bb87-ab49-4830-85e8-54f9ee4384cb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8446bb87-ab49-4830-85e8-54f9ee4384cb\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 21:28:56.507322 master-0 kubenswrapper[31456]: I0312 21:28:56.507281 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8446bb87-ab49-4830-85e8-54f9ee4384cb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8446bb87-ab49-4830-85e8-54f9ee4384cb\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 21:28:56.507491 master-0 kubenswrapper[31456]: I0312 21:28:56.507319 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8446bb87-ab49-4830-85e8-54f9ee4384cb-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8446bb87-ab49-4830-85e8-54f9ee4384cb\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 21:28:56.508593 master-0 kubenswrapper[31456]: I0312 21:28:56.508546 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8446bb87-ab49-4830-85e8-54f9ee4384cb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8446bb87-ab49-4830-85e8-54f9ee4384cb\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 21:28:56.508846 master-0 kubenswrapper[31456]: I0312 21:28:56.508751 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8446bb87-ab49-4830-85e8-54f9ee4384cb-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8446bb87-ab49-4830-85e8-54f9ee4384cb\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 21:28:56.518985 master-0 kubenswrapper[31456]: I0312 21:28:56.518125 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jjxd\" (UniqueName: \"kubernetes.io/projected/8446bb87-ab49-4830-85e8-54f9ee4384cb-kube-api-access-7jjxd\") pod \"nova-cell1-novncproxy-0\" (UID: \"8446bb87-ab49-4830-85e8-54f9ee4384cb\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 21:28:56.732295 master-0 kubenswrapper[31456]: I0312 21:28:56.732230 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 12 21:28:57.195434 master-0 kubenswrapper[31456]: I0312 21:28:57.195359 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="905901a2-2e45-48ea-bedb-0712d96114ff" path="/var/lib/kubelet/pods/905901a2-2e45-48ea-bedb-0712d96114ff/volumes"
Mar 12 21:28:57.362714 master-0 kubenswrapper[31456]: I0312 21:28:57.362640 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 12 21:28:57.955076 master-0 kubenswrapper[31456]: I0312 21:28:57.954933 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8446bb87-ab49-4830-85e8-54f9ee4384cb","Type":"ContainerStarted","Data":"3d210ee13d859d65b6149b2a03f67bca98e198c54d19ee0319fb775e3d7506d4"}
Mar 12 21:28:57.955076 master-0 kubenswrapper[31456]: I0312 21:28:57.954986 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8446bb87-ab49-4830-85e8-54f9ee4384cb","Type":"ContainerStarted","Data":"e0af35b0fdf17d526e5efb025a9eb31384521826ceee243888728ad9fa04bb71"}
Mar 12 21:28:57.985230 master-0 kubenswrapper[31456]: I0312 21:28:57.985104 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.985076003 podStartE2EDuration="1.985076003s" podCreationTimestamp="2026-03-12 21:28:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:28:57.981476016 +0000 UTC m=+1199.056081344" watchObservedRunningTime="2026-03-12 21:28:57.985076003 +0000 UTC m=+1199.059681371"
Mar 12 21:28:59.477476 master-0 kubenswrapper[31456]: I0312 21:28:59.477412 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Mar 12 21:28:59.494103 master-0 kubenswrapper[31456]: I0312 21:28:59.493984 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Mar 12 21:28:59.499054 master-0 kubenswrapper[31456]: I0312 21:28:59.498990 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Mar 12 21:28:59.996517 master-0 kubenswrapper[31456]: I0312 21:28:59.996399 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Mar 12 21:29:01.733402 master-0 kubenswrapper[31456]: I0312 21:29:01.733207 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Mar 12 21:29:02.227022 master-0 kubenswrapper[31456]: I0312 21:29:02.226951 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Mar 12 21:29:02.228242 master-0 kubenswrapper[31456]: I0312 21:29:02.228190 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Mar 12 21:29:02.229969 master-0 kubenswrapper[31456]: I0312 21:29:02.229869 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Mar 12 21:29:02.233263 master-0 kubenswrapper[31456]: I0312 21:29:02.233193 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Mar 12 21:29:03.033547 master-0 kubenswrapper[31456]: I0312 21:29:03.033480 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Mar 12 21:29:03.044068 master-0 kubenswrapper[31456]: I0312 21:29:03.042969 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Mar 12 21:29:03.348819 master-0 kubenswrapper[31456]: I0312 21:29:03.348735 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bb4c5b697-hrp87"]
Mar 12 21:29:03.354024 master-0 kubenswrapper[31456]: I0312 21:29:03.353977 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bb4c5b697-hrp87"
Mar 12 21:29:03.419470 master-0 kubenswrapper[31456]: I0312 21:29:03.419328 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bb4c5b697-hrp87"]
Mar 12 21:29:03.444494 master-0 kubenswrapper[31456]: I0312 21:29:03.444170 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/054a42b8-c2cf-42fe-8257-07620f3de378-config\") pod \"dnsmasq-dns-5bb4c5b697-hrp87\" (UID: \"054a42b8-c2cf-42fe-8257-07620f3de378\") " pod="openstack/dnsmasq-dns-5bb4c5b697-hrp87"
Mar 12 21:29:03.444494 master-0 kubenswrapper[31456]: I0312 21:29:03.444384 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/054a42b8-c2cf-42fe-8257-07620f3de378-dns-svc\") pod \"dnsmasq-dns-5bb4c5b697-hrp87\" (UID: \"054a42b8-c2cf-42fe-8257-07620f3de378\") " pod="openstack/dnsmasq-dns-5bb4c5b697-hrp87"
Mar 12 21:29:03.444494 master-0 kubenswrapper[31456]: I0312 21:29:03.444421 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94rth\" (UniqueName: \"kubernetes.io/projected/054a42b8-c2cf-42fe-8257-07620f3de378-kube-api-access-94rth\") pod \"dnsmasq-dns-5bb4c5b697-hrp87\" (UID: \"054a42b8-c2cf-42fe-8257-07620f3de378\") " pod="openstack/dnsmasq-dns-5bb4c5b697-hrp87"
Mar 12 21:29:03.444840 master-0 kubenswrapper[31456]: I0312 21:29:03.444551 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/054a42b8-c2cf-42fe-8257-07620f3de378-dns-swift-storage-0\") pod \"dnsmasq-dns-5bb4c5b697-hrp87\" (UID: \"054a42b8-c2cf-42fe-8257-07620f3de378\") " pod="openstack/dnsmasq-dns-5bb4c5b697-hrp87"
Mar 12 21:29:03.444840 master-0 kubenswrapper[31456]: I0312 21:29:03.444581 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/054a42b8-c2cf-42fe-8257-07620f3de378-ovsdbserver-sb\") pod \"dnsmasq-dns-5bb4c5b697-hrp87\" (UID: \"054a42b8-c2cf-42fe-8257-07620f3de378\") " pod="openstack/dnsmasq-dns-5bb4c5b697-hrp87"
Mar 12 21:29:03.444840 master-0 kubenswrapper[31456]: I0312 21:29:03.444725 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/054a42b8-c2cf-42fe-8257-07620f3de378-ovsdbserver-nb\") pod \"dnsmasq-dns-5bb4c5b697-hrp87\" (UID: \"054a42b8-c2cf-42fe-8257-07620f3de378\") " pod="openstack/dnsmasq-dns-5bb4c5b697-hrp87"
Mar 12 21:29:03.547975 master-0 kubenswrapper[31456]: I0312 21:29:03.547887 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/054a42b8-c2cf-42fe-8257-07620f3de378-dns-swift-storage-0\") pod \"dnsmasq-dns-5bb4c5b697-hrp87\" (UID: \"054a42b8-c2cf-42fe-8257-07620f3de378\") " pod="openstack/dnsmasq-dns-5bb4c5b697-hrp87"
Mar 12 21:29:03.547975 master-0 kubenswrapper[31456]: I0312 21:29:03.547992 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/054a42b8-c2cf-42fe-8257-07620f3de378-ovsdbserver-sb\") pod \"dnsmasq-dns-5bb4c5b697-hrp87\" (UID: \"054a42b8-c2cf-42fe-8257-07620f3de378\") " pod="openstack/dnsmasq-dns-5bb4c5b697-hrp87"
Mar 12 21:29:03.548262 master-0 kubenswrapper[31456]: I0312 21:29:03.548125 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/054a42b8-c2cf-42fe-8257-07620f3de378-ovsdbserver-nb\") pod \"dnsmasq-dns-5bb4c5b697-hrp87\" (UID: \"054a42b8-c2cf-42fe-8257-07620f3de378\") " pod="openstack/dnsmasq-dns-5bb4c5b697-hrp87"
Mar 12 21:29:03.548262 master-0 kubenswrapper[31456]: I0312 21:29:03.548212 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/054a42b8-c2cf-42fe-8257-07620f3de378-config\") pod \"dnsmasq-dns-5bb4c5b697-hrp87\" (UID: \"054a42b8-c2cf-42fe-8257-07620f3de378\") " pod="openstack/dnsmasq-dns-5bb4c5b697-hrp87"
Mar 12 21:29:03.548794 master-0 kubenswrapper[31456]: I0312 21:29:03.548765 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/054a42b8-c2cf-42fe-8257-07620f3de378-dns-svc\") pod \"dnsmasq-dns-5bb4c5b697-hrp87\" (UID: \"054a42b8-c2cf-42fe-8257-07620f3de378\") " pod="openstack/dnsmasq-dns-5bb4c5b697-hrp87"
Mar 12 21:29:03.548958 master-0 kubenswrapper[31456]: I0312 21:29:03.548836 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94rth\" (UniqueName: \"kubernetes.io/projected/054a42b8-c2cf-42fe-8257-07620f3de378-kube-api-access-94rth\") pod \"dnsmasq-dns-5bb4c5b697-hrp87\" (UID: \"054a42b8-c2cf-42fe-8257-07620f3de378\") " pod="openstack/dnsmasq-dns-5bb4c5b697-hrp87"
Mar 12 21:29:03.549158 master-0 kubenswrapper[31456]: I0312 21:29:03.549108 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/054a42b8-c2cf-42fe-8257-07620f3de378-dns-swift-storage-0\") pod \"dnsmasq-dns-5bb4c5b697-hrp87\" (UID: \"054a42b8-c2cf-42fe-8257-07620f3de378\") " pod="openstack/dnsmasq-dns-5bb4c5b697-hrp87"
Mar 12 21:29:03.549960 master-0 kubenswrapper[31456]: I0312 21:29:03.549931 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/054a42b8-c2cf-42fe-8257-07620f3de378-ovsdbserver-sb\") pod \"dnsmasq-dns-5bb4c5b697-hrp87\" (UID: \"054a42b8-c2cf-42fe-8257-07620f3de378\") " pod="openstack/dnsmasq-dns-5bb4c5b697-hrp87"
Mar 12 21:29:03.550372 master-0 kubenswrapper[31456]: I0312 21:29:03.550338 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/054a42b8-c2cf-42fe-8257-07620f3de378-config\") pod \"dnsmasq-dns-5bb4c5b697-hrp87\" (UID: \"054a42b8-c2cf-42fe-8257-07620f3de378\") " pod="openstack/dnsmasq-dns-5bb4c5b697-hrp87"
Mar 12 21:29:03.550468 master-0 kubenswrapper[31456]: I0312 21:29:03.550338 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/054a42b8-c2cf-42fe-8257-07620f3de378-dns-svc\") pod \"dnsmasq-dns-5bb4c5b697-hrp87\" (UID: \"054a42b8-c2cf-42fe-8257-07620f3de378\") " pod="openstack/dnsmasq-dns-5bb4c5b697-hrp87"
Mar 12 21:29:03.550677 master-0 kubenswrapper[31456]: I0312 21:29:03.550644 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/054a42b8-c2cf-42fe-8257-07620f3de378-ovsdbserver-nb\") pod \"dnsmasq-dns-5bb4c5b697-hrp87\" (UID: \"054a42b8-c2cf-42fe-8257-07620f3de378\") " pod="openstack/dnsmasq-dns-5bb4c5b697-hrp87"
Mar 12 21:29:03.568275 master-0 kubenswrapper[31456]: I0312 21:29:03.568114 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94rth\" (UniqueName: \"kubernetes.io/projected/054a42b8-c2cf-42fe-8257-07620f3de378-kube-api-access-94rth\") pod \"dnsmasq-dns-5bb4c5b697-hrp87\" (UID: \"054a42b8-c2cf-42fe-8257-07620f3de378\") " pod="openstack/dnsmasq-dns-5bb4c5b697-hrp87"
Mar 12 21:29:03.708085 master-0 kubenswrapper[31456]: I0312 21:29:03.707931 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bb4c5b697-hrp87"
Mar 12 21:29:04.175948 master-0 kubenswrapper[31456]: I0312 21:29:04.165536 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bb4c5b697-hrp87"]
Mar 12 21:29:05.162417 master-0 kubenswrapper[31456]: I0312 21:29:05.162041 31456 generic.go:334] "Generic (PLEG): container finished" podID="054a42b8-c2cf-42fe-8257-07620f3de378" containerID="9f508314fc667c851bb321a57fec8624e6a0b6fab1777af42f7f58cd67e32c32" exitCode=0
Mar 12 21:29:05.162742 master-0 kubenswrapper[31456]: I0312 21:29:05.162295 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bb4c5b697-hrp87" event={"ID":"054a42b8-c2cf-42fe-8257-07620f3de378","Type":"ContainerDied","Data":"9f508314fc667c851bb321a57fec8624e6a0b6fab1777af42f7f58cd67e32c32"}
Mar 12 21:29:05.162865 master-0 kubenswrapper[31456]: I0312 21:29:05.162740 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bb4c5b697-hrp87" event={"ID":"054a42b8-c2cf-42fe-8257-07620f3de378","Type":"ContainerStarted","Data":"d6be3ab023b8eb527df0e1dd393ae10f423880ac30968c6569f7481ae1be22d2"}
Mar 12 21:29:06.182531 master-0 kubenswrapper[31456]: I0312 21:29:06.182440 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bb4c5b697-hrp87" event={"ID":"054a42b8-c2cf-42fe-8257-07620f3de378","Type":"ContainerStarted","Data":"7344fe9501d8520c03ebd1c9372ccb9e3ee32da389973990b83ebdfa73f834eb"}
Mar 12 21:29:06.184491 master-0 kubenswrapper[31456]: I0312 21:29:06.184440 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bb4c5b697-hrp87"
Mar 12 21:29:06.219536 master-0 kubenswrapper[31456]: I0312 21:29:06.219362 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bb4c5b697-hrp87" podStartSLOduration=3.219328906 podStartE2EDuration="3.219328906s" podCreationTimestamp="2026-03-12 21:29:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:29:06.203279398 +0000 UTC m=+1207.277884746" watchObservedRunningTime="2026-03-12 21:29:06.219328906 +0000 UTC m=+1207.293934244"
Mar 12 21:29:06.415239 master-0 kubenswrapper[31456]: I0312 21:29:06.415143 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 12 21:29:06.415529 master-0 kubenswrapper[31456]: I0312 21:29:06.415462 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1e5d21ea-b20b-4112-8311-c9fc0cc86034" containerName="nova-api-log" containerID="cri-o://2b0ed075959f2bafac387d2e944d5294e8aa44cc177d3b6f7ab0637dac7fd021" gracePeriod=30
Mar 12 21:29:06.415788 master-0 kubenswrapper[31456]: I0312 21:29:06.415691 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1e5d21ea-b20b-4112-8311-c9fc0cc86034" containerName="nova-api-api" containerID="cri-o://a8ed240d0e00fe7df697be7603ceb8fd83b9087dafbb968bf1d5eea2f1a5d014" gracePeriod=30
Mar 12 21:29:06.733363 master-0 kubenswrapper[31456]: I0312 21:29:06.733236 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Mar 12 21:29:06.756368 master-0 kubenswrapper[31456]: I0312 21:29:06.756308 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Mar 12 21:29:07.195203 master-0 kubenswrapper[31456]: I0312 21:29:07.195108 31456 generic.go:334] "Generic (PLEG): container finished" podID="1e5d21ea-b20b-4112-8311-c9fc0cc86034" containerID="2b0ed075959f2bafac387d2e944d5294e8aa44cc177d3b6f7ab0637dac7fd021" exitCode=143
Mar 12 21:29:07.196052 master-0 kubenswrapper[31456]: I0312 21:29:07.195963 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1e5d21ea-b20b-4112-8311-c9fc0cc86034","Type":"ContainerDied","Data":"2b0ed075959f2bafac387d2e944d5294e8aa44cc177d3b6f7ab0637dac7fd021"}
Mar 12 21:29:07.260134 master-0 kubenswrapper[31456]: I0312 21:29:07.260053 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Mar 12 21:29:07.637832 master-0 kubenswrapper[31456]: I0312 21:29:07.631882 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-5sjm6"]
Mar 12 21:29:07.637832 master-0 kubenswrapper[31456]: I0312 21:29:07.634282 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5sjm6"
Mar 12 21:29:07.648940 master-0 kubenswrapper[31456]: I0312 21:29:07.647287 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Mar 12 21:29:07.648940 master-0 kubenswrapper[31456]: I0312 21:29:07.648144 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Mar 12 21:29:07.666269 master-0 kubenswrapper[31456]: I0312 21:29:07.666205 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-host-discover-jmhq9"]
Mar 12 21:29:07.668474 master-0 kubenswrapper[31456]: I0312 21:29:07.668442 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-jmhq9"
Mar 12 21:29:07.681229 master-0 kubenswrapper[31456]: I0312 21:29:07.681165 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e015d284-5458-4e15-aa69-5a3dcc87352c-combined-ca-bundle\") pod \"nova-cell1-host-discover-jmhq9\" (UID: \"e015d284-5458-4e15-aa69-5a3dcc87352c\") " pod="openstack/nova-cell1-host-discover-jmhq9"
Mar 12 21:29:07.681398 master-0 kubenswrapper[31456]: I0312 21:29:07.681295 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e015d284-5458-4e15-aa69-5a3dcc87352c-config-data\") pod \"nova-cell1-host-discover-jmhq9\" (UID: \"e015d284-5458-4e15-aa69-5a3dcc87352c\") " pod="openstack/nova-cell1-host-discover-jmhq9"
Mar 12 21:29:07.681398 master-0 kubenswrapper[31456]: I0312 21:29:07.681361 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwtlg\" (UniqueName: \"kubernetes.io/projected/fdcffde0-88dd-46b8-ab9d-224e83dd4a08-kube-api-access-mwtlg\") pod \"nova-cell1-cell-mapping-5sjm6\" (UID: \"fdcffde0-88dd-46b8-ab9d-224e83dd4a08\") " pod="openstack/nova-cell1-cell-mapping-5sjm6"
Mar 12 21:29:07.681473 master-0 kubenswrapper[31456]: I0312 21:29:07.681403 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdcffde0-88dd-46b8-ab9d-224e83dd4a08-scripts\") pod \"nova-cell1-cell-mapping-5sjm6\" (UID: \"fdcffde0-88dd-46b8-ab9d-224e83dd4a08\") " pod="openstack/nova-cell1-cell-mapping-5sjm6"
Mar 12 21:29:07.681473 master-0 kubenswrapper[31456]: I0312 21:29:07.681447 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdcffde0-88dd-46b8-ab9d-224e83dd4a08-config-data\") pod \"nova-cell1-cell-mapping-5sjm6\" (UID: \"fdcffde0-88dd-46b8-ab9d-224e83dd4a08\") " pod="openstack/nova-cell1-cell-mapping-5sjm6"
Mar 12 21:29:07.681473 master-0 kubenswrapper[31456]: I0312 21:29:07.681467 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdcffde0-88dd-46b8-ab9d-224e83dd4a08-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5sjm6\" (UID: \"fdcffde0-88dd-46b8-ab9d-224e83dd4a08\") " pod="openstack/nova-cell1-cell-mapping-5sjm6"
Mar 12 21:29:07.681567 master-0 kubenswrapper[31456]: I0312 21:29:07.681503 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj8jp\" (UniqueName: \"kubernetes.io/projected/e015d284-5458-4e15-aa69-5a3dcc87352c-kube-api-access-xj8jp\") pod \"nova-cell1-host-discover-jmhq9\" (UID: \"e015d284-5458-4e15-aa69-5a3dcc87352c\") " pod="openstack/nova-cell1-host-discover-jmhq9"
Mar 12 21:29:07.681567 master-0 kubenswrapper[31456]: I0312 21:29:07.681522 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e015d284-5458-4e15-aa69-5a3dcc87352c-scripts\") pod \"nova-cell1-host-discover-jmhq9\" (UID: \"e015d284-5458-4e15-aa69-5a3dcc87352c\") " pod="openstack/nova-cell1-host-discover-jmhq9"
Mar 12 21:29:07.693872 master-0 kubenswrapper[31456]: I0312 21:29:07.693525 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5sjm6"]
Mar 12 21:29:07.708522 master-0 kubenswrapper[31456]: I0312 21:29:07.708251 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-jmhq9"]
Mar 12 21:29:07.782726 master-0 kubenswrapper[31456]: I0312 21:29:07.782669 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e015d284-5458-4e15-aa69-5a3dcc87352c-combined-ca-bundle\") pod \"nova-cell1-host-discover-jmhq9\" (UID: \"e015d284-5458-4e15-aa69-5a3dcc87352c\") " pod="openstack/nova-cell1-host-discover-jmhq9"
Mar 12 21:29:07.783081 master-0 kubenswrapper[31456]: I0312 21:29:07.783064 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e015d284-5458-4e15-aa69-5a3dcc87352c-config-data\") pod \"nova-cell1-host-discover-jmhq9\" (UID: \"e015d284-5458-4e15-aa69-5a3dcc87352c\") " pod="openstack/nova-cell1-host-discover-jmhq9"
Mar 12 21:29:07.783216 master-0 kubenswrapper[31456]: I0312 21:29:07.783201 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwtlg\" (UniqueName: \"kubernetes.io/projected/fdcffde0-88dd-46b8-ab9d-224e83dd4a08-kube-api-access-mwtlg\") pod \"nova-cell1-cell-mapping-5sjm6\" (UID: \"fdcffde0-88dd-46b8-ab9d-224e83dd4a08\") " pod="openstack/nova-cell1-cell-mapping-5sjm6"
Mar 12 21:29:07.783321 master-0 kubenswrapper[31456]: I0312 21:29:07.783307 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdcffde0-88dd-46b8-ab9d-224e83dd4a08-scripts\") pod \"nova-cell1-cell-mapping-5sjm6\" (UID: \"fdcffde0-88dd-46b8-ab9d-224e83dd4a08\") " pod="openstack/nova-cell1-cell-mapping-5sjm6"
Mar 12 21:29:07.783434 master-0 kubenswrapper[31456]: I0312 21:29:07.783419 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdcffde0-88dd-46b8-ab9d-224e83dd4a08-config-data\") pod \"nova-cell1-cell-mapping-5sjm6\" (UID: \"fdcffde0-88dd-46b8-ab9d-224e83dd4a08\") " pod="openstack/nova-cell1-cell-mapping-5sjm6"
Mar 12 21:29:07.783511 master-0 kubenswrapper[31456]: I0312 21:29:07.783497 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdcffde0-88dd-46b8-ab9d-224e83dd4a08-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5sjm6\" (UID: \"fdcffde0-88dd-46b8-ab9d-224e83dd4a08\") " pod="openstack/nova-cell1-cell-mapping-5sjm6"
Mar 12 21:29:07.783613 master-0 kubenswrapper[31456]: I0312 21:29:07.783598 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj8jp\" (UniqueName: \"kubernetes.io/projected/e015d284-5458-4e15-aa69-5a3dcc87352c-kube-api-access-xj8jp\") pod \"nova-cell1-host-discover-jmhq9\" (UID: \"e015d284-5458-4e15-aa69-5a3dcc87352c\") " pod="openstack/nova-cell1-host-discover-jmhq9"
Mar 12 21:29:07.783695 master-0 kubenswrapper[31456]: I0312 21:29:07.783682 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e015d284-5458-4e15-aa69-5a3dcc87352c-scripts\") pod \"nova-cell1-host-discover-jmhq9\" (UID: \"e015d284-5458-4e15-aa69-5a3dcc87352c\") " pod="openstack/nova-cell1-host-discover-jmhq9"
Mar 12 21:29:07.786337 master-0 kubenswrapper[31456]: I0312 21:29:07.786302 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e015d284-5458-4e15-aa69-5a3dcc87352c-combined-ca-bundle\") pod \"nova-cell1-host-discover-jmhq9\" (UID: \"e015d284-5458-4e15-aa69-5a3dcc87352c\") " pod="openstack/nova-cell1-host-discover-jmhq9"
Mar 12 21:29:07.786715 master-0 kubenswrapper[31456]: I0312 21:29:07.786679 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdcffde0-88dd-46b8-ab9d-224e83dd4a08-scripts\") pod \"nova-cell1-cell-mapping-5sjm6\" (UID: \"fdcffde0-88dd-46b8-ab9d-224e83dd4a08\") " pod="openstack/nova-cell1-cell-mapping-5sjm6"
Mar 12 21:29:07.788042 master-0 kubenswrapper[31456]: I0312 21:29:07.787986 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e015d284-5458-4e15-aa69-5a3dcc87352c-scripts\") pod \"nova-cell1-host-discover-jmhq9\" (UID: \"e015d284-5458-4e15-aa69-5a3dcc87352c\") " pod="openstack/nova-cell1-host-discover-jmhq9"
Mar 12 21:29:07.789468 master-0 kubenswrapper[31456]: I0312 21:29:07.789422 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdcffde0-88dd-46b8-ab9d-224e83dd4a08-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5sjm6\" (UID: \"fdcffde0-88dd-46b8-ab9d-224e83dd4a08\") " pod="openstack/nova-cell1-cell-mapping-5sjm6"
Mar 12 21:29:07.789643 master-0 kubenswrapper[31456]: I0312 21:29:07.789608 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e015d284-5458-4e15-aa69-5a3dcc87352c-config-data\") pod \"nova-cell1-host-discover-jmhq9\" (UID: \"e015d284-5458-4e15-aa69-5a3dcc87352c\") " pod="openstack/nova-cell1-host-discover-jmhq9"
Mar 12 21:29:07.794392 master-0 kubenswrapper[31456]: I0312 21:29:07.794320 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdcffde0-88dd-46b8-ab9d-224e83dd4a08-config-data\") pod \"nova-cell1-cell-mapping-5sjm6\" (UID: \"fdcffde0-88dd-46b8-ab9d-224e83dd4a08\") " pod="openstack/nova-cell1-cell-mapping-5sjm6"
Mar 12 21:29:07.809357 master-0 kubenswrapper[31456]: I0312 21:29:07.809310 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj8jp\" (UniqueName: \"kubernetes.io/projected/e015d284-5458-4e15-aa69-5a3dcc87352c-kube-api-access-xj8jp\") pod \"nova-cell1-host-discover-jmhq9\" (UID: \"e015d284-5458-4e15-aa69-5a3dcc87352c\") " pod="openstack/nova-cell1-host-discover-jmhq9"
Mar 12 21:29:07.809772 master-0 kubenswrapper[31456]: I0312 21:29:07.809725 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwtlg\" (UniqueName: \"kubernetes.io/projected/fdcffde0-88dd-46b8-ab9d-224e83dd4a08-kube-api-access-mwtlg\") pod \"nova-cell1-cell-mapping-5sjm6\" (UID: \"fdcffde0-88dd-46b8-ab9d-224e83dd4a08\") " pod="openstack/nova-cell1-cell-mapping-5sjm6"
Mar 12 21:29:07.995941 master-0 kubenswrapper[31456]: I0312 21:29:07.995629 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5sjm6"
Mar 12 21:29:08.011692 master-0 kubenswrapper[31456]: I0312 21:29:08.010006 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-jmhq9"
Mar 12 21:29:08.578911 master-0 kubenswrapper[31456]: I0312 21:29:08.575869 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5sjm6"]
Mar 12 21:29:08.579588 master-0 kubenswrapper[31456]: W0312 21:29:08.579005 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdcffde0_88dd_46b8_ab9d_224e83dd4a08.slice/crio-478f6d9b58b4d39b1d7fa49cf104a8bbea0036626824e57b87d1dbaf1bd81e04 WatchSource:0}: Error finding container 478f6d9b58b4d39b1d7fa49cf104a8bbea0036626824e57b87d1dbaf1bd81e04: Status 404 returned error can't find the container with id 478f6d9b58b4d39b1d7fa49cf104a8bbea0036626824e57b87d1dbaf1bd81e04
Mar 12 21:29:08.719573 master-0 kubenswrapper[31456]: I0312 21:29:08.719499 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-jmhq9"]
Mar 12 21:29:09.235082 master-0 kubenswrapper[31456]: I0312 21:29:09.234932 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-jmhq9" event={"ID":"e015d284-5458-4e15-aa69-5a3dcc87352c","Type":"ContainerStarted","Data":"9874ffdc3019436ca7eec6399f744ac626c08f5049181744720f39d119ff32cf"}
Mar 12 21:29:09.235082 master-0 kubenswrapper[31456]: I0312 21:29:09.235029 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-jmhq9" event={"ID":"e015d284-5458-4e15-aa69-5a3dcc87352c","Type":"ContainerStarted","Data":"ad56d245b32cd495a227aa3583a4f9e799b02aa25d0dbbf19caf9b17c64a0a96"}
Mar 12 21:29:09.239689 master-0 kubenswrapper[31456]: I0312 21:29:09.239615 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5sjm6" event={"ID":"fdcffde0-88dd-46b8-ab9d-224e83dd4a08","Type":"ContainerStarted","Data":"904cbbf00798101e94702bfdae155878aa1d030bbb37b4671ef94695e647aa14"}
Mar 12 21:29:09.239856 master-0 kubenswrapper[31456]: I0312 21:29:09.239691 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5sjm6" event={"ID":"fdcffde0-88dd-46b8-ab9d-224e83dd4a08","Type":"ContainerStarted","Data":"478f6d9b58b4d39b1d7fa49cf104a8bbea0036626824e57b87d1dbaf1bd81e04"}
Mar 12 21:29:09.292325 master-0 kubenswrapper[31456]: I0312 21:29:09.292217 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-5sjm6" podStartSLOduration=2.292189708 podStartE2EDuration="2.292189708s" podCreationTimestamp="2026-03-12 21:29:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:29:09.275619947 +0000 UTC m=+1210.350225275" watchObservedRunningTime="2026-03-12 21:29:09.292189708 +0000 UTC m=+1210.366795046"
Mar 12 21:29:10.272869 master-0 kubenswrapper[31456]: I0312 21:29:10.271277 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 12 21:29:10.285563 master-0 kubenswrapper[31456]: I0312 21:29:10.285525 31456 generic.go:334] "Generic (PLEG): container finished" podID="1e5d21ea-b20b-4112-8311-c9fc0cc86034" containerID="a8ed240d0e00fe7df697be7603ceb8fd83b9087dafbb968bf1d5eea2f1a5d014" exitCode=0
Mar 12 21:29:10.285964 master-0 kubenswrapper[31456]: I0312 21:29:10.285922 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1e5d21ea-b20b-4112-8311-c9fc0cc86034","Type":"ContainerDied","Data":"a8ed240d0e00fe7df697be7603ceb8fd83b9087dafbb968bf1d5eea2f1a5d014"}
Mar 12 21:29:10.286039 master-0 kubenswrapper[31456]: I0312 21:29:10.285956 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 12 21:29:10.286039 master-0 kubenswrapper[31456]: I0312 21:29:10.285989 31456 scope.go:117] "RemoveContainer" containerID="a8ed240d0e00fe7df697be7603ceb8fd83b9087dafbb968bf1d5eea2f1a5d014"
Mar 12 21:29:10.286106 master-0 kubenswrapper[31456]: I0312 21:29:10.285977 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1e5d21ea-b20b-4112-8311-c9fc0cc86034","Type":"ContainerDied","Data":"f70b7d50cead0cae1553c35c86d1115983b536410c33c84ae46b4e4d3bf912a3"}
Mar 12 21:29:10.300903 master-0 kubenswrapper[31456]: I0312 21:29:10.300790 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-host-discover-jmhq9" podStartSLOduration=3.3007738890000002 podStartE2EDuration="3.300773889s" podCreationTimestamp="2026-03-12 21:29:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:29:09.300015457 +0000 UTC m=+1210.374620785" watchObservedRunningTime="2026-03-12 21:29:10.300773889 +0000 UTC m=+1211.375379217"
Mar 12 21:29:10.331607 master-0 kubenswrapper[31456]: I0312 21:29:10.331551 31456 
scope.go:117] "RemoveContainer" containerID="2b0ed075959f2bafac387d2e944d5294e8aa44cc177d3b6f7ab0637dac7fd021" Mar 12 21:29:10.381646 master-0 kubenswrapper[31456]: I0312 21:29:10.376099 31456 scope.go:117] "RemoveContainer" containerID="a8ed240d0e00fe7df697be7603ceb8fd83b9087dafbb968bf1d5eea2f1a5d014" Mar 12 21:29:10.383118 master-0 kubenswrapper[31456]: E0312 21:29:10.382775 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8ed240d0e00fe7df697be7603ceb8fd83b9087dafbb968bf1d5eea2f1a5d014\": container with ID starting with a8ed240d0e00fe7df697be7603ceb8fd83b9087dafbb968bf1d5eea2f1a5d014 not found: ID does not exist" containerID="a8ed240d0e00fe7df697be7603ceb8fd83b9087dafbb968bf1d5eea2f1a5d014" Mar 12 21:29:10.383118 master-0 kubenswrapper[31456]: I0312 21:29:10.382866 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8ed240d0e00fe7df697be7603ceb8fd83b9087dafbb968bf1d5eea2f1a5d014"} err="failed to get container status \"a8ed240d0e00fe7df697be7603ceb8fd83b9087dafbb968bf1d5eea2f1a5d014\": rpc error: code = NotFound desc = could not find container \"a8ed240d0e00fe7df697be7603ceb8fd83b9087dafbb968bf1d5eea2f1a5d014\": container with ID starting with a8ed240d0e00fe7df697be7603ceb8fd83b9087dafbb968bf1d5eea2f1a5d014 not found: ID does not exist" Mar 12 21:29:10.383118 master-0 kubenswrapper[31456]: I0312 21:29:10.382907 31456 scope.go:117] "RemoveContainer" containerID="2b0ed075959f2bafac387d2e944d5294e8aa44cc177d3b6f7ab0637dac7fd021" Mar 12 21:29:10.385715 master-0 kubenswrapper[31456]: I0312 21:29:10.385017 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e5d21ea-b20b-4112-8311-c9fc0cc86034-config-data\") pod \"1e5d21ea-b20b-4112-8311-c9fc0cc86034\" (UID: \"1e5d21ea-b20b-4112-8311-c9fc0cc86034\") " Mar 12 21:29:10.385715 master-0 kubenswrapper[31456]: 
I0312 21:29:10.385429 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e5d21ea-b20b-4112-8311-c9fc0cc86034-combined-ca-bundle\") pod \"1e5d21ea-b20b-4112-8311-c9fc0cc86034\" (UID: \"1e5d21ea-b20b-4112-8311-c9fc0cc86034\") " Mar 12 21:29:10.386067 master-0 kubenswrapper[31456]: E0312 21:29:10.385955 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b0ed075959f2bafac387d2e944d5294e8aa44cc177d3b6f7ab0637dac7fd021\": container with ID starting with 2b0ed075959f2bafac387d2e944d5294e8aa44cc177d3b6f7ab0637dac7fd021 not found: ID does not exist" containerID="2b0ed075959f2bafac387d2e944d5294e8aa44cc177d3b6f7ab0637dac7fd021" Mar 12 21:29:10.386067 master-0 kubenswrapper[31456]: I0312 21:29:10.386007 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b0ed075959f2bafac387d2e944d5294e8aa44cc177d3b6f7ab0637dac7fd021"} err="failed to get container status \"2b0ed075959f2bafac387d2e944d5294e8aa44cc177d3b6f7ab0637dac7fd021\": rpc error: code = NotFound desc = could not find container \"2b0ed075959f2bafac387d2e944d5294e8aa44cc177d3b6f7ab0637dac7fd021\": container with ID starting with 2b0ed075959f2bafac387d2e944d5294e8aa44cc177d3b6f7ab0637dac7fd021 not found: ID does not exist" Mar 12 21:29:10.386848 master-0 kubenswrapper[31456]: I0312 21:29:10.386642 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e5d21ea-b20b-4112-8311-c9fc0cc86034-logs\") pod \"1e5d21ea-b20b-4112-8311-c9fc0cc86034\" (UID: \"1e5d21ea-b20b-4112-8311-c9fc0cc86034\") " Mar 12 21:29:10.387125 master-0 kubenswrapper[31456]: I0312 21:29:10.386801 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59hgx\" (UniqueName: 
\"kubernetes.io/projected/1e5d21ea-b20b-4112-8311-c9fc0cc86034-kube-api-access-59hgx\") pod \"1e5d21ea-b20b-4112-8311-c9fc0cc86034\" (UID: \"1e5d21ea-b20b-4112-8311-c9fc0cc86034\") " Mar 12 21:29:10.390600 master-0 kubenswrapper[31456]: I0312 21:29:10.389976 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e5d21ea-b20b-4112-8311-c9fc0cc86034-logs" (OuterVolumeSpecName: "logs") pod "1e5d21ea-b20b-4112-8311-c9fc0cc86034" (UID: "1e5d21ea-b20b-4112-8311-c9fc0cc86034"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 21:29:10.413397 master-0 kubenswrapper[31456]: I0312 21:29:10.413322 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e5d21ea-b20b-4112-8311-c9fc0cc86034-kube-api-access-59hgx" (OuterVolumeSpecName: "kube-api-access-59hgx") pod "1e5d21ea-b20b-4112-8311-c9fc0cc86034" (UID: "1e5d21ea-b20b-4112-8311-c9fc0cc86034"). InnerVolumeSpecName "kube-api-access-59hgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:29:10.485093 master-0 kubenswrapper[31456]: I0312 21:29:10.485003 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e5d21ea-b20b-4112-8311-c9fc0cc86034-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e5d21ea-b20b-4112-8311-c9fc0cc86034" (UID: "1e5d21ea-b20b-4112-8311-c9fc0cc86034"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:29:10.504863 master-0 kubenswrapper[31456]: I0312 21:29:10.488024 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e5d21ea-b20b-4112-8311-c9fc0cc86034-config-data" (OuterVolumeSpecName: "config-data") pod "1e5d21ea-b20b-4112-8311-c9fc0cc86034" (UID: "1e5d21ea-b20b-4112-8311-c9fc0cc86034"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:29:10.509836 master-0 kubenswrapper[31456]: I0312 21:29:10.508355 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59hgx\" (UniqueName: \"kubernetes.io/projected/1e5d21ea-b20b-4112-8311-c9fc0cc86034-kube-api-access-59hgx\") on node \"master-0\" DevicePath \"\"" Mar 12 21:29:10.509836 master-0 kubenswrapper[31456]: I0312 21:29:10.508410 31456 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e5d21ea-b20b-4112-8311-c9fc0cc86034-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 21:29:10.509836 master-0 kubenswrapper[31456]: I0312 21:29:10.508421 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e5d21ea-b20b-4112-8311-c9fc0cc86034-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:29:10.509836 master-0 kubenswrapper[31456]: I0312 21:29:10.508431 31456 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e5d21ea-b20b-4112-8311-c9fc0cc86034-logs\") on node \"master-0\" DevicePath \"\"" Mar 12 21:29:10.645839 master-0 kubenswrapper[31456]: I0312 21:29:10.645478 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 12 21:29:10.664977 master-0 kubenswrapper[31456]: I0312 21:29:10.664267 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 12 21:29:10.691838 master-0 kubenswrapper[31456]: I0312 21:29:10.691556 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 12 21:29:10.693746 master-0 kubenswrapper[31456]: E0312 21:29:10.692142 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e5d21ea-b20b-4112-8311-c9fc0cc86034" containerName="nova-api-log" Mar 12 21:29:10.693746 master-0 kubenswrapper[31456]: I0312 21:29:10.692163 31456 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="1e5d21ea-b20b-4112-8311-c9fc0cc86034" containerName="nova-api-log" Mar 12 21:29:10.693746 master-0 kubenswrapper[31456]: E0312 21:29:10.692214 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e5d21ea-b20b-4112-8311-c9fc0cc86034" containerName="nova-api-api" Mar 12 21:29:10.693746 master-0 kubenswrapper[31456]: I0312 21:29:10.692222 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e5d21ea-b20b-4112-8311-c9fc0cc86034" containerName="nova-api-api" Mar 12 21:29:10.693746 master-0 kubenswrapper[31456]: I0312 21:29:10.692446 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e5d21ea-b20b-4112-8311-c9fc0cc86034" containerName="nova-api-log" Mar 12 21:29:10.693746 master-0 kubenswrapper[31456]: I0312 21:29:10.692468 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e5d21ea-b20b-4112-8311-c9fc0cc86034" containerName="nova-api-api" Mar 12 21:29:10.697829 master-0 kubenswrapper[31456]: I0312 21:29:10.694639 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 21:29:10.700856 master-0 kubenswrapper[31456]: I0312 21:29:10.699127 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 12 21:29:10.700856 master-0 kubenswrapper[31456]: I0312 21:29:10.699386 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 12 21:29:10.700856 master-0 kubenswrapper[31456]: I0312 21:29:10.699868 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 12 21:29:10.758382 master-0 kubenswrapper[31456]: I0312 21:29:10.758305 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 21:29:10.873824 master-0 kubenswrapper[31456]: I0312 21:29:10.850162 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad5208d6-b244-47bf-80a9-bcae81587171-config-data\") pod \"nova-api-0\" (UID: \"ad5208d6-b244-47bf-80a9-bcae81587171\") " pod="openstack/nova-api-0" Mar 12 21:29:10.873824 master-0 kubenswrapper[31456]: I0312 21:29:10.850289 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad5208d6-b244-47bf-80a9-bcae81587171-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ad5208d6-b244-47bf-80a9-bcae81587171\") " pod="openstack/nova-api-0" Mar 12 21:29:10.873824 master-0 kubenswrapper[31456]: I0312 21:29:10.850399 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch52k\" (UniqueName: \"kubernetes.io/projected/ad5208d6-b244-47bf-80a9-bcae81587171-kube-api-access-ch52k\") pod \"nova-api-0\" (UID: \"ad5208d6-b244-47bf-80a9-bcae81587171\") " pod="openstack/nova-api-0" Mar 12 21:29:10.873824 master-0 kubenswrapper[31456]: I0312 21:29:10.850512 
31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad5208d6-b244-47bf-80a9-bcae81587171-logs\") pod \"nova-api-0\" (UID: \"ad5208d6-b244-47bf-80a9-bcae81587171\") " pod="openstack/nova-api-0" Mar 12 21:29:10.873824 master-0 kubenswrapper[31456]: I0312 21:29:10.850563 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad5208d6-b244-47bf-80a9-bcae81587171-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ad5208d6-b244-47bf-80a9-bcae81587171\") " pod="openstack/nova-api-0" Mar 12 21:29:10.873824 master-0 kubenswrapper[31456]: I0312 21:29:10.850904 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad5208d6-b244-47bf-80a9-bcae81587171-public-tls-certs\") pod \"nova-api-0\" (UID: \"ad5208d6-b244-47bf-80a9-bcae81587171\") " pod="openstack/nova-api-0" Mar 12 21:29:10.956252 master-0 kubenswrapper[31456]: I0312 21:29:10.956169 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad5208d6-b244-47bf-80a9-bcae81587171-logs\") pod \"nova-api-0\" (UID: \"ad5208d6-b244-47bf-80a9-bcae81587171\") " pod="openstack/nova-api-0" Mar 12 21:29:10.956486 master-0 kubenswrapper[31456]: I0312 21:29:10.956337 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad5208d6-b244-47bf-80a9-bcae81587171-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ad5208d6-b244-47bf-80a9-bcae81587171\") " pod="openstack/nova-api-0" Mar 12 21:29:10.956530 master-0 kubenswrapper[31456]: I0312 21:29:10.956508 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ad5208d6-b244-47bf-80a9-bcae81587171-public-tls-certs\") pod \"nova-api-0\" (UID: \"ad5208d6-b244-47bf-80a9-bcae81587171\") " pod="openstack/nova-api-0" Mar 12 21:29:10.956798 master-0 kubenswrapper[31456]: I0312 21:29:10.956772 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad5208d6-b244-47bf-80a9-bcae81587171-config-data\") pod \"nova-api-0\" (UID: \"ad5208d6-b244-47bf-80a9-bcae81587171\") " pod="openstack/nova-api-0" Mar 12 21:29:10.956897 master-0 kubenswrapper[31456]: I0312 21:29:10.956875 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad5208d6-b244-47bf-80a9-bcae81587171-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ad5208d6-b244-47bf-80a9-bcae81587171\") " pod="openstack/nova-api-0" Mar 12 21:29:10.957174 master-0 kubenswrapper[31456]: I0312 21:29:10.957082 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad5208d6-b244-47bf-80a9-bcae81587171-logs\") pod \"nova-api-0\" (UID: \"ad5208d6-b244-47bf-80a9-bcae81587171\") " pod="openstack/nova-api-0" Mar 12 21:29:10.957174 master-0 kubenswrapper[31456]: I0312 21:29:10.957133 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch52k\" (UniqueName: \"kubernetes.io/projected/ad5208d6-b244-47bf-80a9-bcae81587171-kube-api-access-ch52k\") pod \"nova-api-0\" (UID: \"ad5208d6-b244-47bf-80a9-bcae81587171\") " pod="openstack/nova-api-0" Mar 12 21:29:10.964293 master-0 kubenswrapper[31456]: I0312 21:29:10.964236 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad5208d6-b244-47bf-80a9-bcae81587171-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ad5208d6-b244-47bf-80a9-bcae81587171\") " pod="openstack/nova-api-0" Mar 12 
21:29:10.964980 master-0 kubenswrapper[31456]: I0312 21:29:10.964943 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad5208d6-b244-47bf-80a9-bcae81587171-public-tls-certs\") pod \"nova-api-0\" (UID: \"ad5208d6-b244-47bf-80a9-bcae81587171\") " pod="openstack/nova-api-0" Mar 12 21:29:10.965679 master-0 kubenswrapper[31456]: I0312 21:29:10.965643 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad5208d6-b244-47bf-80a9-bcae81587171-config-data\") pod \"nova-api-0\" (UID: \"ad5208d6-b244-47bf-80a9-bcae81587171\") " pod="openstack/nova-api-0" Mar 12 21:29:10.966652 master-0 kubenswrapper[31456]: I0312 21:29:10.966483 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad5208d6-b244-47bf-80a9-bcae81587171-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ad5208d6-b244-47bf-80a9-bcae81587171\") " pod="openstack/nova-api-0" Mar 12 21:29:10.975366 master-0 kubenswrapper[31456]: I0312 21:29:10.975321 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch52k\" (UniqueName: \"kubernetes.io/projected/ad5208d6-b244-47bf-80a9-bcae81587171-kube-api-access-ch52k\") pod \"nova-api-0\" (UID: \"ad5208d6-b244-47bf-80a9-bcae81587171\") " pod="openstack/nova-api-0" Mar 12 21:29:11.080153 master-0 kubenswrapper[31456]: I0312 21:29:11.080090 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 21:29:11.209313 master-0 kubenswrapper[31456]: I0312 21:29:11.209259 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e5d21ea-b20b-4112-8311-c9fc0cc86034" path="/var/lib/kubelet/pods/1e5d21ea-b20b-4112-8311-c9fc0cc86034/volumes" Mar 12 21:29:11.655821 master-0 kubenswrapper[31456]: I0312 21:29:11.655379 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 21:29:12.377587 master-0 kubenswrapper[31456]: I0312 21:29:12.377523 31456 generic.go:334] "Generic (PLEG): container finished" podID="e015d284-5458-4e15-aa69-5a3dcc87352c" containerID="9874ffdc3019436ca7eec6399f744ac626c08f5049181744720f39d119ff32cf" exitCode=0 Mar 12 21:29:12.377906 master-0 kubenswrapper[31456]: I0312 21:29:12.377648 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-jmhq9" event={"ID":"e015d284-5458-4e15-aa69-5a3dcc87352c","Type":"ContainerDied","Data":"9874ffdc3019436ca7eec6399f744ac626c08f5049181744720f39d119ff32cf"} Mar 12 21:29:12.381499 master-0 kubenswrapper[31456]: I0312 21:29:12.381453 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ad5208d6-b244-47bf-80a9-bcae81587171","Type":"ContainerStarted","Data":"8d507272ec79ea2b9d9a7d8853419653b2e5b0a128186174b8e64076d276f308"} Mar 12 21:29:12.381592 master-0 kubenswrapper[31456]: I0312 21:29:12.381510 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ad5208d6-b244-47bf-80a9-bcae81587171","Type":"ContainerStarted","Data":"0860176bced341fbccae873fbf0133903d89730f96f8beb580d4b92e19380b6f"} Mar 12 21:29:12.381592 master-0 kubenswrapper[31456]: I0312 21:29:12.381534 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"ad5208d6-b244-47bf-80a9-bcae81587171","Type":"ContainerStarted","Data":"bcfb156a3d3830a6d4d7e0d210daa6e8710dcf72cc98cd0404dd928f39b00dbc"} Mar 12 21:29:12.443921 master-0 kubenswrapper[31456]: I0312 21:29:12.443800 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.4437732260000002 podStartE2EDuration="2.443773226s" podCreationTimestamp="2026-03-12 21:29:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:29:12.441206664 +0000 UTC m=+1213.515811992" watchObservedRunningTime="2026-03-12 21:29:12.443773226 +0000 UTC m=+1213.518378554" Mar 12 21:29:13.710011 master-0 kubenswrapper[31456]: I0312 21:29:13.709927 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bb4c5b697-hrp87" Mar 12 21:29:13.849112 master-0 kubenswrapper[31456]: I0312 21:29:13.847189 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76bffd747-5b96l"] Mar 12 21:29:13.849112 master-0 kubenswrapper[31456]: I0312 21:29:13.847450 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76bffd747-5b96l" podUID="d93e5d01-b4a5-4612-bded-2615337961dc" containerName="dnsmasq-dns" containerID="cri-o://a302426da4466b11828c935809f2ff48ba6dac0f5677b8073b1e9beb93e81531" gracePeriod=10 Mar 12 21:29:14.062660 master-0 kubenswrapper[31456]: I0312 21:29:14.062624 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-host-discover-jmhq9" Mar 12 21:29:14.145169 master-0 kubenswrapper[31456]: I0312 21:29:14.145039 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e015d284-5458-4e15-aa69-5a3dcc87352c-config-data\") pod \"e015d284-5458-4e15-aa69-5a3dcc87352c\" (UID: \"e015d284-5458-4e15-aa69-5a3dcc87352c\") " Mar 12 21:29:14.145408 master-0 kubenswrapper[31456]: I0312 21:29:14.145229 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e015d284-5458-4e15-aa69-5a3dcc87352c-scripts\") pod \"e015d284-5458-4e15-aa69-5a3dcc87352c\" (UID: \"e015d284-5458-4e15-aa69-5a3dcc87352c\") " Mar 12 21:29:14.145408 master-0 kubenswrapper[31456]: I0312 21:29:14.145260 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xj8jp\" (UniqueName: \"kubernetes.io/projected/e015d284-5458-4e15-aa69-5a3dcc87352c-kube-api-access-xj8jp\") pod \"e015d284-5458-4e15-aa69-5a3dcc87352c\" (UID: \"e015d284-5458-4e15-aa69-5a3dcc87352c\") " Mar 12 21:29:14.145408 master-0 kubenswrapper[31456]: I0312 21:29:14.145309 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e015d284-5458-4e15-aa69-5a3dcc87352c-combined-ca-bundle\") pod \"e015d284-5458-4e15-aa69-5a3dcc87352c\" (UID: \"e015d284-5458-4e15-aa69-5a3dcc87352c\") " Mar 12 21:29:14.150716 master-0 kubenswrapper[31456]: I0312 21:29:14.150658 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e015d284-5458-4e15-aa69-5a3dcc87352c-kube-api-access-xj8jp" (OuterVolumeSpecName: "kube-api-access-xj8jp") pod "e015d284-5458-4e15-aa69-5a3dcc87352c" (UID: "e015d284-5458-4e15-aa69-5a3dcc87352c"). InnerVolumeSpecName "kube-api-access-xj8jp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:29:14.156465 master-0 kubenswrapper[31456]: I0312 21:29:14.156338 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e015d284-5458-4e15-aa69-5a3dcc87352c-scripts" (OuterVolumeSpecName: "scripts") pod "e015d284-5458-4e15-aa69-5a3dcc87352c" (UID: "e015d284-5458-4e15-aa69-5a3dcc87352c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:29:14.178533 master-0 kubenswrapper[31456]: I0312 21:29:14.175584 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e015d284-5458-4e15-aa69-5a3dcc87352c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e015d284-5458-4e15-aa69-5a3dcc87352c" (UID: "e015d284-5458-4e15-aa69-5a3dcc87352c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:29:14.196774 master-0 kubenswrapper[31456]: I0312 21:29:14.196708 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e015d284-5458-4e15-aa69-5a3dcc87352c-config-data" (OuterVolumeSpecName: "config-data") pod "e015d284-5458-4e15-aa69-5a3dcc87352c" (UID: "e015d284-5458-4e15-aa69-5a3dcc87352c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:29:14.252089 master-0 kubenswrapper[31456]: I0312 21:29:14.251892 31456 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e015d284-5458-4e15-aa69-5a3dcc87352c-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 21:29:14.252089 master-0 kubenswrapper[31456]: I0312 21:29:14.251932 31456 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e015d284-5458-4e15-aa69-5a3dcc87352c-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 21:29:14.252089 master-0 kubenswrapper[31456]: I0312 21:29:14.251942 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xj8jp\" (UniqueName: \"kubernetes.io/projected/e015d284-5458-4e15-aa69-5a3dcc87352c-kube-api-access-xj8jp\") on node \"master-0\" DevicePath \"\"" Mar 12 21:29:14.252089 master-0 kubenswrapper[31456]: I0312 21:29:14.251952 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e015d284-5458-4e15-aa69-5a3dcc87352c-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:29:14.369962 master-0 kubenswrapper[31456]: I0312 21:29:14.369922 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76bffd747-5b96l" Mar 12 21:29:14.433119 master-0 kubenswrapper[31456]: I0312 21:29:14.433046 31456 generic.go:334] "Generic (PLEG): container finished" podID="fdcffde0-88dd-46b8-ab9d-224e83dd4a08" containerID="904cbbf00798101e94702bfdae155878aa1d030bbb37b4671ef94695e647aa14" exitCode=0 Mar 12 21:29:14.433119 master-0 kubenswrapper[31456]: I0312 21:29:14.433100 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5sjm6" event={"ID":"fdcffde0-88dd-46b8-ab9d-224e83dd4a08","Type":"ContainerDied","Data":"904cbbf00798101e94702bfdae155878aa1d030bbb37b4671ef94695e647aa14"} Mar 12 21:29:14.444030 master-0 kubenswrapper[31456]: I0312 21:29:14.443970 31456 generic.go:334] "Generic (PLEG): container finished" podID="d93e5d01-b4a5-4612-bded-2615337961dc" containerID="a302426da4466b11828c935809f2ff48ba6dac0f5677b8073b1e9beb93e81531" exitCode=0 Mar 12 21:29:14.444170 master-0 kubenswrapper[31456]: I0312 21:29:14.444094 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76bffd747-5b96l" event={"ID":"d93e5d01-b4a5-4612-bded-2615337961dc","Type":"ContainerDied","Data":"a302426da4466b11828c935809f2ff48ba6dac0f5677b8073b1e9beb93e81531"} Mar 12 21:29:14.444170 master-0 kubenswrapper[31456]: I0312 21:29:14.444125 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76bffd747-5b96l" event={"ID":"d93e5d01-b4a5-4612-bded-2615337961dc","Type":"ContainerDied","Data":"af18daabc6ba0586c4572d464a511c5cedc36f7aff7b36816a6b6af6542604e2"} Mar 12 21:29:14.444170 master-0 kubenswrapper[31456]: I0312 21:29:14.444143 31456 scope.go:117] "RemoveContainer" containerID="a302426da4466b11828c935809f2ff48ba6dac0f5677b8073b1e9beb93e81531" Mar 12 21:29:14.444373 master-0 kubenswrapper[31456]: I0312 21:29:14.444340 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76bffd747-5b96l" Mar 12 21:29:14.449960 master-0 kubenswrapper[31456]: I0312 21:29:14.449863 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-jmhq9" event={"ID":"e015d284-5458-4e15-aa69-5a3dcc87352c","Type":"ContainerDied","Data":"ad56d245b32cd495a227aa3583a4f9e799b02aa25d0dbbf19caf9b17c64a0a96"} Mar 12 21:29:14.450032 master-0 kubenswrapper[31456]: I0312 21:29:14.449968 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad56d245b32cd495a227aa3583a4f9e799b02aa25d0dbbf19caf9b17c64a0a96" Mar 12 21:29:14.450032 master-0 kubenswrapper[31456]: I0312 21:29:14.449934 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-jmhq9" Mar 12 21:29:14.457410 master-0 kubenswrapper[31456]: I0312 21:29:14.457337 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d93e5d01-b4a5-4612-bded-2615337961dc-ovsdbserver-nb\") pod \"d93e5d01-b4a5-4612-bded-2615337961dc\" (UID: \"d93e5d01-b4a5-4612-bded-2615337961dc\") " Mar 12 21:29:14.457610 master-0 kubenswrapper[31456]: I0312 21:29:14.457541 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d93e5d01-b4a5-4612-bded-2615337961dc-dns-svc\") pod \"d93e5d01-b4a5-4612-bded-2615337961dc\" (UID: \"d93e5d01-b4a5-4612-bded-2615337961dc\") " Mar 12 21:29:14.457998 master-0 kubenswrapper[31456]: I0312 21:29:14.457887 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d93e5d01-b4a5-4612-bded-2615337961dc-dns-swift-storage-0\") pod \"d93e5d01-b4a5-4612-bded-2615337961dc\" (UID: \"d93e5d01-b4a5-4612-bded-2615337961dc\") " Mar 12 21:29:14.457998 master-0 kubenswrapper[31456]: I0312 
21:29:14.457931 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d93e5d01-b4a5-4612-bded-2615337961dc-config\") pod \"d93e5d01-b4a5-4612-bded-2615337961dc\" (UID: \"d93e5d01-b4a5-4612-bded-2615337961dc\") " Mar 12 21:29:14.458090 master-0 kubenswrapper[31456]: I0312 21:29:14.458033 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9ckr\" (UniqueName: \"kubernetes.io/projected/d93e5d01-b4a5-4612-bded-2615337961dc-kube-api-access-m9ckr\") pod \"d93e5d01-b4a5-4612-bded-2615337961dc\" (UID: \"d93e5d01-b4a5-4612-bded-2615337961dc\") " Mar 12 21:29:14.458137 master-0 kubenswrapper[31456]: I0312 21:29:14.458113 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d93e5d01-b4a5-4612-bded-2615337961dc-ovsdbserver-sb\") pod \"d93e5d01-b4a5-4612-bded-2615337961dc\" (UID: \"d93e5d01-b4a5-4612-bded-2615337961dc\") " Mar 12 21:29:14.517862 master-0 kubenswrapper[31456]: I0312 21:29:14.517363 31456 scope.go:117] "RemoveContainer" containerID="fededd9c7ecda07ceb92e903441a43a7ae4210353f6cecaaa38ac3bbe41396dd" Mar 12 21:29:14.521565 master-0 kubenswrapper[31456]: I0312 21:29:14.521496 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d93e5d01-b4a5-4612-bded-2615337961dc-kube-api-access-m9ckr" (OuterVolumeSpecName: "kube-api-access-m9ckr") pod "d93e5d01-b4a5-4612-bded-2615337961dc" (UID: "d93e5d01-b4a5-4612-bded-2615337961dc"). InnerVolumeSpecName "kube-api-access-m9ckr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:29:14.560848 master-0 kubenswrapper[31456]: I0312 21:29:14.560588 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9ckr\" (UniqueName: \"kubernetes.io/projected/d93e5d01-b4a5-4612-bded-2615337961dc-kube-api-access-m9ckr\") on node \"master-0\" DevicePath \"\"" Mar 12 21:29:14.562984 master-0 kubenswrapper[31456]: I0312 21:29:14.562936 31456 scope.go:117] "RemoveContainer" containerID="a302426da4466b11828c935809f2ff48ba6dac0f5677b8073b1e9beb93e81531" Mar 12 21:29:14.563351 master-0 kubenswrapper[31456]: E0312 21:29:14.563313 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a302426da4466b11828c935809f2ff48ba6dac0f5677b8073b1e9beb93e81531\": container with ID starting with a302426da4466b11828c935809f2ff48ba6dac0f5677b8073b1e9beb93e81531 not found: ID does not exist" containerID="a302426da4466b11828c935809f2ff48ba6dac0f5677b8073b1e9beb93e81531" Mar 12 21:29:14.563412 master-0 kubenswrapper[31456]: I0312 21:29:14.563351 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a302426da4466b11828c935809f2ff48ba6dac0f5677b8073b1e9beb93e81531"} err="failed to get container status \"a302426da4466b11828c935809f2ff48ba6dac0f5677b8073b1e9beb93e81531\": rpc error: code = NotFound desc = could not find container \"a302426da4466b11828c935809f2ff48ba6dac0f5677b8073b1e9beb93e81531\": container with ID starting with a302426da4466b11828c935809f2ff48ba6dac0f5677b8073b1e9beb93e81531 not found: ID does not exist" Mar 12 21:29:14.563412 master-0 kubenswrapper[31456]: I0312 21:29:14.563374 31456 scope.go:117] "RemoveContainer" containerID="fededd9c7ecda07ceb92e903441a43a7ae4210353f6cecaaa38ac3bbe41396dd" Mar 12 21:29:14.563687 master-0 kubenswrapper[31456]: E0312 21:29:14.563655 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"fededd9c7ecda07ceb92e903441a43a7ae4210353f6cecaaa38ac3bbe41396dd\": container with ID starting with fededd9c7ecda07ceb92e903441a43a7ae4210353f6cecaaa38ac3bbe41396dd not found: ID does not exist" containerID="fededd9c7ecda07ceb92e903441a43a7ae4210353f6cecaaa38ac3bbe41396dd" Mar 12 21:29:14.563740 master-0 kubenswrapper[31456]: I0312 21:29:14.563683 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fededd9c7ecda07ceb92e903441a43a7ae4210353f6cecaaa38ac3bbe41396dd"} err="failed to get container status \"fededd9c7ecda07ceb92e903441a43a7ae4210353f6cecaaa38ac3bbe41396dd\": rpc error: code = NotFound desc = could not find container \"fededd9c7ecda07ceb92e903441a43a7ae4210353f6cecaaa38ac3bbe41396dd\": container with ID starting with fededd9c7ecda07ceb92e903441a43a7ae4210353f6cecaaa38ac3bbe41396dd not found: ID does not exist" Mar 12 21:29:14.585946 master-0 kubenswrapper[31456]: I0312 21:29:14.580507 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d93e5d01-b4a5-4612-bded-2615337961dc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d93e5d01-b4a5-4612-bded-2615337961dc" (UID: "d93e5d01-b4a5-4612-bded-2615337961dc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:29:14.585946 master-0 kubenswrapper[31456]: I0312 21:29:14.580731 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d93e5d01-b4a5-4612-bded-2615337961dc-config" (OuterVolumeSpecName: "config") pod "d93e5d01-b4a5-4612-bded-2615337961dc" (UID: "d93e5d01-b4a5-4612-bded-2615337961dc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:29:14.588631 master-0 kubenswrapper[31456]: I0312 21:29:14.588558 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d93e5d01-b4a5-4612-bded-2615337961dc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d93e5d01-b4a5-4612-bded-2615337961dc" (UID: "d93e5d01-b4a5-4612-bded-2615337961dc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:29:14.591606 master-0 kubenswrapper[31456]: I0312 21:29:14.591319 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d93e5d01-b4a5-4612-bded-2615337961dc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d93e5d01-b4a5-4612-bded-2615337961dc" (UID: "d93e5d01-b4a5-4612-bded-2615337961dc"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:29:14.601161 master-0 kubenswrapper[31456]: I0312 21:29:14.601095 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d93e5d01-b4a5-4612-bded-2615337961dc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d93e5d01-b4a5-4612-bded-2615337961dc" (UID: "d93e5d01-b4a5-4612-bded-2615337961dc"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:29:14.662374 master-0 kubenswrapper[31456]: I0312 21:29:14.662266 31456 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d93e5d01-b4a5-4612-bded-2615337961dc-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 12 21:29:14.662374 master-0 kubenswrapper[31456]: I0312 21:29:14.662316 31456 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d93e5d01-b4a5-4612-bded-2615337961dc-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 12 21:29:14.662374 master-0 kubenswrapper[31456]: I0312 21:29:14.662330 31456 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d93e5d01-b4a5-4612-bded-2615337961dc-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 12 21:29:14.662374 master-0 kubenswrapper[31456]: I0312 21:29:14.662340 31456 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d93e5d01-b4a5-4612-bded-2615337961dc-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 12 21:29:14.662374 master-0 kubenswrapper[31456]: I0312 21:29:14.662351 31456 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d93e5d01-b4a5-4612-bded-2615337961dc-config\") on node \"master-0\" DevicePath \"\"" Mar 12 21:29:14.838948 master-0 kubenswrapper[31456]: I0312 21:29:14.838653 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76bffd747-5b96l"] Mar 12 21:29:14.851122 master-0 kubenswrapper[31456]: I0312 21:29:14.851072 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76bffd747-5b96l"] Mar 12 21:29:15.181728 master-0 kubenswrapper[31456]: I0312 21:29:15.181676 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="d93e5d01-b4a5-4612-bded-2615337961dc" path="/var/lib/kubelet/pods/d93e5d01-b4a5-4612-bded-2615337961dc/volumes" Mar 12 21:29:15.952937 master-0 kubenswrapper[31456]: I0312 21:29:15.952866 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5sjm6" Mar 12 21:29:16.003563 master-0 kubenswrapper[31456]: I0312 21:29:15.996141 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdcffde0-88dd-46b8-ab9d-224e83dd4a08-config-data\") pod \"fdcffde0-88dd-46b8-ab9d-224e83dd4a08\" (UID: \"fdcffde0-88dd-46b8-ab9d-224e83dd4a08\") " Mar 12 21:29:16.003563 master-0 kubenswrapper[31456]: I0312 21:29:15.996212 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdcffde0-88dd-46b8-ab9d-224e83dd4a08-combined-ca-bundle\") pod \"fdcffde0-88dd-46b8-ab9d-224e83dd4a08\" (UID: \"fdcffde0-88dd-46b8-ab9d-224e83dd4a08\") " Mar 12 21:29:16.003563 master-0 kubenswrapper[31456]: I0312 21:29:15.996302 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwtlg\" (UniqueName: \"kubernetes.io/projected/fdcffde0-88dd-46b8-ab9d-224e83dd4a08-kube-api-access-mwtlg\") pod \"fdcffde0-88dd-46b8-ab9d-224e83dd4a08\" (UID: \"fdcffde0-88dd-46b8-ab9d-224e83dd4a08\") " Mar 12 21:29:16.003563 master-0 kubenswrapper[31456]: I0312 21:29:15.996461 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdcffde0-88dd-46b8-ab9d-224e83dd4a08-scripts\") pod \"fdcffde0-88dd-46b8-ab9d-224e83dd4a08\" (UID: \"fdcffde0-88dd-46b8-ab9d-224e83dd4a08\") " Mar 12 21:29:16.003563 master-0 kubenswrapper[31456]: I0312 21:29:16.000301 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/fdcffde0-88dd-46b8-ab9d-224e83dd4a08-scripts" (OuterVolumeSpecName: "scripts") pod "fdcffde0-88dd-46b8-ab9d-224e83dd4a08" (UID: "fdcffde0-88dd-46b8-ab9d-224e83dd4a08"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:29:16.020099 master-0 kubenswrapper[31456]: I0312 21:29:16.019977 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdcffde0-88dd-46b8-ab9d-224e83dd4a08-kube-api-access-mwtlg" (OuterVolumeSpecName: "kube-api-access-mwtlg") pod "fdcffde0-88dd-46b8-ab9d-224e83dd4a08" (UID: "fdcffde0-88dd-46b8-ab9d-224e83dd4a08"). InnerVolumeSpecName "kube-api-access-mwtlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:29:16.040987 master-0 kubenswrapper[31456]: I0312 21:29:16.037078 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdcffde0-88dd-46b8-ab9d-224e83dd4a08-config-data" (OuterVolumeSpecName: "config-data") pod "fdcffde0-88dd-46b8-ab9d-224e83dd4a08" (UID: "fdcffde0-88dd-46b8-ab9d-224e83dd4a08"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:29:16.051728 master-0 kubenswrapper[31456]: I0312 21:29:16.050133 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdcffde0-88dd-46b8-ab9d-224e83dd4a08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fdcffde0-88dd-46b8-ab9d-224e83dd4a08" (UID: "fdcffde0-88dd-46b8-ab9d-224e83dd4a08"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:29:16.098897 master-0 kubenswrapper[31456]: I0312 21:29:16.098829 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwtlg\" (UniqueName: \"kubernetes.io/projected/fdcffde0-88dd-46b8-ab9d-224e83dd4a08-kube-api-access-mwtlg\") on node \"master-0\" DevicePath \"\"" Mar 12 21:29:16.098897 master-0 kubenswrapper[31456]: I0312 21:29:16.098881 31456 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdcffde0-88dd-46b8-ab9d-224e83dd4a08-scripts\") on node \"master-0\" DevicePath \"\"" Mar 12 21:29:16.098897 master-0 kubenswrapper[31456]: I0312 21:29:16.098891 31456 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdcffde0-88dd-46b8-ab9d-224e83dd4a08-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 21:29:16.098897 master-0 kubenswrapper[31456]: I0312 21:29:16.098900 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdcffde0-88dd-46b8-ab9d-224e83dd4a08-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:29:16.481622 master-0 kubenswrapper[31456]: I0312 21:29:16.481542 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5sjm6" event={"ID":"fdcffde0-88dd-46b8-ab9d-224e83dd4a08","Type":"ContainerDied","Data":"478f6d9b58b4d39b1d7fa49cf104a8bbea0036626824e57b87d1dbaf1bd81e04"} Mar 12 21:29:16.481622 master-0 kubenswrapper[31456]: I0312 21:29:16.481620 31456 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="478f6d9b58b4d39b1d7fa49cf104a8bbea0036626824e57b87d1dbaf1bd81e04" Mar 12 21:29:16.481951 master-0 kubenswrapper[31456]: I0312 21:29:16.481634 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5sjm6" Mar 12 21:29:16.715771 master-0 kubenswrapper[31456]: I0312 21:29:16.715674 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 21:29:16.716262 master-0 kubenswrapper[31456]: I0312 21:29:16.716223 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="76f91a0f-2cf3-4800-bd3a-5d2de1d33b26" containerName="nova-scheduler-scheduler" containerID="cri-o://3bb29a9cc80ef4a248b699d24ed5eea0f6b4679e718ea2f4f1caa9309dcca1c0" gracePeriod=30 Mar 12 21:29:16.748101 master-0 kubenswrapper[31456]: I0312 21:29:16.747644 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 12 21:29:16.748101 master-0 kubenswrapper[31456]: I0312 21:29:16.747905 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ad5208d6-b244-47bf-80a9-bcae81587171" containerName="nova-api-log" containerID="cri-o://0860176bced341fbccae873fbf0133903d89730f96f8beb580d4b92e19380b6f" gracePeriod=30 Mar 12 21:29:16.748101 master-0 kubenswrapper[31456]: I0312 21:29:16.748048 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ad5208d6-b244-47bf-80a9-bcae81587171" containerName="nova-api-api" containerID="cri-o://8d507272ec79ea2b9d9a7d8853419653b2e5b0a128186174b8e64076d276f308" gracePeriod=30 Mar 12 21:29:16.778831 master-0 kubenswrapper[31456]: I0312 21:29:16.776684 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 21:29:16.778831 master-0 kubenswrapper[31456]: I0312 21:29:16.776954 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d5d71af3-d4c9-4246-b9c2-276fe8433018" containerName="nova-metadata-log" containerID="cri-o://d37165bf6ef79ad5497be92e3b4ca2cea1c31caac560ee4b9b9e4870c8894797" 
gracePeriod=30 Mar 12 21:29:16.778831 master-0 kubenswrapper[31456]: I0312 21:29:16.777603 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d5d71af3-d4c9-4246-b9c2-276fe8433018" containerName="nova-metadata-metadata" containerID="cri-o://4d088f32c8afa009a0e612016e17aa0a2ece7cd0f63535a281b03bedea18b47a" gracePeriod=30 Mar 12 21:29:17.476009 master-0 kubenswrapper[31456]: I0312 21:29:17.475933 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 12 21:29:17.505973 master-0 kubenswrapper[31456]: I0312 21:29:17.505900 31456 generic.go:334] "Generic (PLEG): container finished" podID="ad5208d6-b244-47bf-80a9-bcae81587171" containerID="8d507272ec79ea2b9d9a7d8853419653b2e5b0a128186174b8e64076d276f308" exitCode=0 Mar 12 21:29:17.505973 master-0 kubenswrapper[31456]: I0312 21:29:17.505956 31456 generic.go:334] "Generic (PLEG): container finished" podID="ad5208d6-b244-47bf-80a9-bcae81587171" containerID="0860176bced341fbccae873fbf0133903d89730f96f8beb580d4b92e19380b6f" exitCode=143 Mar 12 21:29:17.506248 master-0 kubenswrapper[31456]: I0312 21:29:17.506045 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ad5208d6-b244-47bf-80a9-bcae81587171","Type":"ContainerDied","Data":"8d507272ec79ea2b9d9a7d8853419653b2e5b0a128186174b8e64076d276f308"} Mar 12 21:29:17.506248 master-0 kubenswrapper[31456]: I0312 21:29:17.506080 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ad5208d6-b244-47bf-80a9-bcae81587171","Type":"ContainerDied","Data":"0860176bced341fbccae873fbf0133903d89730f96f8beb580d4b92e19380b6f"} Mar 12 21:29:17.506248 master-0 kubenswrapper[31456]: I0312 21:29:17.506090 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"ad5208d6-b244-47bf-80a9-bcae81587171","Type":"ContainerDied","Data":"bcfb156a3d3830a6d4d7e0d210daa6e8710dcf72cc98cd0404dd928f39b00dbc"} Mar 12 21:29:17.506248 master-0 kubenswrapper[31456]: I0312 21:29:17.506107 31456 scope.go:117] "RemoveContainer" containerID="8d507272ec79ea2b9d9a7d8853419653b2e5b0a128186174b8e64076d276f308" Mar 12 21:29:17.506424 master-0 kubenswrapper[31456]: I0312 21:29:17.506257 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 12 21:29:17.528222 master-0 kubenswrapper[31456]: I0312 21:29:17.517652 31456 generic.go:334] "Generic (PLEG): container finished" podID="d5d71af3-d4c9-4246-b9c2-276fe8433018" containerID="d37165bf6ef79ad5497be92e3b4ca2cea1c31caac560ee4b9b9e4870c8894797" exitCode=143 Mar 12 21:29:17.528222 master-0 kubenswrapper[31456]: I0312 21:29:17.517707 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5d71af3-d4c9-4246-b9c2-276fe8433018","Type":"ContainerDied","Data":"d37165bf6ef79ad5497be92e3b4ca2cea1c31caac560ee4b9b9e4870c8894797"} Mar 12 21:29:17.538118 master-0 kubenswrapper[31456]: I0312 21:29:17.538058 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad5208d6-b244-47bf-80a9-bcae81587171-config-data\") pod \"ad5208d6-b244-47bf-80a9-bcae81587171\" (UID: \"ad5208d6-b244-47bf-80a9-bcae81587171\") " Mar 12 21:29:17.538118 master-0 kubenswrapper[31456]: I0312 21:29:17.538111 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad5208d6-b244-47bf-80a9-bcae81587171-internal-tls-certs\") pod \"ad5208d6-b244-47bf-80a9-bcae81587171\" (UID: \"ad5208d6-b244-47bf-80a9-bcae81587171\") " Mar 12 21:29:17.538611 master-0 kubenswrapper[31456]: I0312 21:29:17.538259 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad5208d6-b244-47bf-80a9-bcae81587171-public-tls-certs\") pod \"ad5208d6-b244-47bf-80a9-bcae81587171\" (UID: \"ad5208d6-b244-47bf-80a9-bcae81587171\") " Mar 12 21:29:17.538611 master-0 kubenswrapper[31456]: I0312 21:29:17.538381 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ch52k\" (UniqueName: \"kubernetes.io/projected/ad5208d6-b244-47bf-80a9-bcae81587171-kube-api-access-ch52k\") pod \"ad5208d6-b244-47bf-80a9-bcae81587171\" (UID: \"ad5208d6-b244-47bf-80a9-bcae81587171\") " Mar 12 21:29:17.538611 master-0 kubenswrapper[31456]: I0312 21:29:17.538508 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad5208d6-b244-47bf-80a9-bcae81587171-logs\") pod \"ad5208d6-b244-47bf-80a9-bcae81587171\" (UID: \"ad5208d6-b244-47bf-80a9-bcae81587171\") " Mar 12 21:29:17.538611 master-0 kubenswrapper[31456]: I0312 21:29:17.538559 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad5208d6-b244-47bf-80a9-bcae81587171-combined-ca-bundle\") pod \"ad5208d6-b244-47bf-80a9-bcae81587171\" (UID: \"ad5208d6-b244-47bf-80a9-bcae81587171\") " Mar 12 21:29:17.545455 master-0 kubenswrapper[31456]: I0312 21:29:17.545181 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad5208d6-b244-47bf-80a9-bcae81587171-logs" (OuterVolumeSpecName: "logs") pod "ad5208d6-b244-47bf-80a9-bcae81587171" (UID: "ad5208d6-b244-47bf-80a9-bcae81587171"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 21:29:17.560775 master-0 kubenswrapper[31456]: I0312 21:29:17.560716 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad5208d6-b244-47bf-80a9-bcae81587171-kube-api-access-ch52k" (OuterVolumeSpecName: "kube-api-access-ch52k") pod "ad5208d6-b244-47bf-80a9-bcae81587171" (UID: "ad5208d6-b244-47bf-80a9-bcae81587171"). InnerVolumeSpecName "kube-api-access-ch52k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:29:17.591165 master-0 kubenswrapper[31456]: I0312 21:29:17.591091 31456 scope.go:117] "RemoveContainer" containerID="0860176bced341fbccae873fbf0133903d89730f96f8beb580d4b92e19380b6f" Mar 12 21:29:17.595802 master-0 kubenswrapper[31456]: I0312 21:29:17.595624 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad5208d6-b244-47bf-80a9-bcae81587171-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad5208d6-b244-47bf-80a9-bcae81587171" (UID: "ad5208d6-b244-47bf-80a9-bcae81587171"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:29:17.616612 master-0 kubenswrapper[31456]: I0312 21:29:17.616421 31456 scope.go:117] "RemoveContainer" containerID="8d507272ec79ea2b9d9a7d8853419653b2e5b0a128186174b8e64076d276f308" Mar 12 21:29:17.617044 master-0 kubenswrapper[31456]: E0312 21:29:17.616967 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d507272ec79ea2b9d9a7d8853419653b2e5b0a128186174b8e64076d276f308\": container with ID starting with 8d507272ec79ea2b9d9a7d8853419653b2e5b0a128186174b8e64076d276f308 not found: ID does not exist" containerID="8d507272ec79ea2b9d9a7d8853419653b2e5b0a128186174b8e64076d276f308" Mar 12 21:29:17.617146 master-0 kubenswrapper[31456]: I0312 21:29:17.617040 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d507272ec79ea2b9d9a7d8853419653b2e5b0a128186174b8e64076d276f308"} err="failed to get container status \"8d507272ec79ea2b9d9a7d8853419653b2e5b0a128186174b8e64076d276f308\": rpc error: code = NotFound desc = could not find container \"8d507272ec79ea2b9d9a7d8853419653b2e5b0a128186174b8e64076d276f308\": container with ID starting with 8d507272ec79ea2b9d9a7d8853419653b2e5b0a128186174b8e64076d276f308 not found: ID does not exist" Mar 12 21:29:17.617146 master-0 kubenswrapper[31456]: I0312 21:29:17.617068 31456 scope.go:117] "RemoveContainer" containerID="0860176bced341fbccae873fbf0133903d89730f96f8beb580d4b92e19380b6f" Mar 12 21:29:17.617408 master-0 kubenswrapper[31456]: E0312 21:29:17.617384 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0860176bced341fbccae873fbf0133903d89730f96f8beb580d4b92e19380b6f\": container with ID starting with 0860176bced341fbccae873fbf0133903d89730f96f8beb580d4b92e19380b6f not found: ID does not exist" 
containerID="0860176bced341fbccae873fbf0133903d89730f96f8beb580d4b92e19380b6f" Mar 12 21:29:17.617504 master-0 kubenswrapper[31456]: I0312 21:29:17.617409 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0860176bced341fbccae873fbf0133903d89730f96f8beb580d4b92e19380b6f"} err="failed to get container status \"0860176bced341fbccae873fbf0133903d89730f96f8beb580d4b92e19380b6f\": rpc error: code = NotFound desc = could not find container \"0860176bced341fbccae873fbf0133903d89730f96f8beb580d4b92e19380b6f\": container with ID starting with 0860176bced341fbccae873fbf0133903d89730f96f8beb580d4b92e19380b6f not found: ID does not exist" Mar 12 21:29:17.617504 master-0 kubenswrapper[31456]: I0312 21:29:17.617423 31456 scope.go:117] "RemoveContainer" containerID="8d507272ec79ea2b9d9a7d8853419653b2e5b0a128186174b8e64076d276f308" Mar 12 21:29:17.617629 master-0 kubenswrapper[31456]: I0312 21:29:17.617607 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d507272ec79ea2b9d9a7d8853419653b2e5b0a128186174b8e64076d276f308"} err="failed to get container status \"8d507272ec79ea2b9d9a7d8853419653b2e5b0a128186174b8e64076d276f308\": rpc error: code = NotFound desc = could not find container \"8d507272ec79ea2b9d9a7d8853419653b2e5b0a128186174b8e64076d276f308\": container with ID starting with 8d507272ec79ea2b9d9a7d8853419653b2e5b0a128186174b8e64076d276f308 not found: ID does not exist" Mar 12 21:29:17.617629 master-0 kubenswrapper[31456]: I0312 21:29:17.617624 31456 scope.go:117] "RemoveContainer" containerID="0860176bced341fbccae873fbf0133903d89730f96f8beb580d4b92e19380b6f" Mar 12 21:29:17.617863 master-0 kubenswrapper[31456]: I0312 21:29:17.617793 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0860176bced341fbccae873fbf0133903d89730f96f8beb580d4b92e19380b6f"} err="failed to get container status 
\"0860176bced341fbccae873fbf0133903d89730f96f8beb580d4b92e19380b6f\": rpc error: code = NotFound desc = could not find container \"0860176bced341fbccae873fbf0133903d89730f96f8beb580d4b92e19380b6f\": container with ID starting with 0860176bced341fbccae873fbf0133903d89730f96f8beb580d4b92e19380b6f not found: ID does not exist" Mar 12 21:29:17.628011 master-0 kubenswrapper[31456]: I0312 21:29:17.625428 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad5208d6-b244-47bf-80a9-bcae81587171-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ad5208d6-b244-47bf-80a9-bcae81587171" (UID: "ad5208d6-b244-47bf-80a9-bcae81587171"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:29:17.628011 master-0 kubenswrapper[31456]: I0312 21:29:17.627966 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad5208d6-b244-47bf-80a9-bcae81587171-config-data" (OuterVolumeSpecName: "config-data") pod "ad5208d6-b244-47bf-80a9-bcae81587171" (UID: "ad5208d6-b244-47bf-80a9-bcae81587171"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:29:17.629080 master-0 kubenswrapper[31456]: I0312 21:29:17.629045 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad5208d6-b244-47bf-80a9-bcae81587171-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ad5208d6-b244-47bf-80a9-bcae81587171" (UID: "ad5208d6-b244-47bf-80a9-bcae81587171"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:29:17.642764 master-0 kubenswrapper[31456]: I0312 21:29:17.642428 31456 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad5208d6-b244-47bf-80a9-bcae81587171-logs\") on node \"master-0\" DevicePath \"\"" Mar 12 21:29:17.642764 master-0 kubenswrapper[31456]: I0312 21:29:17.642496 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad5208d6-b244-47bf-80a9-bcae81587171-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:29:17.642764 master-0 kubenswrapper[31456]: I0312 21:29:17.642579 31456 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad5208d6-b244-47bf-80a9-bcae81587171-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 21:29:17.642764 master-0 kubenswrapper[31456]: I0312 21:29:17.642595 31456 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad5208d6-b244-47bf-80a9-bcae81587171-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 12 21:29:17.642764 master-0 kubenswrapper[31456]: I0312 21:29:17.642607 31456 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad5208d6-b244-47bf-80a9-bcae81587171-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 12 21:29:17.642764 master-0 kubenswrapper[31456]: I0312 21:29:17.642620 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ch52k\" (UniqueName: \"kubernetes.io/projected/ad5208d6-b244-47bf-80a9-bcae81587171-kube-api-access-ch52k\") on node \"master-0\" DevicePath \"\"" Mar 12 21:29:17.845945 master-0 kubenswrapper[31456]: I0312 21:29:17.845875 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 12 21:29:17.863410 master-0 kubenswrapper[31456]: I0312 
21:29:17.863338 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 12 21:29:17.949942 master-0 kubenswrapper[31456]: I0312 21:29:17.949879 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 12 21:29:17.950855 master-0 kubenswrapper[31456]: E0312 21:29:17.950788 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad5208d6-b244-47bf-80a9-bcae81587171" containerName="nova-api-log" Mar 12 21:29:17.950941 master-0 kubenswrapper[31456]: I0312 21:29:17.950930 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad5208d6-b244-47bf-80a9-bcae81587171" containerName="nova-api-log" Mar 12 21:29:17.951024 master-0 kubenswrapper[31456]: E0312 21:29:17.951014 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad5208d6-b244-47bf-80a9-bcae81587171" containerName="nova-api-api" Mar 12 21:29:17.951085 master-0 kubenswrapper[31456]: I0312 21:29:17.951075 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad5208d6-b244-47bf-80a9-bcae81587171" containerName="nova-api-api" Mar 12 21:29:17.951155 master-0 kubenswrapper[31456]: E0312 21:29:17.951145 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d93e5d01-b4a5-4612-bded-2615337961dc" containerName="dnsmasq-dns" Mar 12 21:29:17.951212 master-0 kubenswrapper[31456]: I0312 21:29:17.951203 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="d93e5d01-b4a5-4612-bded-2615337961dc" containerName="dnsmasq-dns" Mar 12 21:29:17.951274 master-0 kubenswrapper[31456]: E0312 21:29:17.951265 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdcffde0-88dd-46b8-ab9d-224e83dd4a08" containerName="nova-manage" Mar 12 21:29:17.951327 master-0 kubenswrapper[31456]: I0312 21:29:17.951318 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdcffde0-88dd-46b8-ab9d-224e83dd4a08" containerName="nova-manage" Mar 12 21:29:17.951389 master-0 kubenswrapper[31456]: E0312 
21:29:17.951380 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e015d284-5458-4e15-aa69-5a3dcc87352c" containerName="nova-manage" Mar 12 21:29:17.951449 master-0 kubenswrapper[31456]: I0312 21:29:17.951440 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="e015d284-5458-4e15-aa69-5a3dcc87352c" containerName="nova-manage" Mar 12 21:29:17.951657 master-0 kubenswrapper[31456]: E0312 21:29:17.951646 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d93e5d01-b4a5-4612-bded-2615337961dc" containerName="init" Mar 12 21:29:17.951712 master-0 kubenswrapper[31456]: I0312 21:29:17.951703 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="d93e5d01-b4a5-4612-bded-2615337961dc" containerName="init" Mar 12 21:29:17.952056 master-0 kubenswrapper[31456]: I0312 21:29:17.952042 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdcffde0-88dd-46b8-ab9d-224e83dd4a08" containerName="nova-manage" Mar 12 21:29:17.952139 master-0 kubenswrapper[31456]: I0312 21:29:17.952129 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad5208d6-b244-47bf-80a9-bcae81587171" containerName="nova-api-log" Mar 12 21:29:17.952287 master-0 kubenswrapper[31456]: I0312 21:29:17.952256 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="d93e5d01-b4a5-4612-bded-2615337961dc" containerName="dnsmasq-dns" Mar 12 21:29:17.952481 master-0 kubenswrapper[31456]: I0312 21:29:17.952424 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad5208d6-b244-47bf-80a9-bcae81587171" containerName="nova-api-api" Mar 12 21:29:17.952581 master-0 kubenswrapper[31456]: I0312 21:29:17.952569 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="e015d284-5458-4e15-aa69-5a3dcc87352c" containerName="nova-manage" Mar 12 21:29:17.954559 master-0 kubenswrapper[31456]: I0312 21:29:17.954542 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 21:29:17.957749 master-0 kubenswrapper[31456]: I0312 21:29:17.957708 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 12 21:29:17.958363 master-0 kubenswrapper[31456]: I0312 21:29:17.958326 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 12 21:29:17.958547 master-0 kubenswrapper[31456]: I0312 21:29:17.958529 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 12 21:29:17.965169 master-0 kubenswrapper[31456]: I0312 21:29:17.965103 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 21:29:18.052913 master-0 kubenswrapper[31456]: I0312 21:29:18.051798 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288\") " pod="openstack/nova-api-0" Mar 12 21:29:18.052913 master-0 kubenswrapper[31456]: I0312 21:29:18.051922 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288-logs\") pod \"nova-api-0\" (UID: \"0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288\") " pod="openstack/nova-api-0" Mar 12 21:29:18.052913 master-0 kubenswrapper[31456]: I0312 21:29:18.052005 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288-config-data\") pod \"nova-api-0\" (UID: \"0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288\") " pod="openstack/nova-api-0" Mar 12 21:29:18.052913 master-0 kubenswrapper[31456]: I0312 21:29:18.052127 31456 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288\") " pod="openstack/nova-api-0" Mar 12 21:29:18.052913 master-0 kubenswrapper[31456]: I0312 21:29:18.052202 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288-public-tls-certs\") pod \"nova-api-0\" (UID: \"0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288\") " pod="openstack/nova-api-0" Mar 12 21:29:18.052913 master-0 kubenswrapper[31456]: I0312 21:29:18.052230 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8kvq\" (UniqueName: \"kubernetes.io/projected/0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288-kube-api-access-f8kvq\") pod \"nova-api-0\" (UID: \"0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288\") " pod="openstack/nova-api-0" Mar 12 21:29:18.153834 master-0 kubenswrapper[31456]: I0312 21:29:18.153768 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288-config-data\") pod \"nova-api-0\" (UID: \"0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288\") " pod="openstack/nova-api-0" Mar 12 21:29:18.154039 master-0 kubenswrapper[31456]: I0312 21:29:18.153901 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288\") " pod="openstack/nova-api-0" Mar 12 21:29:18.154039 master-0 kubenswrapper[31456]: I0312 21:29:18.153976 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288-public-tls-certs\") pod \"nova-api-0\" (UID: \"0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288\") " pod="openstack/nova-api-0" Mar 12 21:29:18.154039 master-0 kubenswrapper[31456]: I0312 21:29:18.154013 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8kvq\" (UniqueName: \"kubernetes.io/projected/0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288-kube-api-access-f8kvq\") pod \"nova-api-0\" (UID: \"0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288\") " pod="openstack/nova-api-0" Mar 12 21:29:18.154155 master-0 kubenswrapper[31456]: I0312 21:29:18.154062 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288\") " pod="openstack/nova-api-0" Mar 12 21:29:18.154155 master-0 kubenswrapper[31456]: I0312 21:29:18.154128 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288-logs\") pod \"nova-api-0\" (UID: \"0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288\") " pod="openstack/nova-api-0" Mar 12 21:29:18.154652 master-0 kubenswrapper[31456]: I0312 21:29:18.154627 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288-logs\") pod \"nova-api-0\" (UID: \"0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288\") " pod="openstack/nova-api-0" Mar 12 21:29:18.162481 master-0 kubenswrapper[31456]: I0312 21:29:18.157901 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288-config-data\") pod \"nova-api-0\" (UID: \"0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288\") " pod="openstack/nova-api-0" Mar 12 21:29:18.162481 master-0 
kubenswrapper[31456]: I0312 21:29:18.158060 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288\") " pod="openstack/nova-api-0" Mar 12 21:29:18.171663 master-0 kubenswrapper[31456]: I0312 21:29:18.171623 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288\") " pod="openstack/nova-api-0" Mar 12 21:29:18.171752 master-0 kubenswrapper[31456]: I0312 21:29:18.171711 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288-public-tls-certs\") pod \"nova-api-0\" (UID: \"0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288\") " pod="openstack/nova-api-0" Mar 12 21:29:18.177306 master-0 kubenswrapper[31456]: I0312 21:29:18.177274 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8kvq\" (UniqueName: \"kubernetes.io/projected/0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288-kube-api-access-f8kvq\") pod \"nova-api-0\" (UID: \"0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288\") " pod="openstack/nova-api-0" Mar 12 21:29:18.198963 master-0 kubenswrapper[31456]: E0312 21:29:18.198466 31456 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3bb29a9cc80ef4a248b699d24ed5eea0f6b4679e718ea2f4f1caa9309dcca1c0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 12 21:29:18.201001 master-0 kubenswrapper[31456]: E0312 21:29:18.200115 31456 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = 
Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3bb29a9cc80ef4a248b699d24ed5eea0f6b4679e718ea2f4f1caa9309dcca1c0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 12 21:29:18.202890 master-0 kubenswrapper[31456]: E0312 21:29:18.201331 31456 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3bb29a9cc80ef4a248b699d24ed5eea0f6b4679e718ea2f4f1caa9309dcca1c0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 12 21:29:18.202890 master-0 kubenswrapper[31456]: E0312 21:29:18.201376 31456 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="76f91a0f-2cf3-4800-bd3a-5d2de1d33b26" containerName="nova-scheduler-scheduler" Mar 12 21:29:18.294735 master-0 kubenswrapper[31456]: I0312 21:29:18.294669 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 21:29:18.840749 master-0 kubenswrapper[31456]: W0312 21:29:18.840672 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ef54fd9_2e96_4d0a_a32b_1ffbd4fa0288.slice/crio-e32636bf0aa1eb104387e400c91808180bce40327d833cba3d8d7274f6c9feaa WatchSource:0}: Error finding container e32636bf0aa1eb104387e400c91808180bce40327d833cba3d8d7274f6c9feaa: Status 404 returned error can't find the container with id e32636bf0aa1eb104387e400c91808180bce40327d833cba3d8d7274f6c9feaa Mar 12 21:29:18.847650 master-0 kubenswrapper[31456]: I0312 21:29:18.847594 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 21:29:19.204700 master-0 kubenswrapper[31456]: I0312 21:29:19.204647 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad5208d6-b244-47bf-80a9-bcae81587171" path="/var/lib/kubelet/pods/ad5208d6-b244-47bf-80a9-bcae81587171/volumes" Mar 12 21:29:19.560839 master-0 kubenswrapper[31456]: I0312 21:29:19.560663 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288","Type":"ContainerStarted","Data":"4f69c9204e04c7fd26fed85e7013411e12d2e9f6b2937f366fa30dd85e2a9654"} Mar 12 21:29:19.560839 master-0 kubenswrapper[31456]: I0312 21:29:19.560723 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288","Type":"ContainerStarted","Data":"140ac002582c30d2b599beb8e41f808dcdb9b8bc5bf99fed9b55fe148af69503"} Mar 12 21:29:19.560839 master-0 kubenswrapper[31456]: I0312 21:29:19.560734 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288","Type":"ContainerStarted","Data":"e32636bf0aa1eb104387e400c91808180bce40327d833cba3d8d7274f6c9feaa"} Mar 12 21:29:19.586423 master-0 
kubenswrapper[31456]: I0312 21:29:19.586311 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.586286286 podStartE2EDuration="2.586286286s" podCreationTimestamp="2026-03-12 21:29:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:29:19.5835684 +0000 UTC m=+1220.658173748" watchObservedRunningTime="2026-03-12 21:29:19.586286286 +0000 UTC m=+1220.660891614" Mar 12 21:29:19.913699 master-0 kubenswrapper[31456]: I0312 21:29:19.913626 31456 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="d5d71af3-d4c9-4246-b9c2-276fe8433018" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.7:8775/\": read tcp 10.128.0.2:39222->10.128.1.7:8775: read: connection reset by peer" Mar 12 21:29:19.914386 master-0 kubenswrapper[31456]: I0312 21:29:19.913628 31456 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="d5d71af3-d4c9-4246-b9c2-276fe8433018" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.7:8775/\": read tcp 10.128.0.2:39226->10.128.1.7:8775: read: connection reset by peer" Mar 12 21:29:20.457637 master-0 kubenswrapper[31456]: I0312 21:29:20.457590 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 21:29:20.603982 master-0 kubenswrapper[31456]: I0312 21:29:20.603919 31456 generic.go:334] "Generic (PLEG): container finished" podID="d5d71af3-d4c9-4246-b9c2-276fe8433018" containerID="4d088f32c8afa009a0e612016e17aa0a2ece7cd0f63535a281b03bedea18b47a" exitCode=0 Mar 12 21:29:20.604198 master-0 kubenswrapper[31456]: I0312 21:29:20.604012 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 21:29:20.604198 master-0 kubenswrapper[31456]: I0312 21:29:20.604033 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5d71af3-d4c9-4246-b9c2-276fe8433018","Type":"ContainerDied","Data":"4d088f32c8afa009a0e612016e17aa0a2ece7cd0f63535a281b03bedea18b47a"} Mar 12 21:29:20.604198 master-0 kubenswrapper[31456]: I0312 21:29:20.604117 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d5d71af3-d4c9-4246-b9c2-276fe8433018","Type":"ContainerDied","Data":"b4575d1907a1c9da4e49132183b5a048d162a84087cd3067e912f36f682c9b56"} Mar 12 21:29:20.604198 master-0 kubenswrapper[31456]: I0312 21:29:20.604140 31456 scope.go:117] "RemoveContainer" containerID="4d088f32c8afa009a0e612016e17aa0a2ece7cd0f63535a281b03bedea18b47a" Mar 12 21:29:20.624214 master-0 kubenswrapper[31456]: I0312 21:29:20.624094 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5d71af3-d4c9-4246-b9c2-276fe8433018-nova-metadata-tls-certs\") pod \"d5d71af3-d4c9-4246-b9c2-276fe8433018\" (UID: \"d5d71af3-d4c9-4246-b9c2-276fe8433018\") " Mar 12 21:29:20.624214 master-0 kubenswrapper[31456]: I0312 21:29:20.624156 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfcj6\" (UniqueName: \"kubernetes.io/projected/d5d71af3-d4c9-4246-b9c2-276fe8433018-kube-api-access-cfcj6\") pod \"d5d71af3-d4c9-4246-b9c2-276fe8433018\" (UID: \"d5d71af3-d4c9-4246-b9c2-276fe8433018\") " Mar 12 21:29:20.624421 master-0 kubenswrapper[31456]: I0312 21:29:20.624297 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5d71af3-d4c9-4246-b9c2-276fe8433018-combined-ca-bundle\") pod \"d5d71af3-d4c9-4246-b9c2-276fe8433018\" (UID: 
\"d5d71af3-d4c9-4246-b9c2-276fe8433018\") " Mar 12 21:29:20.624421 master-0 kubenswrapper[31456]: I0312 21:29:20.624395 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5d71af3-d4c9-4246-b9c2-276fe8433018-config-data\") pod \"d5d71af3-d4c9-4246-b9c2-276fe8433018\" (UID: \"d5d71af3-d4c9-4246-b9c2-276fe8433018\") " Mar 12 21:29:20.624506 master-0 kubenswrapper[31456]: I0312 21:29:20.624452 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5d71af3-d4c9-4246-b9c2-276fe8433018-logs\") pod \"d5d71af3-d4c9-4246-b9c2-276fe8433018\" (UID: \"d5d71af3-d4c9-4246-b9c2-276fe8433018\") " Mar 12 21:29:20.625118 master-0 kubenswrapper[31456]: I0312 21:29:20.625068 31456 scope.go:117] "RemoveContainer" containerID="d37165bf6ef79ad5497be92e3b4ca2cea1c31caac560ee4b9b9e4870c8894797" Mar 12 21:29:20.625341 master-0 kubenswrapper[31456]: I0312 21:29:20.625311 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5d71af3-d4c9-4246-b9c2-276fe8433018-logs" (OuterVolumeSpecName: "logs") pod "d5d71af3-d4c9-4246-b9c2-276fe8433018" (UID: "d5d71af3-d4c9-4246-b9c2-276fe8433018"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 21:29:20.626968 master-0 kubenswrapper[31456]: I0312 21:29:20.626916 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5d71af3-d4c9-4246-b9c2-276fe8433018-kube-api-access-cfcj6" (OuterVolumeSpecName: "kube-api-access-cfcj6") pod "d5d71af3-d4c9-4246-b9c2-276fe8433018" (UID: "d5d71af3-d4c9-4246-b9c2-276fe8433018"). InnerVolumeSpecName "kube-api-access-cfcj6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:29:20.653298 master-0 kubenswrapper[31456]: I0312 21:29:20.653234 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5d71af3-d4c9-4246-b9c2-276fe8433018-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5d71af3-d4c9-4246-b9c2-276fe8433018" (UID: "d5d71af3-d4c9-4246-b9c2-276fe8433018"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:29:20.663793 master-0 kubenswrapper[31456]: I0312 21:29:20.663751 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5d71af3-d4c9-4246-b9c2-276fe8433018-config-data" (OuterVolumeSpecName: "config-data") pod "d5d71af3-d4c9-4246-b9c2-276fe8433018" (UID: "d5d71af3-d4c9-4246-b9c2-276fe8433018"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:29:20.674021 master-0 kubenswrapper[31456]: I0312 21:29:20.673655 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5d71af3-d4c9-4246-b9c2-276fe8433018-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "d5d71af3-d4c9-4246-b9c2-276fe8433018" (UID: "d5d71af3-d4c9-4246-b9c2-276fe8433018"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:29:20.729294 master-0 kubenswrapper[31456]: I0312 21:29:20.729230 31456 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5d71af3-d4c9-4246-b9c2-276fe8433018-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 21:29:20.729696 master-0 kubenswrapper[31456]: I0312 21:29:20.729616 31456 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5d71af3-d4c9-4246-b9c2-276fe8433018-logs\") on node \"master-0\" DevicePath \"\"" Mar 12 21:29:20.729696 master-0 kubenswrapper[31456]: I0312 21:29:20.729635 31456 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5d71af3-d4c9-4246-b9c2-276fe8433018-nova-metadata-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 12 21:29:20.729696 master-0 kubenswrapper[31456]: I0312 21:29:20.729647 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfcj6\" (UniqueName: \"kubernetes.io/projected/d5d71af3-d4c9-4246-b9c2-276fe8433018-kube-api-access-cfcj6\") on node \"master-0\" DevicePath \"\"" Mar 12 21:29:20.729696 master-0 kubenswrapper[31456]: I0312 21:29:20.729657 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5d71af3-d4c9-4246-b9c2-276fe8433018-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:29:20.748648 master-0 kubenswrapper[31456]: I0312 21:29:20.748603 31456 scope.go:117] "RemoveContainer" containerID="4d088f32c8afa009a0e612016e17aa0a2ece7cd0f63535a281b03bedea18b47a" Mar 12 21:29:20.749075 master-0 kubenswrapper[31456]: E0312 21:29:20.749044 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d088f32c8afa009a0e612016e17aa0a2ece7cd0f63535a281b03bedea18b47a\": container with ID starting with 
4d088f32c8afa009a0e612016e17aa0a2ece7cd0f63535a281b03bedea18b47a not found: ID does not exist" containerID="4d088f32c8afa009a0e612016e17aa0a2ece7cd0f63535a281b03bedea18b47a" Mar 12 21:29:20.749146 master-0 kubenswrapper[31456]: I0312 21:29:20.749086 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d088f32c8afa009a0e612016e17aa0a2ece7cd0f63535a281b03bedea18b47a"} err="failed to get container status \"4d088f32c8afa009a0e612016e17aa0a2ece7cd0f63535a281b03bedea18b47a\": rpc error: code = NotFound desc = could not find container \"4d088f32c8afa009a0e612016e17aa0a2ece7cd0f63535a281b03bedea18b47a\": container with ID starting with 4d088f32c8afa009a0e612016e17aa0a2ece7cd0f63535a281b03bedea18b47a not found: ID does not exist" Mar 12 21:29:20.749146 master-0 kubenswrapper[31456]: I0312 21:29:20.749109 31456 scope.go:117] "RemoveContainer" containerID="d37165bf6ef79ad5497be92e3b4ca2cea1c31caac560ee4b9b9e4870c8894797" Mar 12 21:29:20.749390 master-0 kubenswrapper[31456]: E0312 21:29:20.749358 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d37165bf6ef79ad5497be92e3b4ca2cea1c31caac560ee4b9b9e4870c8894797\": container with ID starting with d37165bf6ef79ad5497be92e3b4ca2cea1c31caac560ee4b9b9e4870c8894797 not found: ID does not exist" containerID="d37165bf6ef79ad5497be92e3b4ca2cea1c31caac560ee4b9b9e4870c8894797" Mar 12 21:29:20.749439 master-0 kubenswrapper[31456]: I0312 21:29:20.749389 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d37165bf6ef79ad5497be92e3b4ca2cea1c31caac560ee4b9b9e4870c8894797"} err="failed to get container status \"d37165bf6ef79ad5497be92e3b4ca2cea1c31caac560ee4b9b9e4870c8894797\": rpc error: code = NotFound desc = could not find container \"d37165bf6ef79ad5497be92e3b4ca2cea1c31caac560ee4b9b9e4870c8894797\": container with ID starting with 
d37165bf6ef79ad5497be92e3b4ca2cea1c31caac560ee4b9b9e4870c8894797 not found: ID does not exist" Mar 12 21:29:20.947512 master-0 kubenswrapper[31456]: I0312 21:29:20.947135 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 21:29:20.958551 master-0 kubenswrapper[31456]: I0312 21:29:20.958505 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 21:29:20.982486 master-0 kubenswrapper[31456]: I0312 21:29:20.982432 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 12 21:29:20.983606 master-0 kubenswrapper[31456]: E0312 21:29:20.983585 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5d71af3-d4c9-4246-b9c2-276fe8433018" containerName="nova-metadata-metadata" Mar 12 21:29:20.983714 master-0 kubenswrapper[31456]: I0312 21:29:20.983683 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5d71af3-d4c9-4246-b9c2-276fe8433018" containerName="nova-metadata-metadata" Mar 12 21:29:20.983834 master-0 kubenswrapper[31456]: E0312 21:29:20.983820 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5d71af3-d4c9-4246-b9c2-276fe8433018" containerName="nova-metadata-log" Mar 12 21:29:20.983899 master-0 kubenswrapper[31456]: I0312 21:29:20.983889 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5d71af3-d4c9-4246-b9c2-276fe8433018" containerName="nova-metadata-log" Mar 12 21:29:20.984203 master-0 kubenswrapper[31456]: I0312 21:29:20.984188 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5d71af3-d4c9-4246-b9c2-276fe8433018" containerName="nova-metadata-metadata" Mar 12 21:29:20.984318 master-0 kubenswrapper[31456]: I0312 21:29:20.984305 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5d71af3-d4c9-4246-b9c2-276fe8433018" containerName="nova-metadata-log" Mar 12 21:29:20.985529 master-0 kubenswrapper[31456]: I0312 21:29:20.985510 31456 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 21:29:20.988596 master-0 kubenswrapper[31456]: I0312 21:29:20.987523 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 12 21:29:20.988596 master-0 kubenswrapper[31456]: I0312 21:29:20.987524 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 12 21:29:21.013920 master-0 kubenswrapper[31456]: I0312 21:29:21.013841 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 21:29:21.040349 master-0 kubenswrapper[31456]: I0312 21:29:21.040305 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e0223e0-45d4-477e-9410-5b8c41acaf4e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0e0223e0-45d4-477e-9410-5b8c41acaf4e\") " pod="openstack/nova-metadata-0" Mar 12 21:29:21.040776 master-0 kubenswrapper[31456]: I0312 21:29:21.040721 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e0223e0-45d4-477e-9410-5b8c41acaf4e-logs\") pod \"nova-metadata-0\" (UID: \"0e0223e0-45d4-477e-9410-5b8c41acaf4e\") " pod="openstack/nova-metadata-0" Mar 12 21:29:21.040903 master-0 kubenswrapper[31456]: I0312 21:29:21.040879 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e0223e0-45d4-477e-9410-5b8c41acaf4e-config-data\") pod \"nova-metadata-0\" (UID: \"0e0223e0-45d4-477e-9410-5b8c41acaf4e\") " pod="openstack/nova-metadata-0" Mar 12 21:29:21.040959 master-0 kubenswrapper[31456]: I0312 21:29:21.040943 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cgz6\" 
(UniqueName: \"kubernetes.io/projected/0e0223e0-45d4-477e-9410-5b8c41acaf4e-kube-api-access-4cgz6\") pod \"nova-metadata-0\" (UID: \"0e0223e0-45d4-477e-9410-5b8c41acaf4e\") " pod="openstack/nova-metadata-0" Mar 12 21:29:21.041145 master-0 kubenswrapper[31456]: I0312 21:29:21.041127 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e0223e0-45d4-477e-9410-5b8c41acaf4e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0e0223e0-45d4-477e-9410-5b8c41acaf4e\") " pod="openstack/nova-metadata-0" Mar 12 21:29:21.143883 master-0 kubenswrapper[31456]: I0312 21:29:21.143775 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cgz6\" (UniqueName: \"kubernetes.io/projected/0e0223e0-45d4-477e-9410-5b8c41acaf4e-kube-api-access-4cgz6\") pod \"nova-metadata-0\" (UID: \"0e0223e0-45d4-477e-9410-5b8c41acaf4e\") " pod="openstack/nova-metadata-0" Mar 12 21:29:21.144101 master-0 kubenswrapper[31456]: I0312 21:29:21.143998 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e0223e0-45d4-477e-9410-5b8c41acaf4e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0e0223e0-45d4-477e-9410-5b8c41acaf4e\") " pod="openstack/nova-metadata-0" Mar 12 21:29:21.144287 master-0 kubenswrapper[31456]: I0312 21:29:21.144250 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e0223e0-45d4-477e-9410-5b8c41acaf4e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0e0223e0-45d4-477e-9410-5b8c41acaf4e\") " pod="openstack/nova-metadata-0" Mar 12 21:29:21.144550 master-0 kubenswrapper[31456]: I0312 21:29:21.144511 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/0e0223e0-45d4-477e-9410-5b8c41acaf4e-logs\") pod \"nova-metadata-0\" (UID: \"0e0223e0-45d4-477e-9410-5b8c41acaf4e\") " pod="openstack/nova-metadata-0" Mar 12 21:29:21.144590 master-0 kubenswrapper[31456]: I0312 21:29:21.144572 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e0223e0-45d4-477e-9410-5b8c41acaf4e-config-data\") pod \"nova-metadata-0\" (UID: \"0e0223e0-45d4-477e-9410-5b8c41acaf4e\") " pod="openstack/nova-metadata-0" Mar 12 21:29:21.145221 master-0 kubenswrapper[31456]: I0312 21:29:21.145180 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e0223e0-45d4-477e-9410-5b8c41acaf4e-logs\") pod \"nova-metadata-0\" (UID: \"0e0223e0-45d4-477e-9410-5b8c41acaf4e\") " pod="openstack/nova-metadata-0" Mar 12 21:29:21.148955 master-0 kubenswrapper[31456]: I0312 21:29:21.147947 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e0223e0-45d4-477e-9410-5b8c41acaf4e-config-data\") pod \"nova-metadata-0\" (UID: \"0e0223e0-45d4-477e-9410-5b8c41acaf4e\") " pod="openstack/nova-metadata-0" Mar 12 21:29:21.148955 master-0 kubenswrapper[31456]: I0312 21:29:21.147982 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e0223e0-45d4-477e-9410-5b8c41acaf4e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0e0223e0-45d4-477e-9410-5b8c41acaf4e\") " pod="openstack/nova-metadata-0" Mar 12 21:29:21.149233 master-0 kubenswrapper[31456]: I0312 21:29:21.148971 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e0223e0-45d4-477e-9410-5b8c41acaf4e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0e0223e0-45d4-477e-9410-5b8c41acaf4e\") " 
pod="openstack/nova-metadata-0" Mar 12 21:29:21.163205 master-0 kubenswrapper[31456]: I0312 21:29:21.163147 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cgz6\" (UniqueName: \"kubernetes.io/projected/0e0223e0-45d4-477e-9410-5b8c41acaf4e-kube-api-access-4cgz6\") pod \"nova-metadata-0\" (UID: \"0e0223e0-45d4-477e-9410-5b8c41acaf4e\") " pod="openstack/nova-metadata-0" Mar 12 21:29:21.182973 master-0 kubenswrapper[31456]: I0312 21:29:21.182850 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5d71af3-d4c9-4246-b9c2-276fe8433018" path="/var/lib/kubelet/pods/d5d71af3-d4c9-4246-b9c2-276fe8433018/volumes" Mar 12 21:29:21.306944 master-0 kubenswrapper[31456]: I0312 21:29:21.302613 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 21:29:21.824832 master-0 kubenswrapper[31456]: W0312 21:29:21.824733 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e0223e0_45d4_477e_9410_5b8c41acaf4e.slice/crio-7e5edc82435141964d162fc9ecc28e3b5f20c977589cc42c7f67f4c68bf2cd21 WatchSource:0}: Error finding container 7e5edc82435141964d162fc9ecc28e3b5f20c977589cc42c7f67f4c68bf2cd21: Status 404 returned error can't find the container with id 7e5edc82435141964d162fc9ecc28e3b5f20c977589cc42c7f67f4c68bf2cd21 Mar 12 21:29:21.826772 master-0 kubenswrapper[31456]: I0312 21:29:21.826710 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 21:29:22.350375 master-0 kubenswrapper[31456]: I0312 21:29:22.350331 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 21:29:22.496186 master-0 kubenswrapper[31456]: I0312 21:29:22.496107 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76f91a0f-2cf3-4800-bd3a-5d2de1d33b26-combined-ca-bundle\") pod \"76f91a0f-2cf3-4800-bd3a-5d2de1d33b26\" (UID: \"76f91a0f-2cf3-4800-bd3a-5d2de1d33b26\") " Mar 12 21:29:22.496586 master-0 kubenswrapper[31456]: I0312 21:29:22.496341 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg4jc\" (UniqueName: \"kubernetes.io/projected/76f91a0f-2cf3-4800-bd3a-5d2de1d33b26-kube-api-access-xg4jc\") pod \"76f91a0f-2cf3-4800-bd3a-5d2de1d33b26\" (UID: \"76f91a0f-2cf3-4800-bd3a-5d2de1d33b26\") " Mar 12 21:29:22.496586 master-0 kubenswrapper[31456]: I0312 21:29:22.496472 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76f91a0f-2cf3-4800-bd3a-5d2de1d33b26-config-data\") pod \"76f91a0f-2cf3-4800-bd3a-5d2de1d33b26\" (UID: \"76f91a0f-2cf3-4800-bd3a-5d2de1d33b26\") " Mar 12 21:29:22.500001 master-0 kubenswrapper[31456]: I0312 21:29:22.499940 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76f91a0f-2cf3-4800-bd3a-5d2de1d33b26-kube-api-access-xg4jc" (OuterVolumeSpecName: "kube-api-access-xg4jc") pod "76f91a0f-2cf3-4800-bd3a-5d2de1d33b26" (UID: "76f91a0f-2cf3-4800-bd3a-5d2de1d33b26"). InnerVolumeSpecName "kube-api-access-xg4jc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:29:22.528462 master-0 kubenswrapper[31456]: I0312 21:29:22.528388 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76f91a0f-2cf3-4800-bd3a-5d2de1d33b26-config-data" (OuterVolumeSpecName: "config-data") pod "76f91a0f-2cf3-4800-bd3a-5d2de1d33b26" (UID: "76f91a0f-2cf3-4800-bd3a-5d2de1d33b26"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:29:22.530678 master-0 kubenswrapper[31456]: I0312 21:29:22.530635 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76f91a0f-2cf3-4800-bd3a-5d2de1d33b26-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76f91a0f-2cf3-4800-bd3a-5d2de1d33b26" (UID: "76f91a0f-2cf3-4800-bd3a-5d2de1d33b26"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:29:22.600305 master-0 kubenswrapper[31456]: I0312 21:29:22.599446 31456 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76f91a0f-2cf3-4800-bd3a-5d2de1d33b26-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 12 21:29:22.600305 master-0 kubenswrapper[31456]: I0312 21:29:22.599512 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg4jc\" (UniqueName: \"kubernetes.io/projected/76f91a0f-2cf3-4800-bd3a-5d2de1d33b26-kube-api-access-xg4jc\") on node \"master-0\" DevicePath \"\"" Mar 12 21:29:22.600305 master-0 kubenswrapper[31456]: I0312 21:29:22.599534 31456 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76f91a0f-2cf3-4800-bd3a-5d2de1d33b26-config-data\") on node \"master-0\" DevicePath \"\"" Mar 12 21:29:22.650227 master-0 kubenswrapper[31456]: I0312 21:29:22.650164 31456 generic.go:334] "Generic (PLEG): container finished" 
podID="76f91a0f-2cf3-4800-bd3a-5d2de1d33b26" containerID="3bb29a9cc80ef4a248b699d24ed5eea0f6b4679e718ea2f4f1caa9309dcca1c0" exitCode=0 Mar 12 21:29:22.650419 master-0 kubenswrapper[31456]: I0312 21:29:22.650256 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 21:29:22.650419 master-0 kubenswrapper[31456]: I0312 21:29:22.650237 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"76f91a0f-2cf3-4800-bd3a-5d2de1d33b26","Type":"ContainerDied","Data":"3bb29a9cc80ef4a248b699d24ed5eea0f6b4679e718ea2f4f1caa9309dcca1c0"} Mar 12 21:29:22.650419 master-0 kubenswrapper[31456]: I0312 21:29:22.650325 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"76f91a0f-2cf3-4800-bd3a-5d2de1d33b26","Type":"ContainerDied","Data":"fdb459a16d4771fde289f7d0f13432d428b0399cb81d0dfc55bad7ccd2483905"} Mar 12 21:29:22.650419 master-0 kubenswrapper[31456]: I0312 21:29:22.650344 31456 scope.go:117] "RemoveContainer" containerID="3bb29a9cc80ef4a248b699d24ed5eea0f6b4679e718ea2f4f1caa9309dcca1c0" Mar 12 21:29:22.654738 master-0 kubenswrapper[31456]: I0312 21:29:22.654682 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0e0223e0-45d4-477e-9410-5b8c41acaf4e","Type":"ContainerStarted","Data":"21de08a3f3c4aea1822f33df6b3af1c2bbc44bd2d60765a6cd70f64aed8cb797"} Mar 12 21:29:22.654738 master-0 kubenswrapper[31456]: I0312 21:29:22.654735 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0e0223e0-45d4-477e-9410-5b8c41acaf4e","Type":"ContainerStarted","Data":"3ccd6c3e1129c763e2fea233552934e33156e8f4ea155d906d69f645c23b130e"} Mar 12 21:29:22.654896 master-0 kubenswrapper[31456]: I0312 21:29:22.654746 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"0e0223e0-45d4-477e-9410-5b8c41acaf4e","Type":"ContainerStarted","Data":"7e5edc82435141964d162fc9ecc28e3b5f20c977589cc42c7f67f4c68bf2cd21"} Mar 12 21:29:22.686879 master-0 kubenswrapper[31456]: I0312 21:29:22.686838 31456 scope.go:117] "RemoveContainer" containerID="3bb29a9cc80ef4a248b699d24ed5eea0f6b4679e718ea2f4f1caa9309dcca1c0" Mar 12 21:29:22.689791 master-0 kubenswrapper[31456]: E0312 21:29:22.687989 31456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bb29a9cc80ef4a248b699d24ed5eea0f6b4679e718ea2f4f1caa9309dcca1c0\": container with ID starting with 3bb29a9cc80ef4a248b699d24ed5eea0f6b4679e718ea2f4f1caa9309dcca1c0 not found: ID does not exist" containerID="3bb29a9cc80ef4a248b699d24ed5eea0f6b4679e718ea2f4f1caa9309dcca1c0" Mar 12 21:29:22.689791 master-0 kubenswrapper[31456]: I0312 21:29:22.688051 31456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bb29a9cc80ef4a248b699d24ed5eea0f6b4679e718ea2f4f1caa9309dcca1c0"} err="failed to get container status \"3bb29a9cc80ef4a248b699d24ed5eea0f6b4679e718ea2f4f1caa9309dcca1c0\": rpc error: code = NotFound desc = could not find container \"3bb29a9cc80ef4a248b699d24ed5eea0f6b4679e718ea2f4f1caa9309dcca1c0\": container with ID starting with 3bb29a9cc80ef4a248b699d24ed5eea0f6b4679e718ea2f4f1caa9309dcca1c0 not found: ID does not exist" Mar 12 21:29:22.693552 master-0 kubenswrapper[31456]: I0312 21:29:22.693491 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.693472468 podStartE2EDuration="2.693472468s" podCreationTimestamp="2026-03-12 21:29:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:29:22.684753198 +0000 UTC m=+1223.759358526" watchObservedRunningTime="2026-03-12 21:29:22.693472468 +0000 UTC 
m=+1223.768077796" Mar 12 21:29:22.717901 master-0 kubenswrapper[31456]: I0312 21:29:22.717826 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 21:29:22.752982 master-0 kubenswrapper[31456]: I0312 21:29:22.752922 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 21:29:22.764648 master-0 kubenswrapper[31456]: I0312 21:29:22.764584 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 21:29:22.765519 master-0 kubenswrapper[31456]: E0312 21:29:22.765500 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76f91a0f-2cf3-4800-bd3a-5d2de1d33b26" containerName="nova-scheduler-scheduler" Mar 12 21:29:22.765608 master-0 kubenswrapper[31456]: I0312 21:29:22.765596 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="76f91a0f-2cf3-4800-bd3a-5d2de1d33b26" containerName="nova-scheduler-scheduler" Mar 12 21:29:22.765950 master-0 kubenswrapper[31456]: I0312 21:29:22.765935 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="76f91a0f-2cf3-4800-bd3a-5d2de1d33b26" containerName="nova-scheduler-scheduler" Mar 12 21:29:22.766800 master-0 kubenswrapper[31456]: I0312 21:29:22.766778 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 21:29:22.768830 master-0 kubenswrapper[31456]: I0312 21:29:22.768779 31456 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 12 21:29:22.784836 master-0 kubenswrapper[31456]: I0312 21:29:22.784314 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 21:29:22.905695 master-0 kubenswrapper[31456]: I0312 21:29:22.905558 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4226dc78-2cc7-4c7c-88de-da93acad1688-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4226dc78-2cc7-4c7c-88de-da93acad1688\") " pod="openstack/nova-scheduler-0" Mar 12 21:29:22.905695 master-0 kubenswrapper[31456]: I0312 21:29:22.905634 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4226dc78-2cc7-4c7c-88de-da93acad1688-config-data\") pod \"nova-scheduler-0\" (UID: \"4226dc78-2cc7-4c7c-88de-da93acad1688\") " pod="openstack/nova-scheduler-0" Mar 12 21:29:22.905936 master-0 kubenswrapper[31456]: I0312 21:29:22.905784 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2459\" (UniqueName: \"kubernetes.io/projected/4226dc78-2cc7-4c7c-88de-da93acad1688-kube-api-access-k2459\") pod \"nova-scheduler-0\" (UID: \"4226dc78-2cc7-4c7c-88de-da93acad1688\") " pod="openstack/nova-scheduler-0" Mar 12 21:29:23.016120 master-0 kubenswrapper[31456]: I0312 21:29:23.015978 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2459\" (UniqueName: \"kubernetes.io/projected/4226dc78-2cc7-4c7c-88de-da93acad1688-kube-api-access-k2459\") pod \"nova-scheduler-0\" (UID: \"4226dc78-2cc7-4c7c-88de-da93acad1688\") " 
pod="openstack/nova-scheduler-0" Mar 12 21:29:23.016642 master-0 kubenswrapper[31456]: I0312 21:29:23.016593 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4226dc78-2cc7-4c7c-88de-da93acad1688-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4226dc78-2cc7-4c7c-88de-da93acad1688\") " pod="openstack/nova-scheduler-0" Mar 12 21:29:23.016832 master-0 kubenswrapper[31456]: I0312 21:29:23.016714 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4226dc78-2cc7-4c7c-88de-da93acad1688-config-data\") pod \"nova-scheduler-0\" (UID: \"4226dc78-2cc7-4c7c-88de-da93acad1688\") " pod="openstack/nova-scheduler-0" Mar 12 21:29:23.034532 master-0 kubenswrapper[31456]: I0312 21:29:23.030786 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4226dc78-2cc7-4c7c-88de-da93acad1688-config-data\") pod \"nova-scheduler-0\" (UID: \"4226dc78-2cc7-4c7c-88de-da93acad1688\") " pod="openstack/nova-scheduler-0" Mar 12 21:29:23.036502 master-0 kubenswrapper[31456]: I0312 21:29:23.036431 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4226dc78-2cc7-4c7c-88de-da93acad1688-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4226dc78-2cc7-4c7c-88de-da93acad1688\") " pod="openstack/nova-scheduler-0" Mar 12 21:29:23.037025 master-0 kubenswrapper[31456]: I0312 21:29:23.036975 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2459\" (UniqueName: \"kubernetes.io/projected/4226dc78-2cc7-4c7c-88de-da93acad1688-kube-api-access-k2459\") pod \"nova-scheduler-0\" (UID: \"4226dc78-2cc7-4c7c-88de-da93acad1688\") " pod="openstack/nova-scheduler-0" Mar 12 21:29:23.102432 master-0 kubenswrapper[31456]: I0312 21:29:23.102379 31456 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 21:29:23.188999 master-0 kubenswrapper[31456]: I0312 21:29:23.188634 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76f91a0f-2cf3-4800-bd3a-5d2de1d33b26" path="/var/lib/kubelet/pods/76f91a0f-2cf3-4800-bd3a-5d2de1d33b26/volumes" Mar 12 21:29:23.583678 master-0 kubenswrapper[31456]: W0312 21:29:23.583600 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4226dc78_2cc7_4c7c_88de_da93acad1688.slice/crio-8a4e90c4986f0efca760dfba191e0f7dd74f544aa65662bd9c8ce985a0f1b598 WatchSource:0}: Error finding container 8a4e90c4986f0efca760dfba191e0f7dd74f544aa65662bd9c8ce985a0f1b598: Status 404 returned error can't find the container with id 8a4e90c4986f0efca760dfba191e0f7dd74f544aa65662bd9c8ce985a0f1b598 Mar 12 21:29:23.585507 master-0 kubenswrapper[31456]: I0312 21:29:23.585440 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 21:29:23.675024 master-0 kubenswrapper[31456]: I0312 21:29:23.674955 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4226dc78-2cc7-4c7c-88de-da93acad1688","Type":"ContainerStarted","Data":"8a4e90c4986f0efca760dfba191e0f7dd74f544aa65662bd9c8ce985a0f1b598"} Mar 12 21:29:24.700826 master-0 kubenswrapper[31456]: I0312 21:29:24.699248 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4226dc78-2cc7-4c7c-88de-da93acad1688","Type":"ContainerStarted","Data":"d6648cce6e8a7832632c8c53280d0ea0ebbf388b4815e881c88f8288f5fe4c00"} Mar 12 21:29:24.726945 master-0 kubenswrapper[31456]: I0312 21:29:24.726860 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.726831072 podStartE2EDuration="2.726831072s" 
podCreationTimestamp="2026-03-12 21:29:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:29:24.726584116 +0000 UTC m=+1225.801189444" watchObservedRunningTime="2026-03-12 21:29:24.726831072 +0000 UTC m=+1225.801436410" Mar 12 21:29:26.302877 master-0 kubenswrapper[31456]: I0312 21:29:26.302778 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 12 21:29:26.302877 master-0 kubenswrapper[31456]: I0312 21:29:26.302872 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 12 21:29:28.103456 master-0 kubenswrapper[31456]: I0312 21:29:28.103401 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 12 21:29:28.295354 master-0 kubenswrapper[31456]: I0312 21:29:28.295265 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 12 21:29:28.295354 master-0 kubenswrapper[31456]: I0312 21:29:28.295352 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 12 21:29:29.315158 master-0 kubenswrapper[31456]: I0312 21:29:29.315082 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.128.1.15:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 21:29:29.315708 master-0 kubenswrapper[31456]: I0312 21:29:29.315126 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0ef54fd9-2e96-4d0a-a32b-1ffbd4fa0288" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.128.1.15:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 21:29:31.303245 
master-0 kubenswrapper[31456]: I0312 21:29:31.303046 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 12 21:29:31.304090 master-0 kubenswrapper[31456]: I0312 21:29:31.303743 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 12 21:29:32.324327 master-0 kubenswrapper[31456]: I0312 21:29:32.324201 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0e0223e0-45d4-477e-9410-5b8c41acaf4e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.16:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 21:29:32.325552 master-0 kubenswrapper[31456]: I0312 21:29:32.324229 31456 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0e0223e0-45d4-477e-9410-5b8c41acaf4e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.16:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 21:29:33.103588 master-0 kubenswrapper[31456]: I0312 21:29:33.103508 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 12 21:29:33.142697 master-0 kubenswrapper[31456]: I0312 21:29:33.142652 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 12 21:29:33.907037 master-0 kubenswrapper[31456]: I0312 21:29:33.906891 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 12 21:29:38.307173 master-0 kubenswrapper[31456]: I0312 21:29:38.307060 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 12 21:29:38.308688 master-0 kubenswrapper[31456]: I0312 21:29:38.307858 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/nova-api-0" Mar 12 21:29:38.309570 master-0 kubenswrapper[31456]: I0312 21:29:38.309516 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 12 21:29:38.320629 master-0 kubenswrapper[31456]: I0312 21:29:38.320578 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 12 21:29:38.947058 master-0 kubenswrapper[31456]: I0312 21:29:38.946997 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 12 21:29:38.957099 master-0 kubenswrapper[31456]: I0312 21:29:38.957036 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 12 21:29:41.311286 master-0 kubenswrapper[31456]: I0312 21:29:41.311185 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 12 21:29:41.318085 master-0 kubenswrapper[31456]: I0312 21:29:41.316672 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 12 21:29:41.322979 master-0 kubenswrapper[31456]: I0312 21:29:41.322919 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 12 21:29:42.022462 master-0 kubenswrapper[31456]: I0312 21:29:42.022403 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 12 21:30:08.738530 master-0 kubenswrapper[31456]: I0312 21:30:08.738448 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["sushy-emulator/sushy-emulator-6dd6777c94-ptvsb"] Mar 12 21:30:08.739763 master-0 kubenswrapper[31456]: I0312 21:30:08.738711 31456 kuberuntime_container.go:808] "Killing container with a grace period" pod="sushy-emulator/sushy-emulator-6dd6777c94-ptvsb" podUID="418f109d-c5a7-4311-b90d-4f62478f3aba" containerName="sushy-emulator" 
containerID="cri-o://be6536a60dd6fc876d7d431d08a057cea01e6fa5e3d461d5944b279f6924fceb" gracePeriod=30 Mar 12 21:30:09.511469 master-0 kubenswrapper[31456]: I0312 21:30:09.511342 31456 generic.go:334] "Generic (PLEG): container finished" podID="418f109d-c5a7-4311-b90d-4f62478f3aba" containerID="be6536a60dd6fc876d7d431d08a057cea01e6fa5e3d461d5944b279f6924fceb" exitCode=0 Mar 12 21:30:09.511469 master-0 kubenswrapper[31456]: I0312 21:30:09.511393 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-6dd6777c94-ptvsb" event={"ID":"418f109d-c5a7-4311-b90d-4f62478f3aba","Type":"ContainerDied","Data":"be6536a60dd6fc876d7d431d08a057cea01e6fa5e3d461d5944b279f6924fceb"} Mar 12 21:30:09.645058 master-0 kubenswrapper[31456]: I0312 21:30:09.644452 31456 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-6dd6777c94-ptvsb" Mar 12 21:30:09.757910 master-0 kubenswrapper[31456]: I0312 21:30:09.755142 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/418f109d-c5a7-4311-b90d-4f62478f3aba-os-client-config\") pod \"418f109d-c5a7-4311-b90d-4f62478f3aba\" (UID: \"418f109d-c5a7-4311-b90d-4f62478f3aba\") " Mar 12 21:30:09.757910 master-0 kubenswrapper[31456]: I0312 21:30:09.755358 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bn6jh\" (UniqueName: \"kubernetes.io/projected/418f109d-c5a7-4311-b90d-4f62478f3aba-kube-api-access-bn6jh\") pod \"418f109d-c5a7-4311-b90d-4f62478f3aba\" (UID: \"418f109d-c5a7-4311-b90d-4f62478f3aba\") " Mar 12 21:30:09.757910 master-0 kubenswrapper[31456]: I0312 21:30:09.755528 31456 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/418f109d-c5a7-4311-b90d-4f62478f3aba-sushy-emulator-config\") pod 
\"418f109d-c5a7-4311-b90d-4f62478f3aba\" (UID: \"418f109d-c5a7-4311-b90d-4f62478f3aba\") " Mar 12 21:30:09.792342 master-0 kubenswrapper[31456]: I0312 21:30:09.792218 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/418f109d-c5a7-4311-b90d-4f62478f3aba-os-client-config" (OuterVolumeSpecName: "os-client-config") pod "418f109d-c5a7-4311-b90d-4f62478f3aba" (UID: "418f109d-c5a7-4311-b90d-4f62478f3aba"). InnerVolumeSpecName "os-client-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 21:30:09.793511 master-0 kubenswrapper[31456]: I0312 21:30:09.793128 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/418f109d-c5a7-4311-b90d-4f62478f3aba-kube-api-access-bn6jh" (OuterVolumeSpecName: "kube-api-access-bn6jh") pod "418f109d-c5a7-4311-b90d-4f62478f3aba" (UID: "418f109d-c5a7-4311-b90d-4f62478f3aba"). InnerVolumeSpecName "kube-api-access-bn6jh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 21:30:09.794214 master-0 kubenswrapper[31456]: I0312 21:30:09.794181 31456 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/418f109d-c5a7-4311-b90d-4f62478f3aba-sushy-emulator-config" (OuterVolumeSpecName: "sushy-emulator-config") pod "418f109d-c5a7-4311-b90d-4f62478f3aba" (UID: "418f109d-c5a7-4311-b90d-4f62478f3aba"). InnerVolumeSpecName "sushy-emulator-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 21:30:09.860588 master-0 kubenswrapper[31456]: I0312 21:30:09.858691 31456 reconciler_common.go:293] "Volume detached for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/418f109d-c5a7-4311-b90d-4f62478f3aba-os-client-config\") on node \"master-0\" DevicePath \"\"" Mar 12 21:30:09.860588 master-0 kubenswrapper[31456]: I0312 21:30:09.858740 31456 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bn6jh\" (UniqueName: \"kubernetes.io/projected/418f109d-c5a7-4311-b90d-4f62478f3aba-kube-api-access-bn6jh\") on node \"master-0\" DevicePath \"\"" Mar 12 21:30:09.860588 master-0 kubenswrapper[31456]: I0312 21:30:09.859157 31456 reconciler_common.go:293] "Volume detached for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/418f109d-c5a7-4311-b90d-4f62478f3aba-sushy-emulator-config\") on node \"master-0\" DevicePath \"\"" Mar 12 21:30:09.869905 master-0 kubenswrapper[31456]: I0312 21:30:09.869854 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-6759f57b8c-tbgcw"] Mar 12 21:30:09.870463 master-0 kubenswrapper[31456]: E0312 21:30:09.870434 31456 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="418f109d-c5a7-4311-b90d-4f62478f3aba" containerName="sushy-emulator" Mar 12 21:30:09.870463 master-0 kubenswrapper[31456]: I0312 21:30:09.870455 31456 state_mem.go:107] "Deleted CPUSet assignment" podUID="418f109d-c5a7-4311-b90d-4f62478f3aba" containerName="sushy-emulator" Mar 12 21:30:09.870732 master-0 kubenswrapper[31456]: I0312 21:30:09.870705 31456 memory_manager.go:354] "RemoveStaleState removing state" podUID="418f109d-c5a7-4311-b90d-4f62478f3aba" containerName="sushy-emulator" Mar 12 21:30:09.871510 master-0 kubenswrapper[31456]: I0312 21:30:09.871481 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-6759f57b8c-tbgcw" Mar 12 21:30:09.923269 master-0 kubenswrapper[31456]: I0312 21:30:09.923215 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-6759f57b8c-tbgcw"] Mar 12 21:30:09.961417 master-0 kubenswrapper[31456]: I0312 21:30:09.961323 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/87ae11f6-59b1-4eff-96e9-415e510f6b1c-os-client-config\") pod \"sushy-emulator-6759f57b8c-tbgcw\" (UID: \"87ae11f6-59b1-4eff-96e9-415e510f6b1c\") " pod="sushy-emulator/sushy-emulator-6759f57b8c-tbgcw" Mar 12 21:30:09.961610 master-0 kubenswrapper[31456]: I0312 21:30:09.961432 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb7cg\" (UniqueName: \"kubernetes.io/projected/87ae11f6-59b1-4eff-96e9-415e510f6b1c-kube-api-access-pb7cg\") pod \"sushy-emulator-6759f57b8c-tbgcw\" (UID: \"87ae11f6-59b1-4eff-96e9-415e510f6b1c\") " pod="sushy-emulator/sushy-emulator-6759f57b8c-tbgcw" Mar 12 21:30:09.963936 master-0 kubenswrapper[31456]: I0312 21:30:09.961728 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/87ae11f6-59b1-4eff-96e9-415e510f6b1c-sushy-emulator-config\") pod \"sushy-emulator-6759f57b8c-tbgcw\" (UID: \"87ae11f6-59b1-4eff-96e9-415e510f6b1c\") " pod="sushy-emulator/sushy-emulator-6759f57b8c-tbgcw" Mar 12 21:30:10.064843 master-0 kubenswrapper[31456]: I0312 21:30:10.064745 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/87ae11f6-59b1-4eff-96e9-415e510f6b1c-sushy-emulator-config\") pod \"sushy-emulator-6759f57b8c-tbgcw\" (UID: \"87ae11f6-59b1-4eff-96e9-415e510f6b1c\") " 
pod="sushy-emulator/sushy-emulator-6759f57b8c-tbgcw" Mar 12 21:30:10.065116 master-0 kubenswrapper[31456]: I0312 21:30:10.065061 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/87ae11f6-59b1-4eff-96e9-415e510f6b1c-os-client-config\") pod \"sushy-emulator-6759f57b8c-tbgcw\" (UID: \"87ae11f6-59b1-4eff-96e9-415e510f6b1c\") " pod="sushy-emulator/sushy-emulator-6759f57b8c-tbgcw" Mar 12 21:30:10.065201 master-0 kubenswrapper[31456]: I0312 21:30:10.065120 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb7cg\" (UniqueName: \"kubernetes.io/projected/87ae11f6-59b1-4eff-96e9-415e510f6b1c-kube-api-access-pb7cg\") pod \"sushy-emulator-6759f57b8c-tbgcw\" (UID: \"87ae11f6-59b1-4eff-96e9-415e510f6b1c\") " pod="sushy-emulator/sushy-emulator-6759f57b8c-tbgcw" Mar 12 21:30:10.067923 master-0 kubenswrapper[31456]: I0312 21:30:10.066740 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/87ae11f6-59b1-4eff-96e9-415e510f6b1c-sushy-emulator-config\") pod \"sushy-emulator-6759f57b8c-tbgcw\" (UID: \"87ae11f6-59b1-4eff-96e9-415e510f6b1c\") " pod="sushy-emulator/sushy-emulator-6759f57b8c-tbgcw" Mar 12 21:30:10.070184 master-0 kubenswrapper[31456]: I0312 21:30:10.070039 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/87ae11f6-59b1-4eff-96e9-415e510f6b1c-os-client-config\") pod \"sushy-emulator-6759f57b8c-tbgcw\" (UID: \"87ae11f6-59b1-4eff-96e9-415e510f6b1c\") " pod="sushy-emulator/sushy-emulator-6759f57b8c-tbgcw" Mar 12 21:30:10.095923 master-0 kubenswrapper[31456]: I0312 21:30:10.095851 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb7cg\" (UniqueName: \"kubernetes.io/projected/87ae11f6-59b1-4eff-96e9-415e510f6b1c-kube-api-access-pb7cg\") 
pod \"sushy-emulator-6759f57b8c-tbgcw\" (UID: \"87ae11f6-59b1-4eff-96e9-415e510f6b1c\") " pod="sushy-emulator/sushy-emulator-6759f57b8c-tbgcw" Mar 12 21:30:10.212930 master-0 kubenswrapper[31456]: I0312 21:30:10.210519 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-6759f57b8c-tbgcw" Mar 12 21:30:10.532549 master-0 kubenswrapper[31456]: I0312 21:30:10.532489 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-6dd6777c94-ptvsb" event={"ID":"418f109d-c5a7-4311-b90d-4f62478f3aba","Type":"ContainerDied","Data":"25be3904f6ee43aca877599385df3ba6090d9e495b87521df7edc191b0b00ebf"} Mar 12 21:30:10.532763 master-0 kubenswrapper[31456]: I0312 21:30:10.532558 31456 scope.go:117] "RemoveContainer" containerID="be6536a60dd6fc876d7d431d08a057cea01e6fa5e3d461d5944b279f6924fceb" Mar 12 21:30:10.532763 master-0 kubenswrapper[31456]: I0312 21:30:10.532717 31456 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-6dd6777c94-ptvsb" Mar 12 21:30:10.582849 master-0 kubenswrapper[31456]: I0312 21:30:10.582726 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["sushy-emulator/sushy-emulator-6dd6777c94-ptvsb"] Mar 12 21:30:10.595244 master-0 kubenswrapper[31456]: I0312 21:30:10.595179 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["sushy-emulator/sushy-emulator-6dd6777c94-ptvsb"] Mar 12 21:30:10.912379 master-0 kubenswrapper[31456]: I0312 21:30:10.912313 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-6759f57b8c-tbgcw"] Mar 12 21:30:10.915333 master-0 kubenswrapper[31456]: W0312 21:30:10.915214 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87ae11f6_59b1_4eff_96e9_415e510f6b1c.slice/crio-c962813b649495d23d86459866f9816190d5a247855b95403f098a59ff994034 WatchSource:0}: Error finding container c962813b649495d23d86459866f9816190d5a247855b95403f098a59ff994034: Status 404 returned error can't find the container with id c962813b649495d23d86459866f9816190d5a247855b95403f098a59ff994034 Mar 12 21:30:11.185033 master-0 kubenswrapper[31456]: I0312 21:30:11.184980 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="418f109d-c5a7-4311-b90d-4f62478f3aba" path="/var/lib/kubelet/pods/418f109d-c5a7-4311-b90d-4f62478f3aba/volumes" Mar 12 21:30:11.551916 master-0 kubenswrapper[31456]: I0312 21:30:11.550223 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-6759f57b8c-tbgcw" event={"ID":"87ae11f6-59b1-4eff-96e9-415e510f6b1c","Type":"ContainerStarted","Data":"ab0e3d9ba619a43a4a3c235ebfd3ea46771c0494fb96b9dadcc02566a66e217f"} Mar 12 21:30:11.551916 master-0 kubenswrapper[31456]: I0312 21:30:11.550292 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-6759f57b8c-tbgcw" 
event={"ID":"87ae11f6-59b1-4eff-96e9-415e510f6b1c","Type":"ContainerStarted","Data":"c962813b649495d23d86459866f9816190d5a247855b95403f098a59ff994034"} Mar 12 21:30:11.627560 master-0 kubenswrapper[31456]: I0312 21:30:11.627438 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/sushy-emulator-6759f57b8c-tbgcw" podStartSLOduration=2.627410465 podStartE2EDuration="2.627410465s" podCreationTimestamp="2026-03-12 21:30:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 21:30:11.609963123 +0000 UTC m=+1272.684568451" watchObservedRunningTime="2026-03-12 21:30:11.627410465 +0000 UTC m=+1272.702015803" Mar 12 21:30:20.211185 master-0 kubenswrapper[31456]: I0312 21:30:20.211068 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-6759f57b8c-tbgcw" Mar 12 21:30:20.212876 master-0 kubenswrapper[31456]: I0312 21:30:20.212788 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-6759f57b8c-tbgcw" Mar 12 21:30:20.228471 master-0 kubenswrapper[31456]: I0312 21:30:20.228290 31456 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-6759f57b8c-tbgcw" Mar 12 21:30:20.695350 master-0 kubenswrapper[31456]: I0312 21:30:20.695274 31456 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-6759f57b8c-tbgcw" Mar 12 21:32:01.910655 master-0 kubenswrapper[31456]: I0312 21:32:01.910347 31456 scope.go:117] "RemoveContainer" containerID="a35ebfcc2709827b1180ef73b6afd5b353b8cdf853d06cdb7a17e961e08a7eac" Mar 12 21:32:01.950104 master-0 kubenswrapper[31456]: I0312 21:32:01.950044 31456 scope.go:117] "RemoveContainer" containerID="43908fb2f48712b220851bfeca566a58603e81c2cc16fc84de8b762f83d42080" Mar 12 21:32:01.991510 master-0 kubenswrapper[31456]: 
I0312 21:32:01.991457 31456 scope.go:117] "RemoveContainer" containerID="ab9ab84e7a4d103c0a683da471112cc713dcba501122eb13e2ab4f9d139682af" Mar 12 21:32:02.040328 master-0 kubenswrapper[31456]: I0312 21:32:02.037298 31456 scope.go:117] "RemoveContainer" containerID="88107639b34c604dbd609853ad95e79e0392a97cc72fc2d2498d7c90bc383d59" Mar 12 21:32:33.232023 master-0 kubenswrapper[31456]: E0312 21:32:33.231943 31456 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.32.10:58454->192.168.32.10:43049: write tcp 192.168.32.10:58454->192.168.32.10:43049: write: broken pipe Mar 12 21:33:02.216282 master-0 kubenswrapper[31456]: I0312 21:33:02.216154 31456 scope.go:117] "RemoveContainer" containerID="c57024656546ec8e36c2613e9b153874dade0ea43e1d084b92484464205d1a1b" Mar 12 21:33:02.270527 master-0 kubenswrapper[31456]: I0312 21:33:02.270165 31456 scope.go:117] "RemoveContainer" containerID="99e1a7f7eb742af34c9dc5d5601c8e98d7b3792e2ab3e49ce401e0f211575ebe" Mar 12 21:33:02.308366 master-0 kubenswrapper[31456]: I0312 21:33:02.308282 31456 scope.go:117] "RemoveContainer" containerID="c34c2d3d85dad067e1714f21d40fd44ec510d8b1b3f2f078818dc94b8ef898b1" Mar 12 21:33:02.339634 master-0 kubenswrapper[31456]: I0312 21:33:02.339501 31456 scope.go:117] "RemoveContainer" containerID="403b90ec5dfcef764d4a83fbf5130171248f5d90498d607dce29da843ad25993" Mar 12 21:33:38.033895 master-0 kubenswrapper[31456]: I0312 21:33:38.033146 31456 trace.go:236] Trace[565395744]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-cell1-server-0" (12-Mar-2026 21:33:35.690) (total time: 2341ms): Mar 12 21:33:38.033895 master-0 kubenswrapper[31456]: Trace[565395744]: [2.341973932s] [2.341973932s] END Mar 12 21:33:46.339273 master-0 kubenswrapper[31456]: E0312 21:33:46.339111 31456 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.32.10:51012->192.168.32.10:43049: write tcp 192.168.32.10:51012->192.168.32.10:43049: write: 
connection reset by peer Mar 12 21:34:02.476504 master-0 kubenswrapper[31456]: I0312 21:34:02.476413 31456 scope.go:117] "RemoveContainer" containerID="53adb381adb1795edc66ca13f74de477894b5f316ade505f65ef47c5308c197a" Mar 12 21:35:12.149062 master-0 kubenswrapper[31456]: I0312 21:35:12.148609 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-98d2-account-create-update-9vmzj"] Mar 12 21:35:12.172033 master-0 kubenswrapper[31456]: I0312 21:35:12.171958 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-8xlhq"] Mar 12 21:35:12.187072 master-0 kubenswrapper[31456]: I0312 21:35:12.186986 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-8xlhq"] Mar 12 21:35:12.200587 master-0 kubenswrapper[31456]: I0312 21:35:12.200497 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-98d2-account-create-update-9vmzj"] Mar 12 21:35:13.086139 master-0 kubenswrapper[31456]: I0312 21:35:13.078958 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-2da3-account-create-update-kpcrn"] Mar 12 21:35:13.118477 master-0 kubenswrapper[31456]: I0312 21:35:13.112939 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-2da3-account-create-update-kpcrn"] Mar 12 21:35:13.182979 master-0 kubenswrapper[31456]: I0312 21:35:13.182919 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="345e92ee-81d9-4de3-9515-f901d1a3d153" path="/var/lib/kubelet/pods/345e92ee-81d9-4de3-9515-f901d1a3d153/volumes" Mar 12 21:35:13.184507 master-0 kubenswrapper[31456]: I0312 21:35:13.184477 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f1d0bf8-4671-47dd-8f37-0c8b9136fdac" path="/var/lib/kubelet/pods/5f1d0bf8-4671-47dd-8f37-0c8b9136fdac/volumes" Mar 12 21:35:13.186525 master-0 kubenswrapper[31456]: I0312 21:35:13.186494 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="622a9f92-1155-4b36-899c-965b404e7137" path="/var/lib/kubelet/pods/622a9f92-1155-4b36-899c-965b404e7137/volumes" Mar 12 21:35:14.062430 master-0 kubenswrapper[31456]: I0312 21:35:14.062164 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-74dr9"] Mar 12 21:35:14.078771 master-0 kubenswrapper[31456]: I0312 21:35:14.078674 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6a5a-account-create-update-4w5hn"] Mar 12 21:35:14.093932 master-0 kubenswrapper[31456]: I0312 21:35:14.093875 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-74dr9"] Mar 12 21:35:14.109113 master-0 kubenswrapper[31456]: I0312 21:35:14.109033 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6a5a-account-create-update-4w5hn"] Mar 12 21:35:14.122963 master-0 kubenswrapper[31456]: I0312 21:35:14.122871 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-lp9x4"] Mar 12 21:35:14.139557 master-0 kubenswrapper[31456]: I0312 21:35:14.139497 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-lp9x4"] Mar 12 21:35:15.194105 master-0 kubenswrapper[31456]: I0312 21:35:15.194020 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3690da76-6dfc-4f32-bb7f-8fb37175b867" path="/var/lib/kubelet/pods/3690da76-6dfc-4f32-bb7f-8fb37175b867/volumes" Mar 12 21:35:15.195763 master-0 kubenswrapper[31456]: I0312 21:35:15.195716 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d573798d-d096-47f4-96c7-8b7583a447d9" path="/var/lib/kubelet/pods/d573798d-d096-47f4-96c7-8b7583a447d9/volumes" Mar 12 21:35:15.196657 master-0 kubenswrapper[31456]: I0312 21:35:15.196604 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5327b01-7167-4072-967c-ea43996b1126" path="/var/lib/kubelet/pods/e5327b01-7167-4072-967c-ea43996b1126/volumes" Mar 12 
21:35:19.035445 master-0 kubenswrapper[31456]: I0312 21:35:19.035353 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-hmlwd"] Mar 12 21:35:19.069351 master-0 kubenswrapper[31456]: I0312 21:35:19.069270 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-hmlwd"] Mar 12 21:35:19.184909 master-0 kubenswrapper[31456]: I0312 21:35:19.184728 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90f78702-fbdb-480e-b0bc-88f60ea0e980" path="/var/lib/kubelet/pods/90f78702-fbdb-480e-b0bc-88f60ea0e980/volumes" Mar 12 21:35:42.100870 master-0 kubenswrapper[31456]: I0312 21:35:42.100783 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8df6-account-create-update-cmvwn"] Mar 12 21:35:42.117231 master-0 kubenswrapper[31456]: I0312 21:35:42.117147 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-23e5-account-create-update-qlhcj"] Mar 12 21:35:42.128530 master-0 kubenswrapper[31456]: I0312 21:35:42.128446 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-pjn56"] Mar 12 21:35:42.139376 master-0 kubenswrapper[31456]: I0312 21:35:42.139305 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-ssg44"] Mar 12 21:35:42.148836 master-0 kubenswrapper[31456]: I0312 21:35:42.148756 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-23e5-account-create-update-qlhcj"] Mar 12 21:35:42.159420 master-0 kubenswrapper[31456]: I0312 21:35:42.159357 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-pjn56"] Mar 12 21:35:42.177035 master-0 kubenswrapper[31456]: I0312 21:35:42.176957 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8df6-account-create-update-cmvwn"] Mar 12 21:35:42.503753 master-0 kubenswrapper[31456]: I0312 21:35:42.198299 31456 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-ssg44"] Mar 12 21:35:43.218546 master-0 kubenswrapper[31456]: I0312 21:35:43.218487 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c05e4bb-1dfc-47d7-b9f0-0c2fc22c8b30" path="/var/lib/kubelet/pods/0c05e4bb-1dfc-47d7-b9f0-0c2fc22c8b30/volumes" Mar 12 21:35:43.221553 master-0 kubenswrapper[31456]: I0312 21:35:43.221525 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f5b7eb2-f871-440e-889f-dd23a4a1e8ed" path="/var/lib/kubelet/pods/2f5b7eb2-f871-440e-889f-dd23a4a1e8ed/volumes" Mar 12 21:35:43.222709 master-0 kubenswrapper[31456]: I0312 21:35:43.222684 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c813ae4-0bfc-4a61-b602-9ce03baad036" path="/var/lib/kubelet/pods/6c813ae4-0bfc-4a61-b602-9ce03baad036/volumes" Mar 12 21:35:43.223668 master-0 kubenswrapper[31456]: I0312 21:35:43.223643 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a01f2e87-21e3-433f-a65d-d6f66e6dd1f9" path="/var/lib/kubelet/pods/a01f2e87-21e3-433f-a65d-d6f66e6dd1f9/volumes" Mar 12 21:35:48.071982 master-0 kubenswrapper[31456]: I0312 21:35:48.070863 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-fthjz"] Mar 12 21:35:48.086836 master-0 kubenswrapper[31456]: I0312 21:35:48.081875 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-fthjz"] Mar 12 21:35:49.183968 master-0 kubenswrapper[31456]: I0312 21:35:49.183872 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb0472a9-9d25-4efe-9032-c8afdc106678" path="/var/lib/kubelet/pods/eb0472a9-9d25-4efe-9032-c8afdc106678/volumes" Mar 12 21:35:50.043181 master-0 kubenswrapper[31456]: I0312 21:35:50.042935 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-qsh5p"] Mar 12 21:35:50.056463 master-0 kubenswrapper[31456]: I0312 21:35:50.056391 31456 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-qsh5p"] Mar 12 21:35:51.196964 master-0 kubenswrapper[31456]: I0312 21:35:51.196916 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b67fa12-637c-4880-b717-d46e768d3112" path="/var/lib/kubelet/pods/6b67fa12-637c-4880-b717-d46e768d3112/volumes" Mar 12 21:35:56.054185 master-0 kubenswrapper[31456]: I0312 21:35:56.054068 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-db-create-tbph7"] Mar 12 21:35:56.070213 master-0 kubenswrapper[31456]: I0312 21:35:56.070107 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-db-create-tbph7"] Mar 12 21:35:57.081829 master-0 kubenswrapper[31456]: I0312 21:35:57.078071 31456 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-31cc-account-create-update-pzkcd"] Mar 12 21:35:57.097864 master-0 kubenswrapper[31456]: I0312 21:35:57.097787 31456 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-31cc-account-create-update-pzkcd"] Mar 12 21:35:57.187007 master-0 kubenswrapper[31456]: I0312 21:35:57.186945 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c569c591-2b26-40b5-b7d0-139ad6d98ea3" path="/var/lib/kubelet/pods/c569c591-2b26-40b5-b7d0-139ad6d98ea3/volumes" Mar 12 21:35:57.189125 master-0 kubenswrapper[31456]: I0312 21:35:57.189091 31456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd24a59e-fd16-4b56-acb2-3129dab7977a" path="/var/lib/kubelet/pods/dd24a59e-fd16-4b56-acb2-3129dab7977a/volumes" Mar 12 21:36:00.575835 master-0 kubenswrapper[31456]: I0312 21:36:00.575726 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dvqh8/must-gather-68x29"] Mar 12 21:36:00.581734 master-0 kubenswrapper[31456]: I0312 21:36:00.581104 31456 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dvqh8/must-gather-68x29" Mar 12 21:36:00.592070 master-0 kubenswrapper[31456]: I0312 21:36:00.592029 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-dvqh8"/"kube-root-ca.crt" Mar 12 21:36:00.592253 master-0 kubenswrapper[31456]: I0312 21:36:00.592243 31456 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-dvqh8"/"openshift-service-ca.crt" Mar 12 21:36:00.609544 master-0 kubenswrapper[31456]: I0312 21:36:00.607717 31456 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dvqh8/must-gather-k8nx4"] Mar 12 21:36:00.610715 master-0 kubenswrapper[31456]: I0312 21:36:00.610682 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dvqh8/must-gather-k8nx4" Mar 12 21:36:00.645914 master-0 kubenswrapper[31456]: I0312 21:36:00.643817 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-dvqh8/must-gather-68x29"] Mar 12 21:36:00.671885 master-0 kubenswrapper[31456]: I0312 21:36:00.664108 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-dvqh8/must-gather-k8nx4"] Mar 12 21:36:00.756912 master-0 kubenswrapper[31456]: I0312 21:36:00.755557 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjkzz\" (UniqueName: \"kubernetes.io/projected/30306260-ddac-41d7-af0d-2e25a68aabba-kube-api-access-hjkzz\") pod \"must-gather-68x29\" (UID: \"30306260-ddac-41d7-af0d-2e25a68aabba\") " pod="openshift-must-gather-dvqh8/must-gather-68x29" Mar 12 21:36:00.756912 master-0 kubenswrapper[31456]: I0312 21:36:00.755625 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1d1a9f0a-083f-4797-8e9e-fcc2fed383b6-must-gather-output\") pod \"must-gather-k8nx4\" (UID: 
\"1d1a9f0a-083f-4797-8e9e-fcc2fed383b6\") " pod="openshift-must-gather-dvqh8/must-gather-k8nx4" Mar 12 21:36:00.756912 master-0 kubenswrapper[31456]: I0312 21:36:00.755741 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwzgd\" (UniqueName: \"kubernetes.io/projected/1d1a9f0a-083f-4797-8e9e-fcc2fed383b6-kube-api-access-kwzgd\") pod \"must-gather-k8nx4\" (UID: \"1d1a9f0a-083f-4797-8e9e-fcc2fed383b6\") " pod="openshift-must-gather-dvqh8/must-gather-k8nx4" Mar 12 21:36:00.756912 master-0 kubenswrapper[31456]: I0312 21:36:00.755855 31456 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/30306260-ddac-41d7-af0d-2e25a68aabba-must-gather-output\") pod \"must-gather-68x29\" (UID: \"30306260-ddac-41d7-af0d-2e25a68aabba\") " pod="openshift-must-gather-dvqh8/must-gather-68x29" Mar 12 21:36:00.858044 master-0 kubenswrapper[31456]: I0312 21:36:00.857875 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjkzz\" (UniqueName: \"kubernetes.io/projected/30306260-ddac-41d7-af0d-2e25a68aabba-kube-api-access-hjkzz\") pod \"must-gather-68x29\" (UID: \"30306260-ddac-41d7-af0d-2e25a68aabba\") " pod="openshift-must-gather-dvqh8/must-gather-68x29" Mar 12 21:36:00.858044 master-0 kubenswrapper[31456]: I0312 21:36:00.857929 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1d1a9f0a-083f-4797-8e9e-fcc2fed383b6-must-gather-output\") pod \"must-gather-k8nx4\" (UID: \"1d1a9f0a-083f-4797-8e9e-fcc2fed383b6\") " pod="openshift-must-gather-dvqh8/must-gather-k8nx4" Mar 12 21:36:00.858044 master-0 kubenswrapper[31456]: I0312 21:36:00.857985 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwzgd\" (UniqueName: 
\"kubernetes.io/projected/1d1a9f0a-083f-4797-8e9e-fcc2fed383b6-kube-api-access-kwzgd\") pod \"must-gather-k8nx4\" (UID: \"1d1a9f0a-083f-4797-8e9e-fcc2fed383b6\") " pod="openshift-must-gather-dvqh8/must-gather-k8nx4" Mar 12 21:36:00.858044 master-0 kubenswrapper[31456]: I0312 21:36:00.858004 31456 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/30306260-ddac-41d7-af0d-2e25a68aabba-must-gather-output\") pod \"must-gather-68x29\" (UID: \"30306260-ddac-41d7-af0d-2e25a68aabba\") " pod="openshift-must-gather-dvqh8/must-gather-68x29" Mar 12 21:36:00.858589 master-0 kubenswrapper[31456]: I0312 21:36:00.858554 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/30306260-ddac-41d7-af0d-2e25a68aabba-must-gather-output\") pod \"must-gather-68x29\" (UID: \"30306260-ddac-41d7-af0d-2e25a68aabba\") " pod="openshift-must-gather-dvqh8/must-gather-68x29" Mar 12 21:36:00.859164 master-0 kubenswrapper[31456]: I0312 21:36:00.859133 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1d1a9f0a-083f-4797-8e9e-fcc2fed383b6-must-gather-output\") pod \"must-gather-k8nx4\" (UID: \"1d1a9f0a-083f-4797-8e9e-fcc2fed383b6\") " pod="openshift-must-gather-dvqh8/must-gather-k8nx4" Mar 12 21:36:00.875644 master-0 kubenswrapper[31456]: I0312 21:36:00.875588 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjkzz\" (UniqueName: \"kubernetes.io/projected/30306260-ddac-41d7-af0d-2e25a68aabba-kube-api-access-hjkzz\") pod \"must-gather-68x29\" (UID: \"30306260-ddac-41d7-af0d-2e25a68aabba\") " pod="openshift-must-gather-dvqh8/must-gather-68x29" Mar 12 21:36:00.876580 master-0 kubenswrapper[31456]: I0312 21:36:00.876539 31456 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kwzgd\" (UniqueName: \"kubernetes.io/projected/1d1a9f0a-083f-4797-8e9e-fcc2fed383b6-kube-api-access-kwzgd\") pod \"must-gather-k8nx4\" (UID: \"1d1a9f0a-083f-4797-8e9e-fcc2fed383b6\") " pod="openshift-must-gather-dvqh8/must-gather-k8nx4" Mar 12 21:36:00.908040 master-0 kubenswrapper[31456]: I0312 21:36:00.907981 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dvqh8/must-gather-68x29" Mar 12 21:36:00.972785 master-0 kubenswrapper[31456]: I0312 21:36:00.961837 31456 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dvqh8/must-gather-k8nx4" Mar 12 21:36:01.490587 master-0 kubenswrapper[31456]: W0312 21:36:01.490502 31456 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30306260_ddac_41d7_af0d_2e25a68aabba.slice/crio-5a69adb72bda57ec0712390265e5f8e6fcf1317cb183c8fbbb980b950a09ecb0 WatchSource:0}: Error finding container 5a69adb72bda57ec0712390265e5f8e6fcf1317cb183c8fbbb980b950a09ecb0: Status 404 returned error can't find the container with id 5a69adb72bda57ec0712390265e5f8e6fcf1317cb183c8fbbb980b950a09ecb0 Mar 12 21:36:01.493033 master-0 kubenswrapper[31456]: I0312 21:36:01.492984 31456 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 21:36:01.496758 master-0 kubenswrapper[31456]: I0312 21:36:01.496703 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-dvqh8/must-gather-68x29"] Mar 12 21:36:01.525531 master-0 kubenswrapper[31456]: I0312 21:36:01.525453 31456 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-dvqh8/must-gather-k8nx4"] Mar 12 21:36:01.532478 master-0 kubenswrapper[31456]: W0312 21:36:01.532399 31456 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d1a9f0a_083f_4797_8e9e_fcc2fed383b6.slice/crio-c72dba1e47712d9db55d05dd0478192a7ca2b256abcc7f96a3ce8a5f5e1628ca WatchSource:0}: Error finding container c72dba1e47712d9db55d05dd0478192a7ca2b256abcc7f96a3ce8a5f5e1628ca: Status 404 returned error can't find the container with id c72dba1e47712d9db55d05dd0478192a7ca2b256abcc7f96a3ce8a5f5e1628ca Mar 12 21:36:01.534105 master-0 kubenswrapper[31456]: I0312 21:36:01.534068 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dvqh8/must-gather-68x29" event={"ID":"30306260-ddac-41d7-af0d-2e25a68aabba","Type":"ContainerStarted","Data":"5a69adb72bda57ec0712390265e5f8e6fcf1317cb183c8fbbb980b950a09ecb0"} Mar 12 21:36:02.549622 master-0 kubenswrapper[31456]: I0312 21:36:02.549551 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dvqh8/must-gather-k8nx4" event={"ID":"1d1a9f0a-083f-4797-8e9e-fcc2fed383b6","Type":"ContainerStarted","Data":"c72dba1e47712d9db55d05dd0478192a7ca2b256abcc7f96a3ce8a5f5e1628ca"} Mar 12 21:36:02.605024 master-0 kubenswrapper[31456]: I0312 21:36:02.604401 31456 scope.go:117] "RemoveContainer" containerID="368468d679847a729afcf36bc52d6c60a0d0d285bc39d3167abddab4b80592d6" Mar 12 21:36:02.768197 master-0 kubenswrapper[31456]: I0312 21:36:02.767989 31456 scope.go:117] "RemoveContainer" containerID="14fff21a7d9dbf4a5984193139ace2fbeb2728de03ec2e9be2187e3c08ed0cf5" Mar 12 21:36:02.827463 master-0 kubenswrapper[31456]: I0312 21:36:02.827414 31456 scope.go:117] "RemoveContainer" containerID="0fd2349cbdd4661e3a761e69ecf1f97bc6949b388c5278129803d980b30d0aaf" Mar 12 21:36:02.874461 master-0 kubenswrapper[31456]: I0312 21:36:02.874065 31456 scope.go:117] "RemoveContainer" containerID="2a1d29e625a455a849f5f44af2128ef48040409183e58affc5f561b04d932fbe" Mar 12 21:36:02.933203 master-0 kubenswrapper[31456]: I0312 21:36:02.933174 31456 scope.go:117] "RemoveContainer" 
containerID="3cb8519dfb833b88250e694e34022a9d89b58497447e2f2b8b5af44503d2211d" Mar 12 21:36:02.980777 master-0 kubenswrapper[31456]: I0312 21:36:02.980600 31456 scope.go:117] "RemoveContainer" containerID="fa0f1c5c5a003d8e76d2299441db75e0fb7c3826893c3b310ec3fd7a7d0b6c58" Mar 12 21:36:03.088370 master-0 kubenswrapper[31456]: I0312 21:36:03.088311 31456 scope.go:117] "RemoveContainer" containerID="27bd3bfeda1473c6bb7069c8ab315a11b123a571a43726fa26ac4fc1249375d1" Mar 12 21:36:03.115143 master-0 kubenswrapper[31456]: I0312 21:36:03.114478 31456 scope.go:117] "RemoveContainer" containerID="62f6b60066e2a983f4f53dd62f58c0e0b3609fcdcd8b19fd681c89d45293f605" Mar 12 21:36:03.142718 master-0 kubenswrapper[31456]: I0312 21:36:03.142649 31456 scope.go:117] "RemoveContainer" containerID="4bcbd62e729b9826a2f3cab447b9ce5bd8f4cd03d061634e742175bfe5cd8361" Mar 12 21:36:03.177391 master-0 kubenswrapper[31456]: I0312 21:36:03.177337 31456 scope.go:117] "RemoveContainer" containerID="636fa020d32ac292f4db5f9c08359c4143f5d1347d4c61e2b448491ab3aabc57" Mar 12 21:36:03.205576 master-0 kubenswrapper[31456]: I0312 21:36:03.205192 31456 scope.go:117] "RemoveContainer" containerID="eba87c32798ea27e11c0f3cf772e678c9622bf0d7873bd044359cc9c807ec6d8" Mar 12 21:36:03.243889 master-0 kubenswrapper[31456]: I0312 21:36:03.243768 31456 scope.go:117] "RemoveContainer" containerID="436771826eb4c47061b96fe6ffe53f5f6aff148cb6dd111eeac742d88f7330d0" Mar 12 21:36:03.278930 master-0 kubenswrapper[31456]: I0312 21:36:03.278870 31456 scope.go:117] "RemoveContainer" containerID="f0f75010363ea1d1b63b0c48cf5b36b2d580f290ca8eb13143657336358bc9b9" Mar 12 21:36:03.318629 master-0 kubenswrapper[31456]: I0312 21:36:03.318583 31456 scope.go:117] "RemoveContainer" containerID="0a4625afa4a66eefb02168cff5c642c57587b055adc60d29d0140dde0ef67a31" Mar 12 21:36:03.348508 master-0 kubenswrapper[31456]: I0312 21:36:03.348472 31456 scope.go:117] "RemoveContainer" 
containerID="d09033cace82f619daa829511df84f7c468ae7702a5f6ce5677bb8ec138049a9" Mar 12 21:36:03.574342 master-0 kubenswrapper[31456]: I0312 21:36:03.574283 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dvqh8/must-gather-68x29" event={"ID":"30306260-ddac-41d7-af0d-2e25a68aabba","Type":"ContainerStarted","Data":"25e31ef87144cd40119acccd3aeb0104a63cb222349e3c560420146cdf566d4f"} Mar 12 21:36:04.621077 master-0 kubenswrapper[31456]: I0312 21:36:04.620983 31456 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dvqh8/must-gather-68x29" event={"ID":"30306260-ddac-41d7-af0d-2e25a68aabba","Type":"ContainerStarted","Data":"ecf29fa33cd02cd50d94e47633661e5f2a8473c770f53c399994b3d1befdb0c9"} Mar 12 21:36:04.655877 master-0 kubenswrapper[31456]: I0312 21:36:04.655779 31456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-dvqh8/must-gather-68x29" podStartSLOduration=3.321018558 podStartE2EDuration="4.655758027s" podCreationTimestamp="2026-03-12 21:36:00 +0000 UTC" firstStartedPulling="2026-03-12 21:36:01.492886416 +0000 UTC m=+1622.567491744" lastFinishedPulling="2026-03-12 21:36:02.827625895 +0000 UTC m=+1623.902231213" observedRunningTime="2026-03-12 21:36:04.644438752 +0000 UTC m=+1625.719044080" watchObservedRunningTime="2026-03-12 21:36:04.655758027 +0000 UTC m=+1625.730363355" Mar 12 21:36:07.539320 master-0 kubenswrapper[31456]: I0312 21:36:07.534881 31456 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-8c9c967c7-g4bkd_83368183-0368-44b1-9387-eed32b211988/cluster-version-operator/0.log"